We use cookies on our website. To learn more about cookies, how we use them on our site and how to change your cookie settings please view our Cookie Statement.
By continuing to use this site without changing your settings you consent to our use of cookies in accordance with our Cookie Statement.
May 2024
Insights/Supply Chain Considerations for Companies Deploying AI

Supply Chain Considerations for Companies Deploying AI

By Brooke Berg & Nathan Staffel

Some of you may remember the old public service announcement: “It’s 10 p.m. Do you know where your children are?” It’s gained some popularity recently, if only because modern technology and parenting concepts make this question seem antiquated.

We can argue about whether parents in the ’70s and ’80s needed to be reminded to check on their children. But for most businesses and industries, there is enormous pressure to implement artificial intelligence, and very little understanding of the complexities of deploying it, particularly when working with sensitive data. Here is a PSA for our times: “It’s 10 p.m. Do you know where your data has gone?”

The answer may lie in your supply chain, highlighting the importance of robust supply chain analysis.

 

The Risks of Deploying AI

Many businesses will need to embrace AI in order to survive. And yet many businesses will risk failure by embracing AI without fully understanding the risks.

For example, sensitive data can be exposed when an unwitting employee enters prompts into a generative AI system like ChatGPT that can reveal sensitive information about operations, clients and more.

Some AI models are vulnerable to prompt injection attacks where a threat actor uses prompts designed to make the model ignore its original instructions. Given the right series of prompts, even trusted models will disclose sensitive data. In March, HiddenLayer researchers found Google’s Gemini large language model has “multiple prompt hacking vulnerabilities.” Gemini, which is part of a common suite of office tools, can compromise other AI systems.

Companies also need to be concerned that AI will lead them, or their customers, astray. Sometimes this results from the continuous learning the algorithms must do, but it can also
be a result of data poisoning — introducing false information into a training data set to make the algorithm perform in ways that are counter to the intended result.

Data leakage is always a risk when introducing novel software into a system, but AI has amplified the concerns and the related legal risks for companies.

 

Tracking Your Supply Chain: Five Steps

Given these risks, the value of AI supply chain analysis cannot be overstated for companies deploying AI processes. A clear map of where data comes from, how it is used and where it
is stored can empower businesses to make informed decisions that align with their ethical standards and compliance requirements.

Almost all businesses — those that rely on sensitive data; those working with personal identifiable information; those whose intellectual property is a target for foreign adversaries; and those looking to protect customer data from unintentional leakage using AI tools — should employ a standardized and thorough process to investigate the supply chain behind the AI tools they employ. They need to know where their data may go.

 

1. AI Asset Inventory

For those who have already integrated AI tools, the AI asset inventory enables the organization to understand and manage its AI resources. It aims to catalog all AI tools and services used, along with their specific functions and data usage patterns. It involves the following steps:

  • Identification: List all AI tools, platforms and systems in use, both those developed in-house and those developed by third parties.
  • Classification: Categorize these tools based on their function and the type of AI technologies they use.
  • Documentation: Document the data that each tool accesses, processes and stores. Include details on the format and sensitivity of the data, as well as any data transformation that occurs within the tool.

 

2. Data Flow Mapping

Visualize the journey of data through AI models, highlighting internal and external processing points. Attempt to trace data from collection through predictive analytics tools into third-party architecture. Also identify data storage practices at third-party vendors, as the risks are otherwise unknowable. Follow these steps to map data flow:

  • Mapping start points: Identify and document where data enters the AI systems (e.g., data collection points, data imports).
  • Tracking through systems: Trace the flow of data through various processing stages, including transformations by AI models and interactions with third-party systems.
  • Mapping end points: Note where data exits the systems, including outputs to users, data exports, and transfers to other applications.

 

3. Compliance Audit

Review vendors’ locations, privacy policies and adherence to regulatory mandates like the EU General Data Protection Regulation or the California Consumer Privacy Act.

Many AI regulations and policies are only under consideration at this point: In February, a U.S. government-commissioned report recommended sweeping regulatory changes that Time magazine assessed would “radically disrupt the AI industry.” As a risk assessment factor, businesses have to stay at the forefront of AI regulations and policies.

  • Regulatory framework identification: Determine which laws and regulations are applicable based on operational geographies and industries.
  • Policy review: Examine the privacy policies and data handling practices of AI systems against these regulations.
  • Gap analysis: Identify discrepancies between current practices and regulatory requirements. Prioritize based on risk.

 

4. Vendor Security Review

Evaluate the security measures, data handling and incident response capabilities of all AI vendors. Our experience has shown us that intense competition often results in startups pushing novel AI tools without a concomitant regard for security.

  • Security standards assessment: Evaluate security measures such as encryption, access controls and auditing capabilities.
  • Incident response review: Assess the capability to detect, respond to and recover from security incidents.
  • Third-party certifications: Check for industry-standard security certifications (e.g., ISO 27001, SOC 2).

 

5. Contractual Compliance Check

Confirm that AI vendors meet data privacy obligations, and that those obligations — as well as the steps to be taken to remediate violations — are outlined in the contract. One
common deficiency in contracts we have seen is the failure to specify when customers have to be notified of a potential breach.

  • Contract review and analysis: Ensure data privacy and security obligations are delineated. Ensure the breach notification process includes defined timelines and notification methods.
  • Gap identification: Identify and address any gaps or ambiguities that could lead to disputes or inadequate compliance responses.
  • Contract amendment: Amend contracts to rectify deficiencies, ensuring all compliance obligations and procedures are explicitly defined.

 

Conclusion

By thoughtfully weaving together asset inventory, data flow analysis, compliance checks, and vendor assessments, organizations can effectively unlock the benefits of AI technologies while managing legal and ethical risks. AI tools offer the possibility of a true tech revolution, in which businesses and customers will be able to separate the signal from the noise and make use of information from multiple sources. But the revolution will happen only if the businesses deploying these tools (1) implement AI in a way that ensures they are getting accurate results and (2) know where their data is going and how it is being used.

[1]https://www.oreilly.com/pub/pr/3435.
[2]https://research.ibm.com/blog/what-are-foundationmodels; https://crfm.stanford.edu/assets/report.pdf.
[3]https://www.pewresearch.org/science/2023/02/15/public-awareness-of-artificialintelligence-in-everyday-activities.
[4]https://tracxn.com/d/sectors/artificialintelligence/__cbMnXfS2GfFo4Vi2dxZyUy7l4O8WyzVYLseb9keW5cI/companies; https://www.cio.com/article/1257377/weighing-risk-and-reward-with-gen-ai-vendor-selection.html

Brooke Berg is a Director at Nardello & Co., the international investigations firm, and a former Central Intelligence Agency Operations Officer with more than a decade and a half of national security experience. Specializing in natural language processing, she has worked for multiple AI startups and on deployments in the world’s most challenging environments with the world’s most sensitive datasets.

Nathan Staffel is a former Federal Bureau of Investigation Special Agent with more than a decade of experience delivering mission-critical emerging technology. Nathan worked for four years as a consultant and manager developing and testing complex AI systems for large enterprises in heavily regulated industries.

 

*Originally published in Law360

OUR FIRM

We've got you covered

EXPERIENCE

We get to the truth before it's too late

CONTACT US

Why risk it?