AI is rapidly transforming business operations across sectors. From healthcare diagnostics to financial services, AI platforms are becoming essential tools for gaining market advantage and driving value. Increasingly, businesses will need to integrate AI into their operations simply to remain competitive.

This article explores, from an English law perspective, the key considerations when a business relies heavily on AI platforms, particularly during a sale or exit: as the use of AI grows, so does the frequency with which businesses are sold with AI integrated into their operations. Whether you are building an AI platform, acquiring one, or relying on third party software, understanding the legal landscape is critical to success and risk mitigation.

AI Integration: Understanding how Large Language Models (LLMs) work

Before assessing the potential risks, it is important to outline the basic infrastructure of LLMs. Essentially, an LLM is heavily dependent on the data it is trained on. Some LLMs used by businesses are developed in-house, giving the business close control over the training process and the data used. Most, however, rely on third party offerings provided on an “as is” basis, which can reduce the business’s direct control over the LLM.

This reliance on either self-generated or third party data creates the foundation for many of the legal risks discussed here. How the LLM is trained, what information is used, and under what terms that information is accessed become critically important, both legally and commercially. Often, these issues only come to light during legal due diligence in preparation for a sale, when questions arise about how the AI platform was built and how third party licensing for LLMs is handled.

The risks of building AI platforms and embedding third party LLMs

Businesses are increasingly using third party LLMs (e.g., ChatGPT, Gemini, Claude, or Llama) in day-to-day operations. At the same time, some companies invest resources in developing private LLMs that minimise reliance on third party software. Despite the well-publicised benefits of using AI (e.g., faster development, cutting-edge capabilities, and efficient resource use), significant risks must be managed long before a business is sold. Here are some of the risks that any builder, buyer or seller of AI platforms should be aware of:

  1. IP ownership in inputs. Training an LLM may involve third party content that is copyright-protected, and using that content without permission could amount to infringement. Active litigation on this point underscores the ongoing uncertainty. However, if a business has built its own LLM and trained it on a “closed” dataset of its own material, this risk may be reduced.
  2. IP ownership in the platform. It remains unclear whether AI systems can, in the UK, be protected as patentable inventions. A prospective buyer will want to confirm what intellectual property rights exist in the AI platform and how heavily it relies on third party software.
  3. IP ownership in outputs. For content generated by an LLM, there may be questions over who owns the IP in that material, especially if it embeds or modifies third party work. This becomes even more complex when customers generate content through the platform, leaving uncertainty about ownership, and about liability if a third party alleges infringement.
  4. Contractual terms. Many third party LLM providers exclude liability for infringing outputs, shifting the risk to the business using the LLM. These “as is” terms also often disclaim liability for how data was used to train the LLM, meaning the business must shoulder the IP infringement risk.
  5. Data processing terms. LLMs can retain personal data inputs, raising compliance questions under UK GDPR and other data protection laws. Providers like OpenAI have updated their terms frequently in respect of data retention and usage rights.
  6. Wider data protection and confidentiality. Issues of data storage location, cross-border transfers, and privacy policy alignment are critical. If confidential or sensitive business information is processed by a third party LLM, there is a risk that those inputs may become available to train future models.

These factors shape a business’s risk profile and influence what additional insurance or contractual safeguards may be needed before a sale.

Recommendations on managing these risks

In an ever-evolving AI landscape, proactive preparation for commercial transactions is key. Important steps include:

1. Maintaining comprehensive records

  • Development documentation: Track all contributors and any third party software or licensing involved.
  • Data flows: Understand and document how data moves through your AI systems.
  • Policy framework: Implement AI-specific policies covering data inputs, output monitoring, and LLM usage.
  • IP position: Demonstrate clear ownership or proper licensing for all platform components.

2. Implementing robust monitoring

  • Input monitoring: Define procedures to ensure only appropriate data is introduced into the LLM (see the illustrative sketch after this list).
  • Output review: Routinely check AI-generated content for accuracy, IP risks, or inappropriate material.
  • Usage oversight: Track how your AI platform is used and by whom.
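
For businesses with in-house technical teams, the input monitoring point above can be made concrete. The sketch below is purely illustrative: it assumes all LLM-bound text passes through a single hypothetical gateway function, and both the screening patterns and the `send_to_llm` placeholder are our own assumptions rather than any provider’s actual API.

```python
import re

# Illustrative patterns for data that should not reach a third party LLM.
# Real deployments would use proper PII-detection tooling; these regexes
# stand in for the concept only.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK National Insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_input(text: str) -> list[str]:
    """Return the labels of any blocked patterns found; empty means clean."""
    return [label for label, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]

def send_to_llm(text: str) -> str:
    # Placeholder for the chosen provider's client call (hypothetical).
    raise NotImplementedError("Replace with your provider's client call")

def gateway(text: str) -> str:
    """Single choke point between staff and the third party LLM.

    Blocks input that appears to contain personal or sensitive data,
    supporting the audit trail discussed above.
    """
    reasons = screen_input(text)
    if reasons:
        # In practice this would also write to a secure audit log.
        raise ValueError(f"Input blocked: possible {', '.join(reasons)}")
    return send_to_llm(text)
```

Routing all staff and system traffic through one gateway like this also makes the usage oversight point easier to satisfy, since every request passes a single auditable point.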

3. Conducting regular impact assessments

  • DPIAs: If you know (or reasonably anticipate) that personal data is or will be used by the LLM, you should conduct regular Data Protection Impact Assessments (DPIAs), ensure robust data processing agreements are in place with any third party LLM provider (covering data retention, cross-border transfers, confidentiality, and breach procedures), and consider anonymisation or pseudonymisation measures to mitigate potential non-compliance or liability risks (a minimal pseudonymisation sketch follows this list).
  • Cybersecurity: Recognise that AI platforms may require additional security measures.
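
As a simple illustration of the pseudonymisation measures mentioned in the DPIA point above, the sketch below replaces email addresses with opaque tokens and keeps the token-to-value mapping inside the business. This is an assumption-laden sketch, not a compliance tool: real pseudonymisation should follow ICO guidance and would typically use dedicated tooling rather than a single regular expression.

```python
import re
import uuid

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def pseudonymise(text: str, vault: dict[str, str]) -> str:
    """Replace email addresses with opaque tokens.

    The token-to-value mapping is written to `vault`, which must be held
    separately and securely from the pseudonymised text, reflecting the
    UK GDPR expectation that re-identification keys are kept apart.
    """
    def replace(match: re.Match) -> str:
        token = f"<PSEUDONYM:{uuid.uuid4().hex[:8]}>"
        vault[token] = match.group(0)
        return token

    return EMAIL.sub(replace, text)

# Usage: the pseudonymised text could be sent to a third party LLM,
# while the vault never leaves the business.
vault: dict[str, str] = {}
safe = pseudonymise("Contact jane.doe@example.com about the contract.", vault)
print(safe)   # Contact <PSEUDONYM:...> about the contract.
print(vault)  # mapping retained internally for re-identification if needed
```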

4. Protecting confidential information

  • Private cloud: Use private infrastructure for greater control over sensitive data; avoid intermingling customer data in LLM training.
  • Training data restrictions: Do not expose confidential business information to public LLMs.

5. Updating contractual protections

  • Customer terms: Clearly allocate IP infringement risks, reflecting major LLM providers’ liability limitations.
  • Liability carve-outs: Consider excluding liability for AI-generated content infringing third party IP.
  • Indemnities: Use caution when granting indemnities for AI outputs, given the uncertain legal landscape.
  • Enterprise versions: Where feasible, consider enterprise-tier solutions offered by LLM providers. These versions often include more robust service terms, such as stronger data security commitments, dedicated service-level agreements (SLAs) and, in some cases, more favourable indemnification schemes or warranties. By negotiating with providers at an enterprise level, businesses can gain better assurances around liability and data protection.

6. Insurance coverage

  • Check if professional indemnity and cyber policies adequately address AI-related risks.

7. Compliance with new and future legislation

  • Stay abreast of government proposals on AI governance, including potential documentation requirements for training AI platforms. 
  • Be aware of industry-specific obligations (e.g., in finance, healthcare, or education) and of regimes such as the Online Safety Act, which can impose additional requirements on platforms accessible to children.

Conclusion

The question is not if a business should integrate AI or LLMs but how to do so while effectively managing legal, commercial, and operational risk. Building, buying, or selling an AI platform requires careful planning, robust documentation, and active risk mitigation. By implementing solid governance frameworks, maintaining detailed audit trails, and keeping pace with regulatory developments, businesses can capitalise on AI’s benefits whilst limiting their exposure.

The key takeaway: investment in proper legal and risk management pays off throughout the platform’s lifecycle and beyond.