A Thinking Business publication

When San Francisco-based OpenAI launched its chatbot, ChatGPT, at the end of 2022, heralding the latest leap forward for artificial intelligence (AI), it attracted 100 million active users within two months. As the fastest-growing consumer application ever launched, it has propelled AI up the agenda in boardrooms around the world.

The potential for AI to mimic and generate human behaviours presents a wide range of opportunities and threats across the whole spectrum of business activity, and indeed across society. But as companies rush to integrate AI software into their ways of working to create efficiencies, a note of caution is needed. AI can assist with many tasks: marking exam papers for online education providers, writing reports for time-pressed executives, and even offering legal guidance at the touch of a button. But if misused, or relied upon without proper processes and protections, it has the potential to cause serious problems.

Elizabeth Mulley, a senior associate in Trowers & Hamlins’ dispute resolution and litigation team, says: “If used properly there is no doubt that AI can be great for maximising business efficiencies, and we see plenty of examples of our clients using AI to do just that. But businesses are leaving themselves exposed if they are using AI and their cybersecurity is not up to scratch – cybercriminals are looking at AI to see how they can exploit it.”

The opportunities associated with AI come against a backdrop of examples where AI has been used to target and manipulate individual users, such as deepfakes and cloned audio. Indeed, these methods have become so sophisticated and persuasive that recent concerns around sentient AI have even led Elon Musk and others to call for a pause in the creation of giant AI digital minds, while governments grapple with implementing regulation to ensure appropriate protections. Despite this, it is widely acknowledged that the potential of AI for good should be embraced. Rishi Sunak opened London Tech Week by stating that AI presents “an opportunity for human progress that could surpass the industrial revolution in both speed and breadth” and expressed his sense of urgency and responsibility to seize it.

Benefits aside, before jumping on the AI bandwagon, companies need to think carefully about how they will avoid running into disputes or compliance problems. Businesses need to proceed with caution on multiple fronts:

  1. IP: The data sources used to train generative AI have led many to raise copyright concerns, and new exposures arise when AI tools such as ChatGPT are incorporated into the creation of work product. For example, companies should take care not to breach the intellectual property rights of any data services they are licensed to use: incorporating licensed content into datasets used to develop an internal generative AI product may well constitute a breach of contract.
  2. Risk analysis: Businesses should conduct a comprehensive risk analysis and consider how those risks can be mitigated, and whether the appropriate people within the organisation understand those additional exposures.
    “Often companies have tech specialists that understand all the algorithms,” says Mulley, “but they might not be talking to the right people in risk and compliance, or governance, or HR, to be sure that everyone understands what the issues are.”
  3. Data Protection: One of the biggest areas of compliance risk associated with AI relates to data protection, something businesses don’t necessarily think about before embracing the tech. After all, AI technology is underpinned by data: training systems typically involves inputting huge amounts of personal data to inform their ability to answer questions or develop processes. Charlotte Clayson, partner in the dispute resolution team at Trowers, says: “The personal data that is collected and used to make assessments or decisions about people is now a real focus for the Information Commissioner’s Office as the regulator of personal data in the UK. They have been working with other regulators on a coordinated approach to AI and issuing fines to companies that have failed to comply with the rules.”

The ICO has just published new guidance on AI and data protection and has been working with the Alan Turing Institute to develop practical advice to help companies navigate the issues. 

It also recently fined facial recognition database company Clearview AI more than £7.5 million – one of its biggest ever penalties – for using images of people collected from the web to build its global online database. The company breached UK data protection laws in a number of ways, including failing to use people’s information in a fair and transparent way, failing to have a lawful reason for collecting their data, and failing to have a process in place to stop the data being retained indefinitely. Clearview received even larger fines, of €20m apiece, from data protection authorities elsewhere in Europe.

“The fact they had so many fines gives a stark warning about using AI,” says Clayson. “The tech can be great but before you use it you have to think about how you are going to make sure you comply with the law. The new guidance from the ICO should be really useful in helping companies through that.”

Anna Browne, Head of Innovation and Legal Technology at Trowers & Hamlins, says: “Companies should have a data strategy and effective policies on data governance, to ensure that they have compliant processes in place and can leverage their data to make smarter decisions.”

Richard Elson, Director for Information Services at Trowers & Hamlins, says: “Organisations need to take both strategic and tactical steps to ensure their approach is coordinated across the business. To this end, an authoritative multidisciplinary group reporting to the strategic board should be mapping these opportunities and threats, and ensuring there is focus and attention on prioritising actions to actively manage both.”

Part of the challenge for business leaders is that regulators themselves are at times struggling to keep pace with the speed at which technology is developing. A recent case involving cryptocurrency, another fast-moving technology that the law is racing to catch up with, highlights the issue. The claimant, Tulip Trading, was the owner of some very high-value bitcoin, the keys to which were apparently stolen in a hack. Unable to access the assets and move them to safety, Tulip argued that the developers of the relevant software owed fiduciary duties to customers using their networks that extended to safeguarding their assets.

In the first case in the English courts to consider the roles and duties of cryptoasset software developers, the Court of Appeal held that it is arguable that software developers do indeed owe fiduciary duties to owners of cryptoassets. Having decided there was a serious issue to be tried, the court allowed the claim to proceed, and the case is expected to go to full trial in 2024.

Clayson says: “The Tulip case is just part of a wider issue of the law and regulation working to keep pace with developments in technology. On the regulation side, there are authorities that already exist to regulate different sectors, but they are working to make sure they can apply themselves appropriately to keep up with developments in the tech world. In the UK, AI is a key priority for government, so they are making a lot of noise about how they are going to regulate it and make it a safe space.”

In Europe, the Artificial Intelligence Act proposed by the European Union is the first attempt by a major regulator anywhere to legislate comprehensively for AI, meaning it could become a global standard for determining the extent to which AI has a positive or negative impact on our lives.

For UK companies, there is a lot to get to grips with. Helen Briant, another partner in the dispute resolution team at Trowers, says: “When we first started talking to clients about cybersecurity, one of the key things we would focus on was training staff to recognise phishing emails, for example. But with the pace at which AI has developed, phishing emails are now so much more realistic from a cybersecurity perspective, with the ability to replicate people’s voices and ways of speaking, which makes it even harder to recognise imposters.”

She adds: “AI is great but it is creating a completely new and much more sophisticated set of issues, making it even harder to put in the checks and balances that organisations need to protect themselves. Companies must weigh up the benefits of automation and cost savings versus the unknown risk parameters, which are sometimes quite difficult to articulate.”

To protect themselves against AI-enabled cyber attacks, organisations should:

  • Invest in AI-powered cybersecurity tools and solutions that can help detect, prevent, and respond to cyber attacks.
  • Ensure that AI systems are secure and regularly tested for vulnerabilities.
  • Train employees on cybersecurity best practices, including how to identify and respond to cyber threats.
  • Develop an incident response plan that outlines the steps to be taken in the event of a cyber attack.
  • Regularly review and update cybersecurity policies and procedures to keep them current with the latest threats and best practices.