How can we help you?

A Thinking Business publication

If you’re not already using artificial intelligence tools within your business, the chances are you are at least thinking about doing so. Ever since ChatGPT swept into our everyday conversation a year ago, there has been an arms race for the top spot in the AI community: Google's Bard, Meta's LLaMA and Bing Chat are all competing with OpenAI at a rapid pace of innovation. The number of generative AI use cases has proliferated, and it is likely that most of your employees have dabbled with testing one or two.

AI has enormous potential across almost every industry. Human resource professionals are increasingly discussing the ability of AI tools to speed up candidate selection processes and identify potential high-performers. But beyond that, there are opportunities springing up that could enhance every company in every sector: restaurateurs are using AI to advise customers on the wines they will like, doctors are using it to identify when non-verbal patients are in pain, and elderly care homes can employ it to match residents with common interests so that they can enjoy social interaction.

Microsoft’s new AI assistant Copilot is now generally available to business customers, promising another wave of AI adoption. “Not every company will sign up to Copilot, but many will,” says Anna Browne, head of innovation and legal technology at Trowers & Hamlins. “With many suppliers integrating generative AI into their existing products, we can expect a big shift in the number of businesses embracing AI to help drive productivity and growth.”

The challenge comes in thinking through the many legal and commercial issues that can arise as a result of AI adoption: careful attention needs to be paid to ensuring that employees only use AI in ways that are sanctioned, and that no one inadvertently brings risk or disadvantage to the company’s door.

“Boards should start by thinking about the law on AI and the legal boundaries,” says Victoria Robertson, partner and commercial and data law specialist at Trowers. “The problem is that right now we don’t have any specific laws on AI in the UK. That was a big focus of the AI Safety Summit in the UK in November, which was the first global event of its kind looking at how best to manage the risks from advances in AI.”

At the same time, the UK government launched the world’s first AI Safety Institute to examine, evaluate and test new types of AI, and it has also significantly expanded its AI Taskforce by recruiting a growing team of researchers. But regulation has so far been slow to arrive and inconsistent around the world: the European Union’s AI Act is the most advanced attempt at a rulebook but we do not know when it will come into force, and the US Blueprint for an AI Bill of Rights is nothing more than non-binding guidance for now.

For now, businesses only really have the General Data Protection Regulation (and its UK equivalent, UK GDPR) to govern how personal data can be used in AI algorithms. 

Browne says: “Everyone wants to understand the parameters of what they can and can’t do, but because there are no specific AI laws to comply with it is tricky to navigate at the moment. Without clear rules, the need to set out clear guardrails for use within your organisation and to put in place strong internal governance to plug the regulatory gap is critical.”

So what practical steps can you take to protect your business from potential AI mis-steps? 

First, before you buy an AI tool, do your due diligence on suppliers. It is vital to dig into the detail of how the AI has been trained, whether the data being used is proprietary or the tool is open source, and whether an ethical approach has been taken to the harvesting of that data.

Any AI tool should be implemented in line with the company’s data strategy. Protecting your data and that of your customers is essential. UK GDPR already requires data protection by design and by default, and the same approach should be taken when using AI.

Another consideration is the extent to which any professional indemnity insurance covers the use of AI; policies should be reviewed to confirm the position.

Your corporate AI strategy will need to be agile because both the technology and the regulatory environment are evolving fast. We would therefore recommend that clients set up AI governance groups to keep things under review and take responsibility for AI governance across the business. 

Members of that governance group should be your innovators and knowledge owners, likely including people who work with AI in their day-to-day roles, such as the IT director, knowledge management director and general counsel. People from different workstreams will also bring their own insights, so HR should be involved too, adding input on the risk of discrimination that can be embedded in AI tools.

It will be important to bring voices together at that table who will identify opportunities as well as risks, and who will not be too risk averse. The adoption of AI should add resource and capability to the business, rather than simply being about cutting headcount or reducing cost.

Another practical point is to focus on employee engagement around AI, because open dialogue is really important here. Some members of staff will be more comfortable with new tools than others, and some will want to use them while others won’t. Leaders should ensure there is proper, balanced oversight and that an environment is created in which people can ask questions and explore opportunities collaboratively. We are already seeing demands for more accountability in AI adoption, for example with public authorities asking for assurance that AI has not been used in procurement documents.

Robertson says: “We can already see that employee attitudes to AI are changing: people are less frightened of losing their jobs and more focused on ways to make ChatGPT and similar tools work for them. For businesses, it is important to set the boundaries of what is considered acceptable use within the workforce, and to communicate those boundaries effectively whilst not stifling innovation.”

The vast majority of UK businesses are yet to put effective AI governance in place, and there are many questions that organisations looking to future-proof their operations need to address. Trowers recently launched a new AI Strategy Toolkit to help leaders identify issues relevant to their businesses and create workplace policy guidance tailored to those needs. Please get in touch if you would like more information.