How can we help you?

As the use and deployment of AI develops at pace, the Department for Science, Innovation and Technology (DSIT) continues to keep the risks posed by AI under review.

One risk which is high on the DSIT agenda, and affects businesses across all sectors, is that of cyber resilience. With cyber attacks making headline news on a frighteningly regular basis, those developing and relying on new technologies must ensure a robust approach to cyber resilience to minimise the vulnerabilities through which cyber attackers will strike.

Following the National Cyber Security Centre's (NCSC) Guidelines for secure AI development, published in November 2023, and the recent call for views on a Cyber Code of Practice, on 15 May 2024 DSIT sought views on its proposed voluntary Code of Practice for AI Cyber Security.

Why do we need a Code of Practice?

The code of practice has been proposed to help protect businesses, individuals, and charities from the increasing threat of cyber breaches and attacks, including those perpetrated by or with the assistance of AI.

DSIT reported that in 2023-2024, 50% of businesses and 32% of charities experienced cyber breaches or attacks, with phishing remaining the most common type. The cyber security industry itself has grown 13% within the last year, to be worth around £12bn (on a par with sectors like the automobile industry).

DSIT's ambition is for the new code to be part of a future global standard. DSIT plans to use the information received from the Call for Views to inform its policy in this space and intends to submit the updated code to the European Telecommunications Standards Institute (ETSI) in September 2024 (albeit this may be subject to change now the UK general election has been announced).

What has been proposed?

The Code of Practice is intended to be voluntary and focusses on 12 core principles targeting AI developers, systems operators and data controllers alike. The 12 principles are:

  1. Principle 1: Raise staff awareness of threats and risks. This aims to ensure that organisations developing, operating and using AI "establish and maintain a continuous security awareness program to educate their staff about the evolving threat landscape specific to AI systems".
  2. Principle 2: Design your system for security as well as functionality and performance. System operators should, during decision-making processes, document the business requirements they are trying to address when designing an AI system.
  3. Principle 3: Model the threats to your system. Developers and operators of AI should model AI system threats during their risk management process.
  4. Principle 4: Ensure decisions on user interactions are informed by AI-specific risks. Developers and operators of AI should design and provide safeguards for AI outputs by non-AI means e.g. humans.
  5. Principle 5: Identify, track and protect your assets. All organisations developing or utilising AI should know where their assets are physically based and assess and manage the risks associated with those locations.
  6. Principle 6: Secure your infrastructure. Developers and operators should manage risks for each AI system environment and Application Programming Interfaces (mechanisms allowing two software components to speak to each other).
  7. Principle 7: Secure your supply chain. All organisations using or deploying AI should ensure their supply chain and suppliers adhere to the same security requirements that they do, including risk management policies.
  8. Principle 8: Document your data, models and prompts. Developers should provide and maintain a clear audit trail of the designs and maintenance practices.
  9. Principle 9: Conduct appropriate testing and evaluation. Developers should not release applications, systems, or models that have not been rigorously tested.
  10. Principle 10: Communication and processes associated with end-users. Developers and operators should be clear about which parts of risk management and security end-users are responsible for, communicate this to end-users and support them, including being transparent about where and how their data is stored.
  11. Principle 11: Maintain regular security updates for AI models and systems. Developers and operators should ensure when developing their project requirements that they conduct regular security audits and updates.
  12. Principle 12: Monitor your system’s behaviour. Developers and operators should log all inputs and outputs to/from the AI system to ensure auditing, compliance, investigation and remediation can be carried out where required.
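As an illustration only of the kind of audit logging Principle 12 describes, the sketch below wraps a model call so that every input and output is recorded as a structured log entry. All names are hypothetical and the code is not part of DSIT's proposal; a real deployment would call the organisation's actual AI system and write to a secure, tamper-evident log store.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical sketch of Principle 12: log every input to and output from
# an AI system so that auditing, compliance checks, investigation and
# remediation can reconstruct what happened.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def audited_call(model_fn, prompt: str) -> str:
    """Wrap a model call so the prompt and response are both logged."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": prompt,
    }
    response = model_fn(prompt)
    record["output"] = response
    logger.info(json.dumps(record))  # one structured audit record per call
    return response

# Usage with a stand-in "model" (here, simple upper-casing):
result = audited_call(str.upper, "hello")
```

The point of the structured (JSON) record is that logs can later be queried and correlated during an investigation, rather than grepped as free text.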

How does the code fit into wider changes?

While the code is voluntary, it is clearly intended to fit into the wider regulatory and guidelines-based framework developing in the UK. DSIT's National AI Strategy was first published in 2021, and further built on with its White Paper "AI Regulation: a pro-innovation approach" published on 29 March 2023. UK regulators (including Ofcom, the ICO, Ofgem and the FCA, among others) were required to publish their strategic approach to AI by April 2024, and DSIT published outline guidance on 1 May 2024.

Regulatory developments, including the Product Security and Telecommunications Infrastructure Act 2022 (PSTI Act), are intended to place clear responsibilities on relevant persons. Part 1 of the PSTI Act, which came into force on 29 April 2024, places legal responsibilities on those relevant persons (including manufacturers) to ensure internet-connectable products and embedded software are secure by design.

The code also sits alongside other voluntary codes of practice, including the Codes of Practice for App Store Operators and App Developers, the Cyber Governance code of practice, and a proposed code of practice for organisations developing and selling software B2B.

Embedding good cyber security practice into AI 

The code, and the Call for Views, are part of DSIT's evolving strategy towards supporting the development of AI products in the UK whilst ensuring appropriate safeguards are in place. On 15 May, at the same time as the Call for Views was launched, DSIT published a series of research reports on the cyber security of AI, including surveys and literature reviews. The reports, which can be found here, are part of DSIT's ongoing National Cyber Strategy.

As part of the strategy, DSIT has recognised that, in an increasingly digital-reliant world, a global approach is required. The UK Government is trying to position itself at the forefront of this space, hosting the first AI Safety Summit in November 2023 with attendees from countries including the US and China. At present, DSIT is not minded to legislate specifically to regulate AI, unlike counterparts in the EU. Rather, it is keen to keep to a principles-based approach. The code forms what seems to be an essential part of this strategy.

How to get involved and timescales

Interested parties can contribute to the call for views by responding to DSIT's online survey from 15 May to 9 August 2024, found here. A full list of the survey's questions is published on Gov.uk's website at Annex D of the Call for Views (Call for views on the Cyber Security of AI - GOV.UK (www.gov.uk)). 

Please do not hesitate to contact us if you have any questions or further enquiries on how to best protect your business and people. Our dedicated cyber and data protection team are available to discuss your cyber protection and live issues. Please contact Charlotte Clayson or Helen Briant.