
What is “physical AI”? 

Artificial intelligence is no longer confined to screens and servers. A new generation of AI systems is emerging - one that perceives, decides and acts directly upon the physical world. This shift from generative AI to what is increasingly termed "physical AI" marks a fundamental transformation not only in technology, but in law, ethics and governance.

Physical AI refers to artificial intelligence systems that do not merely generate text, images, or analysis, but act in and upon the physical world through robotics, embodied systems and connected devices. Unlike traditional software, these systems act independently: they sense their surroundings, make their own decisions and carry out physical actions with little human supervision.

This technology brings together advanced foundation models (including vision-language-action models), real-time sensor fusion (using lidar, cameras and tactile sensors), reinforcement learning and imitation learning techniques, high-performance robotics hardware and actuation, and cloud- or edge-based computing for fast, context-aware decision-making.

Examples are already in advanced development: autonomous humanoid robots from Tesla and Figure AI, Boston Dynamics' logistics systems, AI-controlled manufacturing robotics integrated with digital twins, surgical robotics with AI-enabled decision support and autonomous vehicles from Waymo. The defining characteristic is that physical AI systems perceive, decide and act, marking a material shift from generative AI to agentic and kinetic AI.

How physical AI is developing

Convergence of three technology streams

Physical AI is advancing because three previously distinct technology streams are converging with remarkable speed. Large multimodal models now integrate vision, language and action in ways that enable contextual reasoning to be embedded directly into robotics systems.

Simulation and synthetic training environments have matured, allowing AI systems to be trained at scale in digital twin environments through reinforcement learning, with transfer learning techniques enabling capabilities developed in simulation to be carried across into real-world deployment. Hardware costs are dropping fast - sensors, actuators and other components that were expensive just a few years ago are now cheaper and widely available. Batteries last longer and edge computing allows robots to make complex decisions instantly without relying on the cloud.

Previously, robotics depended on rigid, rule-based systems. Today, embodied AI can understand spoken instructions, adapt to new environments, generalise across different tasks and learn from only a few examples. Progress is accelerating and the development curve is far from linear.

What to expect in the next five years (2026–2031)

Between 2026 and 2031, physical AI will move from niche deployment to widespread integration across multiple sectors. Warehousing and manufacturing will see significant humanoid and mobile robotics deployment, with human-robot collaboration (“cobotics”) becoming normalised. UK advanced manufacturing hubs and port infrastructure are likely to be early adopters, with uptake driven by labour shortages and efficiency demands.

Healthcare assistive robotics will expand beyond pilot programmes, construction automation will increase, and energy and infrastructure inspection robotics will scale. High-end domestic robotics will transition from aspirational products to practical tools, whilst elder-care robotics will emerge as a response to ageing societies. Autonomous defence systems will expand globally, placing the UK under pressure to address regulatory and ethical questions around autonomous lethal capability and compliance with international humanitarian law. The insurance sector will begin repricing physical risk exposure, developing robotic liability products, and demanding clearer accountability frameworks. 

Legal issues associated with physical AI

The UK product liability regime, built around the Consumer Protection Act 1987, is premised on defective products and is likely poorly suited to physical AI systems that blur the line between a product and a service. When harm occurs, it can be difficult to determine whether the fault lies in the hardware, the AI model, a post-sale software update, the training data or later fine-tuning of the system. The EU's product liability reforms will extend strict liability to cover software and AI, and the UK will need to decide whether to follow this approach or take a different path.

Negligence and the responsibility gap

Negligence law faces similar challenges. The traditional test of 'reasonable foreseeability' becomes significantly more complex when physical AI systems continue learning and evolving post-deployment, potentially behaving in ways that were not anticipated at launch. Where harm occurs, three key questions arise: first, was the decision to deploy the system negligent, given what was known or ought to have been known about its capabilities and limitations at the time? Second, was ongoing monitoring of the system's behaviour inadequate? Third, was human oversight reasonably required and, if so, was it in place at the material time? Traditional legal models assume that harmful actions can be traced back to a specific human decision-maker. Physical AI disrupts this assumption, creating a 'responsibility gap' where harm occurs but no clear actor can be held accountable.

To address this, ethical and legal frameworks must establish clear accountability chains before deployment, require detailed logging of all AI-driven decisions that lead to physical actions, and create sector-specific protocols for incident investigation. Deployers should also have a duty to explain how accountability is structured. In high-risk settings, the law should presume that a natural or legal person ultimately carries responsibility.
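By way of illustration only, the sketch below shows one way such a decision log might be structured. The DecisionRecord fields and the log_decision helper are hypothetical and are not drawn from any existing standard or framework; they simply illustrate the kind of information an accountability chain would need to capture.

```python
# Illustrative sketch only: a minimal audit record for an AI-driven decision
# that results in a physical action. Field names are hypothetical.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    timestamp: str            # when the decision was taken (UTC, ISO 8601)
    system_id: str            # which deployed system acted
    model_version: str        # exact model/software version in use
    sensor_digest: str        # hash of the sensor inputs relied upon
    action: str               # the physical action carried out
    human_override_available: bool
    accountable_party: str    # the natural or legal person in the accountability chain

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record to an append-only log file for later incident investigation."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: record a warehouse robot's decision to move a pallet.
sensor_snapshot = b"...raw lidar and camera frames..."
log_decision(DecisionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    system_id="warehouse-amr-17",
    model_version="vla-2.3.1",
    sensor_digest=hashlib.sha256(sensor_snapshot).hexdigest(),
    action="move_pallet(bay_4 -> bay_9)",
    human_override_available=True,
    accountable_party="Deployer Ltd (site operations director)",
))
```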

The UK AI regulatory framework

Under the UK’s principles‑based AI framework (as articulated by the Department for Science, Innovation and Technology), regulators apply five cross-sectoral principles:

  • safety, security and robustness;
  • appropriate transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

Physical AI puts these principles under pressure: such systems make split-second decisions, often in ways that cannot easily be explained or contested, and the resulting harm can be immediate and irreversible.

Human judgement should remain central to the deployment of physical AI: decisions should stay with people unless there is a clear and justifiable reason to delegate them to an autonomous system. Transparency and explainability are critical - individuals affected by the actions of a physical AI system must be able to understand why it acted the way it did, what data shaped its decisions, and how to challenge the outcome.

Sector regulators (including the Health and Safety Executive (HSE), the Medicines and Healthcare products Regulatory Agency (MHRA), the Civil Aviation Authority (CAA) and the Financial Conduct Authority (FCA)) will increasingly issue their own AI guidance, raising the risk of inconsistent rules and regulatory gaps.

Health and safety law

The Health and Safety at Work etc. Act 1974 imposes duties on employers to ensure safe systems of work, but autonomous technologies challenge the concepts on which those duties rest. Where robotic systems operate, key questions arise: who counts as the 'operator' of an AI system? Does the AI system constitute 'work equipment' under the Provision and Use of Work Equipment Regulations 1998 (PUWER)? And what constitutes adequate training when the system's behaviour evolves after deployment?

Unlike traditional software, physical AI cannot simply be patched after a fatal error: the harm may already have occurred. Technical safety measures must include reliable fail-safes such as emergency-stop functions, human override controls and safe-shutdown modes that activate if systems begin to fail. Physical AI must be stress-tested against edge cases, adversarial conditions and unpredictable environments, with ongoing validation to ensure it behaves as intended. Cybersecurity must also be built in from the ground up, since these systems can be targeted through hacking, spoofed sensor inputs or adversarial interference, with the potential for immediate physical harm.
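As a purely illustrative sketch, the snippet below shows one common pattern for such a fail-safe: a watchdog that triggers a safe shutdown if the AI controller stops sending heartbeats or an operator presses the emergency stop. The FailSafeSupervisor class and its methods are hypothetical and are not drawn from any particular robotics framework.

```python
# Illustrative sketch of a watchdog-style fail-safe: if the controller's
# heartbeat stops or an operator presses the emergency stop, the system
# enters a safe-shutdown state. Names are hypothetical.
import time

HEARTBEAT_TIMEOUT_S = 0.5   # maximum tolerated silence from the AI controller

class FailSafeSupervisor:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.emergency_stop = False

    def heartbeat(self) -> None:
        """Called by the AI controller on every healthy control cycle."""
        self.last_heartbeat = time.monotonic()

    def press_emergency_stop(self) -> None:
        """Called when a human operator triggers the e-stop."""
        self.emergency_stop = True

    def may_continue(self) -> bool:
        """Return True only if the system is allowed to keep operating."""
        stale = (time.monotonic() - self.last_heartbeat) > HEARTBEAT_TIMEOUT_S
        return not (stale or self.emergency_stop)

def safe_shutdown() -> None:
    # In a real system this would command zero velocity, engage brakes
    # and alert a human operator; here it simply reports the transition.
    print("Entering safe-shutdown mode")

supervisor = FailSafeSupervisor()
supervisor.heartbeat()
time.sleep(0.6)              # simulate the controller falling silent
if not supervisor.may_continue():
    safe_shutdown()
```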

High-risk systems should undergo independent third-party audits before deployment and be supported by mandatory safety cases demonstrating that risks have been identified and mitigated. Incident-reporting frameworks should be established to promote learning across the sector, and clear guidelines are needed to determine when human oversight is required.

There will be a growing expectation that employers conduct algorithmic risk assessments, prepare AI-specific safety cases and implement continuous monitoring frameworks to demonstrate compliance with their duties.

Data protection

Data protection concerns under the UK GDPR are heightened. Physical AI depends on computer vision, continuous environmental scanning and behavioural inference, all of which raise issues around lawful bases for real-time data capture, biometric processing, proportionate workplace monitoring and transparency duties. In public spaces, these systems may effectively operate as mass data-collection infrastructure.

Bias in physical AI leads not just to unfair outputs, but to unfair actions with real-world consequences. Faulty facial recognition, for example, can result in wrongful detentions, and healthcare robots trained on non-representative data may deliver worse care to marginalised groups. Physical AI systems should therefore be tested for equality impacts before deployment and audited regularly for bias across different demographic groups, with input from the communities they are designed to serve.
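A minimal sketch of what such an audit might involve is set out below: comparing a system's error rate across demographic groups and flagging any group that diverges from the best-performing group by more than a chosen tolerance. The groups, results and threshold are invented purely for illustration and do not represent any real evaluation methodology.

```python
# Illustrative sketch of a simple per-group bias audit: compare error rates
# across demographic groups and flag any group whose rate exceeds the
# best-performing group's by more than a chosen tolerance. Data is invented.
from collections import defaultdict

# (group, prediction_correct) pairs from a hypothetical evaluation set
results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]
TOLERANCE = 0.10  # maximum acceptable gap in error rate between groups

counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, correct in results:
    counts[group][1] += 1
    if not correct:
        counts[group][0] += 1

rates = {g: errs / total for g, (errs, total) in counts.items()}
best = min(rates.values())
for group, rate in sorted(rates.items()):
    flag = "REVIEW" if rate - best > TOLERANCE else "ok"
    print(f"{group}: error rate {rate:.2f} [{flag}]")
```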

Autonomy and criminal law

Autonomy raises pressing questions for criminal law. Where a physical AI system causes harm, fundamental issues of attribution arise: can intent, a core element of most criminal offences, be meaningfully attributed to an autonomous system? Could corporate criminal liability attach to the deployer, manufacturer or operator of the system? And does the doctrine of gross negligence manslaughter apply where a failure to adequately supervise or maintain an autonomous system results in death? English criminal law attributes responsibility to natural or legal persons, but fully autonomous systems complicate these attribution models by introducing decision-making processes that may not be traceable to any individual human act or omission. While the concept of "electronic personhood" has been raised in academic and policy discourse, it remains legally and politically unlikely in the UK, meaning that existing legal persons, whether individuals or corporations, will continue to bear criminal responsibility for the actions of the systems they develop, deploy and operate.

Contractual allocation of risk

Commercial use of physical AI will increasingly depend on detailed contracts that allocate risk, for example, indemnities for system failures and limits on liability for autonomous actions, warranty exclusions for adaptive learning drift, and obligations for model updates and safety patches. Beyond traditional contract structures, the market can also expect growth in AI-specific insurance riders, risk-sharing consortium structures between developers and deployers, and performance-based robotics contracts where liability follows operational outcomes rather than product specifications alone.

Public law and human rights

Where physical AI is deployed in the public sector, administrative law principles apply with full force. Decisions made or informed by autonomous systems must be lawful, rational and procedurally fair, and public bodies must ensure that delegation of decision-making functions to AI does not unlawfully fetter the discretion of the relevant decision-maker. In contexts such as policing, welfare distribution and public service allocation, physical AI introduces significant concerns under Article 8 of the European Convention on Human Rights, particularly around the right to privacy and the proportionality of surveillance, data collection and automated profiling. Public authorities deploying physical AI must therefore demonstrate that such systems are necessary, proportionate and subject to adequate safeguards, including meaningful human oversight and accessible mechanisms for individuals to challenge decisions that affect them.

Strategic outlook

Over the next five years, physical AI will move beyond industrial settings and into public infrastructure, healthcare, defence and domestic environments. The significance of this shift cannot be overstated: the law has historically regulated information systems, but is now called upon to regulate autonomous actors operating in the real world.

For UK policymakers and practitioners, three imperatives emerge. First, product liability must be modernised to account for adaptive systems that evolve post-deployment. Second, AI safety cases must be embedded within sector-specific regulation, ensuring that risks are identified, assessed and mitigated before systems are permitted to operate. Third, accountability chains must be clarified before large-scale deployment, so that where harm occurs, responsibility can be attributed swiftly and fairly.

Physical AI represents not merely a technological evolution, but a jurisprudential one - it will test the UK's ability to embed ethics, rights and accountability into the fabric of innovation. The next five years will be decisive: the UK can either lead by establishing credible, innovation-friendly governance that supports progress while protecting rights, fairness and sustainability, or it can fall into a reactive posture, responding to harm events only after they occur.