How can we help you?

You've probably heard about ChatGPT and similar AI tools that answer questions. 

But something new is happening that represents a fundamental shift, raising significant legal questions. OpenClaw represents the latest wave of "agentic AI"; systems that don't just respond to queries, but actually do things on your behalf. They connect to your email, book meetings, write messages, monitor feeds, and make decisions with minimal human oversight. As these systems move from tech curiosity to mainstream business tool, it's worth understanding what they are, what they can do, and where the legal risks lie.

So what exactly is OpenClaw? 

At its core, it's an autonomous agent (a piece of software) that acts on your behalf rather than just answering questions. It connects to your applications and services, reads and writes data, and executes multi-step tasks without constant human intervention. The platform has evolved into an ecosystem with a marketplace of extensions called "skills" that expand what the agent can do. This modular design allows rapid customisation, but as we'll see, it also introduces serious supply-chain security risks.

The platform's development has been characterised by remarkable speed and repeated rebranding from Clawdbot to Moltbot to OpenClaw in quick succession. This naming fluidity, combined with viral adoption, has created fertile ground for impersonation and fraud, complicating both user verification and regulatory oversight.

Three factors explain why OpenClaw has become a phenomenon:

  • First, there's Moltbook, an extraordinary agent-only social forum, where autonomous systems post, comment, and interact while humans simply watch. This spectacle of machine-to-machine conversation has captured public imagination, simultaneously fascinating and unsettling observers;
  • Second, the barrier to entry is remarkably low: users can rapidly deploy agents and connect them to everyday services through intuitive simplified interfaces, often faster than they can properly secure them. This ease of use has driven adoption but enabled hasty implementations with inadequate security controls; and
  • Third, opinion remains sharply divided: proponents view OpenClaw as the productive future, while security practitioners describe the current implementation as presenting avoidable but serious risks.

Real-world applications

Despite legitimate security concerns, OpenClaw is being deployed for genuine business purposes across sectors. Users employ agents as digital executive assistants, delegating routine tasks like inbox management, message composition, and calendar coordination. The appeal lies in offloading repetitive cognitive work to an always available system. Small organisations use agents to monitor workflows, triage incoming requests, generate draft communications, and coordinate processes across disparate tools. Technical users leverage agents to generate scripts, wire integrations, and scaffold applications rapidly; particularly powerful in environments where agents have execution permissions or file system access.

Typical use cases include agents handling daily administration through messaging apps, or monitoring feeds to post alerts and replies automatically. This latter pattern, automated responses to external data sources, represents both a valuable workflow and a major attack vector when the monitored source can't be trusted.

What makes this wave feel genuinely different from previous automation? 

The answer lies in several novel features. OpenClaw has effectively spawned a parallel digital ecosystem, with Moltbook functioning as a public square for autonomous agents where they publish content and engage in interactions while humans largely observe. This creates an unprecedented environment where machine-generated activity comprises the primary content layer.

Agents running continuously, adopting personas, and interacting with one another can create an uncanny impression of independent thought. It's worth emphasising, this isn't consciousness. It's emergent behaviour arising from language models, automation frameworks, and, crucially, human projection. The appearance of "sentient thinking" is largely theatrical, though psychologically compelling. There's even a parallel development where services position themselves as marketplaces where AI agents can hire human workers to perform physical tasks beyond their digital capabilities. Whether this represents genuine innovation or clever marketing, it signals an important trajectory; agent ecosystems are attempting to bridge digital autonomy into the physical world.

Principal legal risks

The fundamental security proposition is straightforward: an agent empowered to take autonomous action is inherently more dangerous than an assistant limited to conversation. This autonomy creates legal uncertainty absent in traditional automated contexts. Security researchers have documented significant volumes of malicious "skills" uploaded to OpenClaw's registry, masquerading as legitimate tools whilst delivering credential theft and malware. Installing a skill more closely resembles executing unknown software than adding a benign plugin. The lack of rigorous vetting, combined with rapid user adoption, creates ideal conditions for supply-chain compromise.

When agents connect to email, web content, or social feeds, attacker controlled text can become actionable instructions; sometimes invisibly embedded. An email containing hidden directives can cause an agent to exfiltrate data, modify records, or execute unauthorised transactions. Agents typically rely on tokens, API keys, and broad application permissions. Compromise of an agent often means compromise of every connected service (email, cloud storage, financial services, and internal business systems). The blast radius is extensive.

Rapid rebranding combined with viral growth creates optimal conditions for impersonation and typosquatting (attackers register domain names similar to popular, legitimate websites). Users seeking the legitimate platform may inadvertently download malicious variants. The prevailing view amongst security professionals isn't that agentic AI should be completely avoided, but rather deployed cautiously; don't run agentic AI on machines or accounts containing sensitive data unless you can sandbox effectively and maintain rigorous audit capabilities.

UK legislative framework

From a UK regulatory perspective, several frameworks engage with OpenClaw-style deployments. The data protection framework presents immediate friction points. Where agents process personal data (emails, contacts, calendars, documents) controllers must establish a lawful basis, provide clear privacy notices, and comply with data protection principles. The opacity of some agent operations complicates transparency obligations. If agents make decisions producing legal or similarly significant effects (employment screening, access control, creditworthiness), ICO guidance on automated decision-making becomes directly applicable. Organisations must demonstrate human oversight, explainability, and fairness. Deploying agents that leak data through insecure skills or succumb to prompt injection represents a governance failure; data breaches trigger notification obligations and potential enforcement action.

If an agent based service hosts user-generated content or facilitates user interaction, as Moltbook arguably does, platform duties under the Online Safety Act may apply, requiring illegal content risk assessments and systems to reduce harm. For organisations within scope of the Network and Information Systems Regulations 2018, deploying agentic AI without adequate controls could constitute a failure to manage security risks. As agentic AI matures into consumer products, compliance with security standards under regimes like the Product Security and Telecommunications Infrastructure Act 2022 will likely become mandatory.

Even where sector specific regulation doesn't neatly apply, established UK legal frameworks remain operative. Agent compromise leading to unauthorised access may engage the Computer Misuse Act 1990. Liability may attach not only to attackers but potentially to deployers whose negligence facilitated the breach. Lawyers, financial services professionals, and healthcare providers owe duties of confidentiality and professional competence. Deploying agents that inadequately protect client information or make unsupervised professional judgments risks regulatory sanction and civil claims.

Contract law 

The contract law implications are particularly interesting. Under current English law, AI systems lack legal personality and cannot themselves be parties to contracts. Contracts formed through AI agents are attributed to the people or companies who deploy them, based on established principles of agency law. But this framework becomes strained when AI systems operate with such autonomy that attributing their actions to human principals becomes conceptually difficult. The unpredictability of machine learning, which may produce outcomes their designers neither intended nor foresaw, challenges the traditional requirement that agents act within the scope of their authority.

Contract formation requires offer, acceptance, consideration, and an intention to create legal relations. When AI agents negotiate and conclude agreements, determining whether these elements are satisfied becomes complex. English courts have historically adopted an objective approach; asking whether a reasonable person would understand the parties to intend legal consequences. This objective test may accommodate AI-generated offers and acceptances, provided the system operates within parameters established by parties who do intend legal relations. Nevertheless, difficulties arise with wholly autonomous AI transactions where no human reviews the terms before conclusion.

When AI agents breach contractual obligations, liability falls upon the principals who deployed them. However, determining appropriate remedies becomes complicated when breaches result from autonomous AI decisions. The traditional measure of damages, compensating for losses reasonably foreseeable at contracting, may require reconsideration when AI systems create unforeseen consequences. The 'black box' problem raises particular difficulties for establishing causation and foreseeability.

Vendors shipping insecure agent platforms with inadequate warnings, and organisations deploying them without proper risk assessment, face potential negligence claims. Contractual liability for system failures, data breaches, or unauthorised actions performed by agents will be tested in the courts. As the Law Commission has recognised, English law's flexibility provides a foundation for addressing these challenges, but gaps remain; particularly concerning liability for algorithmic errors and the attribution of knowledge and intention.

Conclusion

OpenClaw and similar agentic AI systems represent a genuine evolution, offering productivity gains and novel capabilities. But the current landscape is characterised by immature security practices, supply-chain vulnerabilities, and legal uncertainty. Machines making actual decisions by themselves without human involvement take English law into new territory.

For businesses, the message is clear: agentic AI isn't inherently unlawful, but its deployment requires rigorous governance. Organisations must conduct data protection impact assessments, implement robust access controls, establish clear accountability frameworks, and maintain meaningful human oversight. The allure of automation must be balanced against the legal, security, and reputational risks inherent in delegating agency to systems that remain imperfectly understood and inadequately secured.

The regulatory landscape is evolving rapidly. What remains permissible experimentation today may become regulated activity tomorrow. English law will need to evolve, either by extending existing doctrines through judicial development or creating new statutory frameworks specifically for AI agents. Prudent organisations will adopt a compliance forward approach, treating agentic AI deployment as a material risk requiring board-level attention, not merely a productivity tool to be rolled out by enthusiastic IT departments.

The OpenClaw phenomenon offers a preview of the future. Whether that future is productive or risky will depend largely on how seriously we take the legal and security challenges today. Given English law's historical adaptability, a hybrid approach combining judicial innovation with targeted legislative intervention appears most likely to succeed. But the time to think carefully about governance is now; before your agent makes a decision that lands you in court.