OpenClaw and agentic AI: what it means for your business
09 February 2026
By Amardeep Gill and Matt Whelan
Welcome to the latest edition of Trowers Tech News.
This month, we examine pivotal developments in AI regulation, platform accountability and emerging technology risks. From Ofcom's investigation into X's Grok AI and the EU AI Act's critical compliance phase, to the ICO's new guidance on agentic AI, the regulatory and legal landscape is evolving at pace.
We begin with the Ofcom investigation into X following the use of its Grok AI tool to generate non-consensual intimate images at scale. The UK government has confirmed it will criminalise nudification apps and bring into force long-delayed offences, with international regulators taking parallel action under the Digital Services Act and other frameworks.
Next, we examine what to expect in 2026 for the EU AI Act as organisations move from planning to practical compliance. The European Commission's Digital Omnibus proposals are reshaping timelines for high-risk AI systems, while self-assessment requirements place legal accountability squarely on providers and deployers.
We also analyse the ICO's January 2026 Tech Futures report on agentic AI, which highlights both the transformative potential and novel data protection risks of autonomous systems that operate with minimal human oversight. The report underscores that organisations remain fully responsible for compliance when deploying such technologies.
Top tech trends: AI regulation
Victoria Robertson, Partner, and Anna Horsthuis, Senior Associate
UK and international regulators are intensifying scrutiny of X after its AI tool, Grok, was used to produce non-consensual sexually explicit images at scale. An “edit image” feature allowed users to alter images posted by others, with requests for “nudified” images reportedly made at least once per minute by late 2025.
Ofcom has opened a formal investigation under the Online Safety Act 2023 to determine whether X failed to prevent or respond to illegal content. Penalties can include fines of up to 10% of global turnover or UK access restrictions. The UK government has confirmed it will criminalise “nudification” apps and bring into force offences making it illegal to create or request non-consensual intimate images. These offences will also become priority offences under the Online Safety Act.
Internationally, France has referred X to its media regulator under the EU Digital Services Act, the European Commission is examining CSAM allegations, Australia is tightening social media access for under-15s, and authorities in India and Malaysia have demanded explanations. Together, these actions highlight the increasing regulatory pressure on AI-enabled platforms and the limits of current safeguards.
The EU AI Act enters a critical phase in 2026 as organisations move from planning to practical compliance. The Act, in force since August 2024 and phased in from February 2025, is being shaped by the European Commission’s Digital Omnibus proposals, which aim to simplify rules while maintaining oversight of AI risks.
Key changes for 2026 relate to high-risk AI systems, which must comply six months after the relevant standards are published, subject to a final backstop of December 2027. High-risk systems embedded in regulated products have a 12-month timeline, with a backstop in August 2028. Certain transparency rules for AI-generated content have a grace period until 2 February 2027.
Self-assessment replaces classification by national authorities, placing legal accountability directly on organisations. Providers and deployers should adopt quality management systems under Article 17. The European AI Office will supervise general-purpose AI systems and AI integrated into very large platforms.
Other developments intersect with IP, data privacy, litigation and competition law. The UK is progressing AI copyright consultations, data privacy authorities are updating guidance on automated decision-making, and competition authorities are investigating algorithmic pricing and non-traditional mergers.
Penalties for non-compliance remain high: up to €35 million or 7% of global turnover, whichever is higher, for prohibited practices. 2026 represents a strategic window for organisations to align with evolving EU requirements, avoid enforcement risk and prepare for broader international AI regulation trends.
The ICO’s January 2026 Tech Futures report examines agentic AI: autonomous systems that combine generative AI with tools and decision-making capabilities to perform open-ended tasks with minimal human oversight. Unlike traditional AI, agentic systems can act across multiple platforms, learn from experience and execute complex workflows such as booking travel, managing HR processes or handling customer transactions. The report highlights applications across commerce, government services, cybersecurity and workplace productivity.
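For readers less familiar with the pattern, a minimal sketch in Python may help. Everything here is hypothetical: the tool names, the scripted stand-in where a real generative model would sit, and the step limit are our own illustrative assumptions, not any vendor's implementation. The core loop, in which a model proposes actions and the host system executes them as tool calls until the task is complete, is what distinguishes agentic from traditional generative AI.

```python
# Illustrative sketch only: a minimal agentic loop. All names are
# hypothetical; a scripted iterator stands in for a real generative model.
TOOLS = {
    "search_flights": lambda args: f"3 flights found for {args['route']}",
    "book_flight": lambda args: f"booked flight {args['flight_id']}",
}

SCRIPTED_STEPS = iter([
    {"tool": "search_flights", "args": {"route": "LHR-JFK"}},
    {"tool": "book_flight", "args": {"flight_id": "BA117"}},
    {"done": True},
])

def call_model(history):
    # Stand-in for a call to a generative model that plans the next action.
    return next(SCRIPTED_STEPS)

def run_agent(task, max_steps=10):
    history = [("user", task)]
    for _ in range(max_steps):  # a hard step limit bounds the agent's autonomy
        step = call_model(history)
        if step.get("done"):
            break
        result = TOOLS[step["tool"]](step["args"])  # the agent acts via tools
        history.append((step["tool"], result))
    return history

print(run_agent("Book me a flight from London to New York"))
```

Note that the human appears only at the start: once the task is set, the loop plans and acts on its own, which is the property driving the data protection risks discussed below.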
Agentic AI introduces novel data protection risks. Accountability can be unclear in multi-provider systems, complicating controller and processor determinations. Open-ended tasks create pressure to define processing purposes broadly, while agents may access data beyond what is necessary. The report also notes risks of error propagation across systems and increased cybersecurity vulnerabilities.
The ICO stresses that organisations remain fully responsible for compliance when using agentic AI, and that mitigation depends on design choices including access controls, logging, audit trails and monitoring. The ICO will update its guidance on automated decision-making under the Data (Use and Access) Act 2025 and is developing statutory AI codes.
Businesses should review procurement and data protection impact assessments, map current automated decisions and embed privacy-by-design principles.
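By way of illustration, the sketch below shows one way the safeguards the ICO highlights, per-agent access controls plus an audit trail of every attempted action, might be wired into an agent's tool-execution layer. This is a minimal sketch under our own assumptions: the permission model, agent identifier and log fields are hypothetical and are not drawn from ICO guidance.

```python
# Illustrative sketch only: access-controlled, audit-logged tool execution
# for an AI agent. Agent and tool names are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent.audit")

# Access control: each agent holds an explicit allow-list of tools.
PERMISSIONS = {"hr-assistant": {"read_calendar", "book_meeting"}}

def execute_tool(agent_id, tool_name, args, tool_fn):
    allowed = tool_name in PERMISSIONS.get(agent_id, set())
    # Audit trail: every attempt is logged, whether or not it is permitted.
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool_name,
        "args": args,
        "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"{agent_id} may not call {tool_name}")
    return tool_fn(**args)

# A permitted call succeeds and leaves a log entry behind.
print(execute_tool(
    "hr-assistant", "book_meeting",
    {"attendee": "j.smith", "slot": "2026-02-10T10:00"},
    lambda attendee, slot: f"meeting booked with {attendee} at {slot}",
))
```

Records like these are what allow an organisation to reconstruct, after the fact, what an agent did and why a request was refused: the kind of accountability evidence the report suggests regulators will expect.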
UK invests £25mn in Kraken software arm to support London listing.
Netflix cites low market share in $83bn Warner Bros Discovery acquisition review.
Gates Foundation and OpenAI invest $50mn in African healthcare AI deployment.
UK to enforce ban on non-consensual AI-generated intimate images and X could lose right to self-regulate.
OpenAI launches adverts on ChatGPT free tier.
Google appeals landmark antitrust verdict over search monopoly ruling.
Snap settles social media addiction lawsuit; Meta, TikTok and YouTube face trial.
US investigates medical and industrial imports under national security provisions.
FCA opens cryptoasset licensing regime authorisation gateway in September 2026.
Treasury committee demands FCA AI guidance by end of 2026.
Treasury committee warns AI regulation gaps risk financial stability.
The UK Medicines and Healthcare products Regulatory Agency (MHRA) has published new guidance to help users safely choose and understand digital mental health apps and technologies.
Research by Morgan Stanley shows the UK has suffered the biggest AI-driven job losses among major economies. Despite productivity gains, there are rising concerns about growing unemployment and inequality.
The UK government is bringing leading AI researchers into public service roles to modernise essential systems and support services like transport and national security, backed by partnerships and investment in AI tools.