On 3 February 2026, the Second International AI Safety Report was published as a global scientific assessment of advanced, general-purpose AI.
Building on the mandate of the 2023 AI Safety Summit to provide a shared evidence base for decision-makers, the report was led by Turing Award winner Professor Yoshua Bengio and written by independent experts, with input from over 100 AI specialists across more than 30 countries and international organisations. It maintains editorial independence and does not make specific policy recommendations.
A narrower focus on emerging risks
The report focuses on 'emerging risks' from cutting-edge AI systems, setting out specific scenarios and forecasts to help decision-makers understand possible outcomes. It draws on new research conducted by the Organisation for Economic Co-operation and Development (OECD) and the Forecasting Research Institute.
The report notes significant capability advances since the first edition, especially in mathematics, coding and autonomous operation. These advances are driven by new training methods rather than simply by making models larger. Even so, these systems can excel at difficult tasks while still failing at simple ones.
A key development is the rise of 'reasoning systems' and AI agents that can operate more independently using tools like memory, computer use and web browsing. The complexity of tasks these agents can complete is estimated to double roughly every seven months, which at that rate would amount to roughly a tenfold increase every two years.
Four scenarios for 2030
Drawing on analysis developed by the OECD and informed by expert judgment and evidence, the report presents four plausible scenarios for AI progress by 2030.
- Progress stalls, where capabilities plateau and reliability issues persist
- Progress slows, where AI systems become useful assistants with incremental gains but struggle to overcome limitations in learning and agency
- Progress continues rapidly, where AI systems become expert collaborators capable of performing professional tasks with high autonomy
- Progress accelerates towards human-level capability, where AI systems match or surpass human cognitive performance in remote work tasks and can autonomously pursue broad strategic goals
The analysis suggests that by 2030, AI progress could plausibly range from stagnation to rapid improvement exceeding human cognitive performance.
Three categories of risk
The report organises emerging risks into three categories:
- Misuse (including deepfakes, influence campaigns, cyber threats and weapons development)
- Malfunctions (reliability failures and AI 'hallucinations')
- Systemic risks (impacts on jobs and risks to human decision-making)
Managing risks in an evidence dilemma
The report describes AI risk management as early-stage, noting that AI capabilities are advancing faster than evidence about harms and effective safeguards. While 12 companies published or updated AI Safety Frameworks in 2025, the report cautions that their real-world effectiveness remains uncertain due to limited external review and uneven compliance with voluntary commitments.
The report emphasises a 'defence-in-depth' approach combining multiple safeguards, such as safety evaluations, technical controls and 'if-then' commitments that trigger additional protections when capability thresholds are reached. It highlights persistent gaps, particularly for open-weight models, where safeguards are easier to remove and monitoring is harder. It also warns that even closed models could become accessible through theft or leakage.
Building resilience
Beyond developer safeguards, the report calls for broader resilience measures, such as incident response plans for cyberattacks, public awareness of AI-generated media, and human oversight to maintain critical functions when AI systems fail. This recognises that technical controls can fail and that some risks only emerge through real-world use.
A foundation for informed decision-making
The report concludes that AI capabilities are improving faster than expected, evidence on risks has grown, and risk management is improving but remains insufficient. The overall direction of AI development remains uncertain.
This report provides a shared evidence base for policymakers and organisations navigating this rapidly evolving technology. For legal professionals, it offers valuable context for advising clients on AI governance, compliance and risk management.