AI's power and capabilities will continue to grow in 2025, with new models able to act autonomously, replicate themselves, and further blur the boundaries between humans and machines. But as most governments opt for lighter-touch regulation and international cooperation falters, the risks and collateral damage from unbound AI will multiply.
In last year's Top Risk #4: Ungoverned AI, we cautioned that global efforts to establish AI guardrails would prove insufficient owing to politics, inertia, defection, and the pace of technological change. Some notable AI governance initiatives did come to fruition in 2024—including from the European Union, the Council of Europe, and the United Nations. But without strong, sustained buy-in from governments and tech companies, these will not be enough to keep pace with technological advances.