The year 2023 marked a turning point for Artificial Intelligence (AI) governance. As AI technologies rapidly evolve, their profound impact on society and the economy is creating a growing need for a coordinated approach to governance, a challenge that can only be addressed globally. 2023 was a testament to that need, with major strides taken internationally to address the promises and perils of AI. In this article, I provide a timeline of the main events that shaped AI policy during the past year, chronicling the pivotal developments that have shaped the global discourse on AI regulation and its ethical implications.
Beyond global milestones such as the groundbreaking approval of the EU’s AI Act and the creation of the UN’s Advisory Body on AI, individual nations and organizations stepped up their own AI policies, and calls for more responsible and inclusive AI were heard from many directions. The timeline below illustrates how the world is coming to grips with the need to regulate the risks, and harness the opportunities, presented by AI. It is a global challenge, one that demands collective effort and vigilance. Each entry in this timeline represents a significant moment in the journey towards responsible AI governance. Together, they reflect a world in rapid transition, grappling with the complexities of AI and striving to harness its potential responsibly and ethically.
Looking forward, we can expect further recognition of the global impact of AI. Rapid advancements in AI technology, as evidenced by the releases of GPT-4 and Gemini, have instilled a sense of urgency among policymakers and industry leaders to establish a regulatory framework.
However, in my opinion this does not imply that a uniform, global approach to AI regulation is needed, or even desirable. It is important to understand and accept that different regions, countries and sectors have unique backgrounds, cultures and needs. Nevertheless, there is an increased urgency for collaboration across the globe, in order to support shared understanding, build bridges and ensure that AI’s benefits and opportunities are accessible to all and inclusive of differences. The variety of regulations and guidelines proposed or enacted worldwide shows that regions tailor their AI governance strategies to their unique cultural, ethical, and socio-economic contexts, but initiatives such as the G7 Hiroshima AI Process, UNESCO’s urgent call for AI rules, and the UN’s efforts to define a global governance approach highlight the importance of international dialogue for effective AI policies.
Finally, regulation and governance must be understood and implemented not as something that hinders innovation, but as a stepping stone that encourages it. Governance measures not only steer AI developments towards responsible and beneficial directions, but also open up new research and innovation opportunities to define, develop and implement the methods, tools and frameworks needed to support governance.
As AI continues to evolve, so will the challenges and opportunities it presents, which requires governance frameworks to be adaptable and forward-thinking. Ensuring that AI governance is inclusive, considering the needs and voices of diverse populations, will be crucial for equitable and sustainable AI development. This calls for an ongoing, open and transparent dialogue among governments, industries, academia, and civil society to navigate the complex landscape of AI governance effectively. Such efforts will undoubtedly shape the trajectory of AI development and its societal integration, emphasizing the need for thoughtful, inclusive, and dynamic governance strategies.
An annotated timeline of AI (policy) events in 2023
January
- 26: In the US, the National Institute of Standards and Technology (NIST) releases the AI Risk Management Framework. This framework, developed in collaboration with the private and public sectors, is intended for voluntary use and aims to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
March
- 13: Canada publishes the Artificial Intelligence and Data Act (AIDA) “Companion Document”, which sets the foundation for the responsible design, development and deployment of AI systems that impact the lives of Canadians.
- 14: OpenAI releases GPT-4. This multimodal model, which accepts image and text inputs and emits text outputs, is their most advanced release to date, and they claim it ‘exhibits human-level performance on various professional and academic benchmarks’.
- 22: The Future of Life Institute (FLI) launches an open letter asking for a pause on “giant AI experiments”. The letter attracted much attention and has been signed by over 30,000 people to date, but it was also contested by many who saw it as an overly alarmist distraction from the current and very real risks of AI use and development. It generated several reactions, such as ours.
- 26: The UK government publishes a White Paper on AI Regulation setting out the country’s commitment to a pro-innovation approach to AI regulation.
- 30: UNESCO issues an urgent call to its 193 member states to “implement AI rules NOW”. The call asks member states to fully implement its Recommendation on the Ethics of Artificial Intelligence, as it provides all the necessary safeguards against the concerns raised in FLI’s open letter.
- 31: Italy bans ChatGPT: According to the Italian Data Protection Authority, OpenAI had no legal basis under the GDPR to justify the collection and storage of users’ personal data used to train the site’s algorithms. The ban was lifted in late April.
April
- 11: China issues strict draft regulations on AI systems. These measures, whose draft was open for public comment until 10 May, aim to govern the provision of generative AI services in China.
- 13: US Senate leader Chuck Schumer announces plans to legislate AI. The proposal focuses on building a flexible and resilient AI policy framework across the federal government that can adapt as the technology continues to advance. It aims to foster innovation and continued US leadership in the development of AI, while enhancing security, accountability, and transparency.
- 17: EU legislators call for an emergency global summit. In reaction to FLI’s ‘pause letter’, a group of members of the European Parliament urged world leaders to hold a summit to find ways to control the development of advanced artificial intelligence (AI) systems such as ChatGPT. Comment: such a summit has, as yet, not materialised, having been superseded by the UK’s AI Safety Summit in October.
- 26-28: The World Economic Forum (WEF) holds a summit on Responsible AI that resulted in the Presidio Recommendations, which guide technical experts and policy-makers on the responsible development and governance of generative AI systems.
May
- 4: The US White House calls an emergency meeting of leading AI CEOs, during which the president stressed the need to mitigate both the current and potential risks AI poses to individuals, society, and national security, in order to realize the benefits that might come from advances in AI.
- 12: Brazil’s government proposes an AI bill. The proposal, which follows a risk-based approach similar to the EU’s AI Act, aims to create rules for the operation of AI systems in Brazil, establishes rights for people affected by their operation, and provides for penalties for violations, as well as information regarding the supervising body. Correction by Dora Kaufman: there isn’t yet a proposal for an AI bill; rather, the Brazilian Senate constituted a commission of Senators to analyze proposal PL 2338.
- 16: OpenAI calls for governments to enact AI safety regulation.
- 25: Leading European AI researchers meet at the European Parliament to discuss the role of Europe in AI development and fundamental research, stressing the need for sovereignty.
- 30: The G7 ‘Hiroshima AI Process’ on global AI governance is launched. It aims to promote safe, secure, and trustworthy AI worldwide and to provide voluntary guidance for organizations developing the most advanced digital technologies, including the most advanced foundation models and generative AI systems, to ensure that these are in line with “our shared democratic values”.
- 30: The Center for AI Safety releases an open letter taking a less alarmist and futuristic view than the FLI open letter, with the very short message: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
June
- 8: (entry suggested by Irakli Beridze) INTERPOL and UNICRI release a practical AI governance instrument for law enforcement. The instrument is now implemented in 15 countries across the world.
- 14: The European Parliament adopts its position on the AI Act with an overwhelming majority, paving the way for the interinstitutional (trilogue) negotiations between the European Parliament, the European Commission and the EU Council of Ministers, which represents European governments.
August
- 15: China’s “Interim Measures” on Generative AI enter into force. The measures, which reflect feedback from different stakeholders, are issued by the Cyberspace Administration of China (CAC) together with six other central government regulators. Formulated in accordance with existing laws and regulations, they govern the provision of generative AI services, such as ChatGPT, to the public in mainland China. The measures aim to foster a healthy environment within China that allows for the responsible use of generative artificial intelligence without causing undue harm to national security, the social and public interest, or the legitimate rights and interests of citizens, legal persons and organizations.
September
- 27: Canada releases a voluntary code of conduct specific to generative AI. The code goes beyond risk mitigation, encouraging its signatories to promote and build a robust and responsible AI ecosystem in Canada. It provides a set of measures that support upcoming regulation pursuant to AIDA, emphasizing the responsible development and management of the operations of generative AI systems.
October
- 27: The UN launches a High-Level Advisory Body on AI to undertake analysis and advance recommendations for the international governance of AI.
- 30: President Biden issues the AI Executive Order. The Executive Order has the goal of promoting the “safe, secure, and trustworthy development and use of AI” and establishes a pivotal role for NIST in the development of guidelines and best practices.
- 30: G7 leaders agree, under the Hiroshima AI Process, on international guiding principles on Artificial Intelligence (AI) and a voluntary Code of Conduct for AI developers.
November
- 1: The UK holds the AI Safety Summit, aiming to cement the UK’s position as a world leader in AI safety. The summit was hailed as a diplomatic breakthrough after it produced an international declaration (the Bletchley Declaration on AI Safety) to address the risks of the technology, as well as a multilateral agreement to test advanced AI models. The event was, however, heavily criticised by, among others, civil society organisations for mostly including tech executives and government officials.
- 6: (entry proposed by Clara Lin Hawking) Release of 01.AI’s open-source large language model Yi-34B, trained from scratch and fine-tuned for various chat use cases.
- 13-15: WEF’s AI governance summit: This initiative, following WEF’s meeting in April, focused on responsible generative AI, bringing together influential regional voices and global stakeholders to harness the benefits of generative AI systems and technologies while ensuring equitable and sustainable global impacts.
- 15: In the US, a bipartisan group of senators introduces the Artificial Intelligence Research, Innovation, and Accountability Act (AIRIA). The AIRIA is the latest in the US’s efforts to establish a safe and innovation-friendly environment for the development and deployment of AI, with the dual aim of encouraging innovation while establishing a framework for accountability.
December
- 6: Google launches Gemini: Gemini is Google’s latest multimodal (text, image and video) large language model, and their answer to OpenAI’s GPT-4. Gemini, they claim, is their ‘most flexible model yet — able to efficiently run on everything from data centers to mobile devices’.
- 9: Europe’s trilogue agrees on the AI Act: a significant step in AI regulation and a world first, the agreement aims not only to enhance governance and effective enforcement of existing law on fundamental rights and safety, but also to promote investment and innovation in AI within the EU, and to facilitate the development of a single market for AI applications.
- 21: The UN Advisory Body on AI releases draft recommendations: this interim report calls for anchoring AI in international law, human rights, and the Sustainable Development Goals. It also identifies critical functions and principles for AI governance. In 2024, the UN AI Body will explore options for institutionalizing these functions through a program of consultations with diverse stakeholders worldwide.

