The future of Artificial Intelligence (AI) lies not just in its technical advancements but in its responsible governance, underpinned by human-centered principles and policies. As such, AI policy research is an urgently needed area of focus: not just AI research, not just policy research, but a deliberate intersection of the two. This realization was at the core of the recent AI Policy Summit, a collaborative platform bringing together researchers from around the world, co-organized by MILA and the AI Policy Lab that I have the privilege to direct. This was not just an event but a pivotal step toward shaping the trajectory of AI policy and governance. As AI technologies increasingly permeate every aspect of society, their potential to drive progress must be balanced with safeguards to ensure they align with human-centered values. This balance cannot be achieved by technical or legislative approaches alone; it demands the collaborative efforts of researchers, policymakers, and civil society.

The AI Policy Summit provided a unique platform for representatives from independent research organizations, spanning academia and civil society across diverse national contexts, to engage in an open, informal environment that enabled a deep and spirited exchange of ideas. A panel discussion with policymakers from multiple countries added depth and diversity to the discussions. Their contributions underscored the varying challenges and opportunities faced across different governance frameworks. Policymakers from Sweden, Tanzania, Canada, the Netherlands, and Portugal shared insights into their regional experiences with AI regulation, highlighting both shared objectives—such as transparency and accountability—and the unique cultural and legislative nuances that influence AI governance.
I was also especially encouraged by Marietje Schaake’s keynote, which highlighted the critical role of researchers in engaging with policymakers through building lasting relationships, providing actionable insights like policy briefs, and actively contributing to both the creation and implementation of legislation, all while acknowledging the challenges both sides face.

Over the two days, exchanges among participants emphasized the critical need for localized approaches to AI policy that are informed by global best practices. The summit fostered an environment where academic and civil society researchers could present evidence-based findings while gaining a firsthand understanding of the practical realities policymakers face. This interaction not only enriched the dialogue but also set the foundation for future collaborations aimed at shaping inclusive, effective, and context-sensitive AI policies.
Why AI Policy Research Matters
The development and governance of AI are complex, interconnected challenges that demand a dedicated focus on AI policy research, a field distinct from, yet integrative of, both AI technology research and policy research. This emerging discipline addresses gaps that neither AI research nor policy alone can resolve, ensuring that governance frameworks are not only informed by cutting-edge science but also aligned with societal needs and values. While AI research focuses on advancing technology and policy research on governance frameworks, neither can address the multifaceted impacts of AI in isolation:
- AI advancement without governance: Left unchecked, rapid AI innovation can deepen societal inequalities, exacerbate environmental damage, and consolidate power among a few, undermining public trust and equitable access.
- Policy without AI research: Policies uninformed by empirical evidence or understanding of AI’s dynamic landscape risk becoming outdated, excessively restrictive, or misaligned with technological realities, stifling innovation and public benefits.
AI policy research as a foundation for Responsible AI
Responsible AI begins well before algorithms are written or systems deployed. It starts with fundamental questions: What problems are we solving? For whom? With what consequences? What are the most suitable solutions, and is AI among them? Addressing these questions requires a nuanced interplay between policy and research. The summit highlighted the growing need for this alignment to ensure that AI technologies foster societal progress, uphold human rights, and contribute to global sustainability goals.
At its heart, AI policy research navigates complex trade-offs. Fostering innovation while mitigating societal inequities requires a framework that ensures AI benefits are equitably distributed, particularly to those most vulnerable to its disruptions. AI policy research creates a vital bridge between research and governance by focusing on actionable, evidence-based policymaking. It emphasizes transparency, accountability, and sustainability while ensuring equitable outcomes. By addressing issues such as inclusivity, environmental trade-offs, and regulatory foresight, AI policy research supports:
- Proactive governance: Anticipating the implications of AI advancements demands foresight-driven policies that address potential risks and societal impacts before they arise. By proactively identifying challenges—such as biases, security vulnerabilities, or unintended social consequences—governance frameworks can mitigate harm and establish safeguards that evolve alongside technological innovation.
- Cross-sector collaboration: Effective AI policy requires a united effort from academia, industry, and government. Collaborative frameworks enable the sharing of expertise, aligning research insights with regulatory needs and industrial priorities. This synergy fosters the creation of policies that are both practical and evidence-based, ensuring comprehensive oversight and adaptability.
- Responsible innovation: Encouraging the use of AI only when its benefits clearly outweigh its costs and risks and its deployment aligns with ethical standards. Responsible innovation emphasizes ethical design, sustainability, and equitable access, ensuring that AI systems contribute to societal well-being without exacerbating inequalities or environmental harm.
The AI Policy Summit’s Contribution
The recent AI Policy Summit brought together global policymakers, academic researchers, and civil society actors to highlight this integrative approach. Discussions focused on immediate and long-term goals, such as fostering global accountability standards, developing foresight mechanisms, and crafting practical tools for inclusive governance. By emphasizing a shared roadmap and cross-sectoral expertise, the summit illuminated how AI policy research can drive actionable solutions for the responsible development of AI technologies. This collective effort underscores the urgency of AI policy research as a means to guide innovation and governance toward equitable, sustainable outcomes. It is a field poised not only to mitigate the risks of AI but to maximize its potential as a force for societal good. Building on insights from the summit, several ideas were proposed to solidify the role of AI policy research, including:
- Establish Visiting AI Policy Fellowships: These programs, hosted at different research institutes, would connect researchers with policymakers, fostering mutual understanding and collaboration.
- Launch an AI Policy Research Network: A global platform to share best practices, insights, and resources for evidence-based policymaking.
- Develop AI Policy Briefs: Translating research findings into actionable insights tailored for policymakers is essential for informed decision-making.
- Focus on Education and Capacity Building: Initiatives like student exchanges and Erasmus programs can cultivate a new generation of leaders at the intersection of AI and governance.

A Shared Responsibility
AI policy research is not just a necessity; it is an opportunity to ensure that AI serves humanity rather than shaping societies in ways that exacerbate inequities or environmental harm. By combining the rigor of scientific inquiry with the pragmatism of governance, this field provides a pathway to align AI innovation with ethical, human-centered values.
The AI Policy Summit marked the beginning of a critical journey, one that bridges the gap between technological innovation and governance to ensure AI serves humanity responsibly. This initiative is more than a conference or a network; it is a call to action for researchers, policymakers, and civil society to collaborate in shaping an equitable and sustainable AI future.
Looking ahead, the true measure of success will be our ability to foster lasting impact. This includes creating actionable frameworks, building trust through transparency and accountability, and developing policy instruments that ensure the benefits of AI are accessible to all. As AI continues to evolve, our collective efforts must remain grounded in shared principles of fairness, sustainability, and human-centered development.
The challenges are immense, but so too is our collective potential. By uniting diverse perspectives and expertise, we can navigate the complexities of AI with integrity and purpose. Together, we have the opportunity not only to mitigate risks but to redefine AI as a tool for societal good—one that reflects the values and aspirations of all. The journey is just beginning, but the urgency is clear. I invite you all to join us in #InformAIpolicy, a joint commitment to building a future where AI contributes to societal progress, respects the planet, and ensures equity for all.

