AI Policy Lab Day 2025: Highlights and Reflections (Recording Available)

Date & Location: November 19, 2025, Umeå University, Västerbotten, Sweden

The AI Policy Lab Day 2025 was rich in insight and exchange.

Sennay Ghebreab delivered a keynote that grounded Question Zero in lived experience, reminding us that the decision to use, or not to use, AI is never a static checkpoint. He urged us to think in terms of Question Infinity: a continuous, reflective process in which risks and opportunities are held in tension rather than framed as opposites.

Daniel McQuillan’s talk added a powerful systemic lens. By framing contemporary AI as a product of deeper structural failures, he challenged us to confront the material and social realities beneath technological optimism. His proposal of decomputing, a combination of degrowth, conviviality, and care, called us to imagine responses that prioritise collective well-being over speed or scale.

Our researchers’ posters reflected a striking level of maturity. Their work is rigorous, thoughtful, and already influencing wider debates on responsible AI. It was encouraging to see how confidently they engaged with participants and how deeply their projects connected to real societal needs (Rachele Carli, Petter Ericson, Jason Tucker, Tatjana Titareva, Themis-Dimitra Xanthopoulou, and Mattias Brännström).

Throughout the afternoon, participants brought curiosity, openness, and an eagerness to engage in discussions and informal exchanges between sessions.

The evening screening of Humans in the Loop added an emotional and narrative dimension that tied the day together. The dramatized story, rooted in the real experiences of data workers in India, wove together the daily realities of annotation labour with local culture, personal aspiration, and the power of lived experience. It captured the invisibility of this global workforce while honouring their agency and resilience. The discussion that followed made clear how crucial these perspectives are for any serious conversation on responsible AI.

A full day of insight, critical dialogue, and shared commitment.

Recordings

Slides


Responsible AI Self-assessment Workshop: Start with Question Zero


Date & Location: August 27, 2025, Umeå University, Västerbotten, Sweden

On 27 August 2025, more than 100 participants joined the AI Policy Lab workshop Responsible AI Self-Assessment: Start with Question Zero at Umeå University and online. Together, we tested and debated the Responsible AI Self-Assessment Tool, designed to help organisations pause, reflect, and ask why before moving into AI adoption.

You can explore the current version of the tool here:
Responsible AI Self-Assessment Tool (PDF)

Highlights from the discussions

The workshop brought together voices from academia, industry and the public sector, sparking vibrant conversations around responsible AI. Participants reflected on questions such as:

  • Should a clear AI clarification step be required before entering Question Zero (“Why do you plan to adopt an AI system?”)?
  • Should organisations complete a process pre-assessment before starting with AI?
  • What kind of work should remain human-only?
  • How can transparency and ethics be maintained when deciding between automation and augmentation?
  • Why might other non-AI solutions not solve the problem at hand?

We are deeply grateful to everyone who joined, shared perspectives and challenged assumptions. Your input is vital to shaping a practical, responsible approach to AI adoption.

Next steps

The tool is still a work in progress. Feedback from this workshop will be incorporated directly into the next version of the tool. Future workshops will continue to stress-test and evolve it, ensuring it meets the needs of diverse organisations working with AI.

As Virginia Dignum, Director of the AI Policy Lab, put it:

“Responsible AI isn’t AI-first, it’s people-first. It starts by asking why, not rushing to deploy.”

Interested in taking part in upcoming sessions? Keep an eye on our website and LinkedIn page for updates.

Global AI Policy Research Network Launched at UN IGF 2025 (Recording available)

Workshop #288: An AI Policy Research Roadmap for Evidence-Based AI Policy
Date & Location: June 26, 2025, Oslo, Norway

At the UN’s Internet Governance Forum (IGF) 2025 in Oslo, Norway, the AI Policy Lab @Umeå University (Virginia Dignum, Jason Tucker, Tatjana Titareva and colleagues) and Mila – Quebec Artificial Intelligence Institute (Isadora Hellegren Létourneau and colleagues), in cooperation with partners including Alex Moltzau, Eltjo Poort, Neema K. Lugangira, and many others, launched the Global AI Policy Research Network (GlobAIPol). The network invites diverse stakeholders to share practical knowledge that supports ethical, transparent, and evidence-based practices for shaping inclusive and trustworthy AI policies. The session also encouraged global stakeholders to endorse the Roadmap for AI Policy Research.

Explore GlobAIPol
Endorse the Roadmap for AI Policy Research

Three key takeaways:

  • AI regulation requires agile, evidence-based approaches – technological policymaking is not set in stone.
  • Multiple complementary frameworks serve diverse regional needs better than a universal governance approach.
  • Effective AI policy is not only about technology – it’s about equity, inclusion, and broader societal impacts.


The official session summary is now available:

Read the official session summary on the IGF website (tab “Report”)
Watch the full session recording

Key insights from our session:

“AI does not happen to us! AI is designed by humans. We make the choices.” – Professor Virginia Dignum’s keynote reminded us that before asking how to implement AI, we must ask Question Zero: Is AI the best option here? We need to shift from fragmented, reactive policies to coordinated, evidence-based strategies rooted in ethics and justice.

The interventions and discussion revealed critical lessons from global perspectives:

The EU is demonstrating promising approaches with the European AI Office expanding from 97 to 140 staff by the end of 2025, supporting regulatory sandboxes and international collaboration including a €5 million generative AI initiative with Africa.

In healthcare, we must move beyond treating AI as a “magic pill” and build upon existing regulatory frameworks – just as we trust paracetamol today because of rigorous oversight developed several decades ago.

Well-designed regulation stimulates innovation rather than slows it down. Different countries need diverse legislative approaches harmonised with local values, not a one-size-fits-all global AI governance structure.

The time to act is now. AI is shaping our collective future, and how we act today will define who benefits, who is heard, and who is left behind.

AI Technologies in Public Service: A Workshop for Identifying Needs, Challenges, and Solutions


Date & Location: April 9, 2025, Umeå University, Västerbotten, Sweden

The workshop was organised by the AI Policy Lab in collaboration with the AI Technologies for Sustainable Public Service Co-creation (AICOSERV) project members.

Overview

The workshop brought together more than 50 stakeholders from the public and private sectors, as well as academia, to explore the relationship between barriers to AI adoption in public services and the skills and expertise required to overcome them.

A central theme of the workshop was the “question zero”, the fundamental inquiry of whether AI should be used at all in a given context. As AI technologies continue to advance and expand into complex public sector tasks, the assumption that AI is always the right or necessary solution must be critically examined. The workshop challenged participants to consider not only how AI can be implemented, but to question whether it should be, emphasizing that responsible adoption begins with questioning the appropriateness and desirability of AI in specific domains.

This foundational concern set the tone for broader discussions about trust, governance, transparency, and the skillsets required to navigate the opportunities and risks of AI in public service.

Keynote Address

Professor Virginia Dignum, Director of the AI Policy Lab, opened the workshop with a keynote titled “Governing AI: Why, What, How?”

She addressed the societal and governance implications of AI, focusing on the need to critically evaluate when and how AI should be integrated into public service contexts. Her talk stressed the importance of not overlooking ethical, legal, and operational challenges in the rush to adopt AI.

Regional Case Study: AI in Västerbotten

Across the wide range of public services open to AI adoption, a recurring set of challenges emerges. Whether deploying AI-driven diagnostic tools in healthcare or implementing predictive analytics within smart city infrastructures, public and private sector actors, and community stakeholders face diverse barriers. Henry Lopez-Vega, fellow at the AI Policy Lab, presented on the challenges of AI adoption in the Västerbotten region in his session titled “What are the challenges with AI (in Västerbotten)?”

His research identified three core barriers to building a responsible AI ecosystem:

  • Technological infrastructure and processes within organisations
  • Organisational culture and resistance to change
  • Lack of clarity around AI governance and ownership

Group Discussions: Skills and Stakeholder Engagement

In the second half of the workshop, participants engaged in group discussions focusing on organisational challenges, key stakeholders, and barriers to implementation. Each group then mapped the skills and knowledge needed for responsible AI adoption in their contexts.

For example, in the case of AI-supported recruitment processes, participants identified several critical barriers:

  • Lack of transparency in AI decision-making
  • Biases in training data
  • Limited legal and ethical guidelines for automated hiring

To address these issues, participants emphasized the need for professionals with a blend of competences, including:

  • Knowledge of data protection and anti-discrimination legislation
  • Skills in evaluating and auditing AI systems
  • Awareness of ethical considerations in algorithmic decision-making

Findings and Framework

The increasing efforts to implement AI across various public sector domains have led to a critical question: what types of professionals, equipped with what specific skills and competences, should lead the integration of responsible AI? Defining the essential set of skills, knowledge, and professional competences required for the effective and ethical deployment of AI in both public and private sector services becomes a key priority.

A key outcome of the workshop was the identification of a recurring set of challenges affecting AI adoption in public services. These include:

  • Low levels of trust
  • Lack of transparency
  • Unclear ownership and responsibility
  • Insufficient stakeholder awareness
  • Limited AI literacy and governance skills

To address these challenges, we propose the conceptual framework depicted in Figure 1. This framework highlights the urgent need to cultivate professionals who combine technical expertise, ethical sensitivity, and domain-specific knowledge to lead responsible AI integration. It maps the interconnected layers that influence the deployment and responsible use of AI technologies in public services, including:

  • Contextual Challenges – such as low trust, resistance to change, and limited organisational readiness
  • Structural Barriers – including unclear project ownership, inadequate governance frameworks, and insufficient infrastructure
  • Skill and Knowledge Gaps – highlighting the lack of AI literacy, ethical awareness, and domain-specific competences
  • Stakeholder Roles – outlining the importance of identifying and engaging relevant actors (e.g. policymakers, IT professionals, legal advisors, and citizens) throughout the AI lifecycle

This framework is intended to guide structured reflection and planning around AI deployment, helping ensure that technologies are not only functional but also trustworthy, inclusive, and aligned with public values. As such, it can serve as a practical tool for organisations seeking to integrate AI technologies responsibly. It encourages a systemic perspective, one that moves beyond technical feasibility to consider broader organisational, social, and ethical dimensions.

By applying this framework, decision-makers and project leads can:

  • Identify context-specific challenges before adopting AI
  • Map key stakeholders and clarify roles and responsibilities
  • Recognise skill and competence gaps that must be addressed
  • Design AI initiatives that align with principles of transparency, accountability, and fairness

Figure 1. The framework summarising the workshop’s thematic discussions

In conclusion, the workshop underscored that a stakeholder-oriented, challenge-driven approach is key to enabling responsible AI adoption. By starting with specific domain needs and mapping corresponding skills and knowledge, organisations can more effectively navigate the complex landscape of AI integration.

Responsible AI Retreat at Lövånger 

On March 17–20, 2025, in Lövånger, the AI Policy Lab (AIPL), the Responsible AI group, and the Research Group for Socially Aware AI held a 3.5-day retreat with 15 participants from the units, joined by Francien Dechesne from Leiden University, the Netherlands.

Participants discussed responsible AI from multiple perspectives – technological, ethical, and social. Below we share the key messages from the retreat. We hope to inspire future multidisciplinary discussions, workshops and projects related to responsible AI research, literacy, and practical solutions for diverse stakeholders in Sweden and internationally.

Question Zero & Human Responsibility

When we hear about new AI ventures, we need to ask the question zero: “Is the adoption of an AI tool the best solution for our current problem?” Such assessments include the environmental costs of running the AI system and an understanding of which specific aspects of operations it improves. This means critically examining who is participating, what the provenance and management of the data are, what the bases for modelling and design choices are, and how results and impact will be evaluated. AI does not happen to us; we [humans] design, adopt, and use AI systems.

  • Responsible AI development should follow a compositional approach, where verified datasets and models with clear principles can be combined to create new systems. This framework emphasises the need to balance ethical considerations and accuracy while accounting for differences across sectors, industries, and global value systems. The approach prioritises sustainability, transparency, control, regulation, participation, inclusiveness, trust, and fairness, with a focus on measuring societal implications and ensuring equal representation and identification. 
  • True AI fairness goes beyond mathematical equality, considering historical disparities and contextual factors, and diverse definitions of fairness, as mathematical fairness alone can still create discriminatory outcomes in complex social settings. 
  • Participation, where people meaningfully contribute to (design) decisions, strengthens democracy and supports responsible design, development, and use of AI. However, participation alone does not guarantee better outcomes; thoughtful design is needed to prevent manipulation and address blind spots by incorporating diverse perspectives. 
  • Technological change is ecological, not additive, in that it is transformative and systemic rather than simply incremental, fundamentally reshaping existing environments and society. Drawing from Neil Postman’s (1995) framework, technological innovations fundamentally transform existing systems rather than simply adding new capabilities, creating trade-offs, winners and losers, and often leading to reframing how we perceive the world. 
  • Structural challenges. Several “traps” hinder responsible AI research: the framing trap (failure to model entire systems), portability trap (ignoring context sensitivity), formalism trap (oversimplifying social concepts), ripple effect trap (missing ecological impacts), and solutionism trap (overreliance on technological fixes). 
  • AI Art. The concept of “AI as a mirror” raises important questions about artistic expression, with significant differences between the inner experience of human artistic creation versus AI-assisted art generation, challenging traditional perspectives on IP rights, creativity (human only, genAI only, human-AI hybrid), intention, and artistic value. 

If the ideas above resonate with you, we encourage you to check out the AI Policy Lab LinkedIn page for cooperation opportunities.

Bridging the Gap Between AI Research and Policy

The future of Artificial Intelligence (AI) lies not just in its technical advancements but in its responsible governance, underpinned by human-centered principles and policies. As such, AI policy research is an urgently needed area of focus: not just AI research, not just policy research, but a deliberate intersection of the two. This realization was at the core of the recent AI Policy Summit, a collaborative platform bringing together researchers from around the world, co-organised by Mila and the AI Policy Lab, which I have the privilege to direct. This was not just an event but a pivotal step toward shaping the trajectory of AI policy and governance. As AI technologies increasingly permeate every aspect of society, their potential to drive progress must be balanced with safeguards to ensure they align with human-centered values. This balance cannot be achieved by technical or legislative approaches alone; it demands the collaborative efforts of researchers, policymakers, and civil society.

The AI Policy Summit provided a unique platform for representatives of independent research organizations, spanning academia and civil society across diverse national contexts, to engage in an open, informal environment that enabled a deep and spirited exchange of ideas. A panel discussion with policymakers from multiple countries added depth and diversity to the discussions. Their contributions underscored the varying challenges and opportunities faced across different governance frameworks. Policymakers from Sweden, Tanzania, Canada, the Netherlands, and Portugal shared insights into their regional experiences with AI regulation, highlighting both shared objectives—such as transparency and accountability—and the unique cultural and legislative nuances that influence AI governance.

I was also especially encouraged by Marietje Schaake’s keynote, which highlighted the critical role of researchers in engaging with policymakers through building lasting relationships, providing actionable insights like policy briefs, and actively contributing to both the creation and implementation of legislation, all while acknowledging the challenges both sides face.

During the two days, exchanges between the participants emphasized the critical need for localized approaches to AI policy that are informed by global best practices. The summit fostered an environment where academic and civil society researchers could present evidence-based findings while gaining a firsthand understanding of the practical realities policymakers face. This interaction not only enriched the dialogue but also set the foundation for future collaborations aimed at shaping inclusive, effective, and context-sensitive AI policies.

Why AI Policy Research Matters

The development and governance of Artificial Intelligence (AI) are complex, interconnected challenges that demand a dedicated focus on AI policy research, a field distinct from, yet integrative of, AI technology research and policy research. This emerging discipline addresses gaps that neither AI research nor policy alone can resolve, ensuring that governance frameworks are not only informed by cutting-edge science but also aligned with societal needs and values. While AI research focuses on advancing technology, and policy research on governance frameworks, neither can address the multifaceted impacts of AI in isolation:

  1. AI advancement without governance: Left unchecked, rapid AI innovation can deepen societal inequalities, exacerbate environmental damage, and consolidate power among a few, undermining public trust and equitable access.
  2. Policy without AI research: Policies uninformed by empirical evidence or understanding of AI’s dynamic landscape risk becoming outdated, excessively restrictive, or misaligned with technological realities, stifling innovation and public benefits.

AI policy research as foundation for Responsible AI

Responsible AI begins well before algorithms are written or systems deployed: it starts with the fundamental questions: What problems are we solving? For whom? And with what consequences? What are the most suitable solutions? Is it AI? Addressing these questions requires a nuanced interplay between policy and research. The summit highlighted the growing need for this alignment to ensure that AI technologies foster societal progress, uphold human rights, and contribute to global sustainability goals.

At its heart, AI policy research navigates complex trade-offs. Fostering innovation while mitigating societal inequities requires a framework that ensures AI benefits are equitably distributed, particularly to those most vulnerable to its disruptions. AI policy research creates a vital bridge between these domains by focusing on actionable, evidence-based governance. It emphasizes transparency, accountability, and sustainability while ensuring equitable outcomes. By addressing issues such as inclusivity, environmental trade-offs, and regulatory foresight, AI policy research supports:

  • Proactive governance: Governing AI advancements demands foresight-driven policies that anticipate potential risks and societal impacts before they arise. By proactively identifying challenges—such as biases, security vulnerabilities, or unintended social consequences—governance frameworks can mitigate harm and establish safeguards that evolve alongside technological innovation.
  • Cross-sector collaboration: Effective AI policy requires a united effort from academia, industry, and government. Collaborative frameworks enable the sharing of expertise, aligning research insights with regulatory needs and industrial priorities. This synergy fosters the creation of policies that are both practical and evidence-based, ensuring comprehensive oversight and adaptability.
  • Responsible innovation: AI should be deployed only when its benefits clearly outweigh the associated costs and risks and align with ethical standards. Responsible innovation emphasizes ethical design, sustainability, and equitable access, ensuring that AI systems contribute to societal well-being without exacerbating inequalities or environmental harm.

The AI Policy Summit’s Contribution

The recent AI Policy Summit brought together global policymakers, academic researchers, and civil society actors to highlight this integrative approach. Discussions focused on immediate and long-term goals, such as fostering global accountability standards, developing foresight mechanisms, and crafting practical tools for inclusive governance. By emphasizing a shared roadmap and cross-sectoral expertise, the summit illuminated how AI policy research can drive actionable solutions for the responsible development of AI technologies. This collective effort underscores the urgency of AI policy research as a means to guide innovation and governance toward equitable, sustainable outcomes. It is a field poised not only to mitigate the risks of AI but to maximize its potential as a force for societal good. Building on insights from the summit, several ideas were proposed to solidify the role of AI policy research, including:

  • Establish Visiting AI Policy Fellowships: Programs hosted at different research institutes would connect researchers with policymakers, fostering mutual understanding and collaboration.
  • Launch an AI Policy Research Network: A global platform to share best practices, insights, and resources for evidence-based policymaking.
  • Develop AI Policy Briefs: Translating research findings into actionable insights tailored for policymakers is essential for informed decision-making.
  • Focus on Education and Capacity Building: Initiatives like student exchanges and Erasmus programs can cultivate a new generation of leaders at the intersection of AI and governance.

A Shared Responsibility

AI policy research is not just a necessity, it is an opportunity to ensure that AI serves humanity rather than shaping societies in ways that exacerbate inequities or environmental harm. By combining the rigor of scientific inquiry with the pragmatism of governance, this field provides a pathway to align AI innovation with ethical, human-centered values.

The AI Policy Summit marked the beginning of a critical journey, one that bridges the gap between technological innovation and governance to ensure AI serves humanity responsibly. This initiative is more than a conference or a network; it is a call to action for researchers, policymakers, and civil society to collaborate in shaping an equitable and sustainable AI future.

Looking ahead, the true measure of its success will be our ability to foster lasting impact. This includes creating actionable frameworks, building trust through transparency and accountability, and designing policy instruments that ensure the benefits of AI are accessible to all. As AI continues to evolve, our collective efforts must remain grounded in shared principles of fairness, sustainability, and human-centered development.

The challenges are immense, but so too is our collective potential. By uniting diverse perspectives and expertise, we can navigate the complexities of AI with integrity and purpose. Together, we have the opportunity not only to mitigate risks but to redefine AI as a tool for societal good—one that reflects the values and aspirations of all. The journey is just beginning, but the urgency is clear. I welcome you all to join us to #InformAIpolicy, a joint commitment to building a future where AI contributes to societal progress, respects the planet, and ensures equity for all.

On the EU’s plans for a scientific panel of independent experts

by: Virginia Dignum and Maja Fjaestad

The Artificial Intelligence (AI) Act envisages the establishment of a scientific panel of independent experts to advise on, and assist the AI Office and national market surveillance authorities with, implementing and enforcing the AI Act. The Commission is currently seeking public input on the implementing regulation establishing this scientific panel. Here is our response to this request.

The AI Policy Lab welcomes the opportunity to provide feedback on the European Commission’s draft regulation establishing a scientific panel of independent experts in artificial intelligence. This initiative is crucial in ensuring robust, transparent, and impartial oversight of AI systems, aligning with EU objectives to foster innovation while safeguarding fundamental rights. We commend the Commission’s focus on multidisciplinary expertise, diversity, and transparency in panel operations. However, to enhance effectiveness, we offer recommendations to streamline procedural workflows and strengthen data security protocols, ensuring the panel’s structure fully supports its mission in this rapidly evolving field.

Strong Points

We commend key features of the proposal that strengthen the panel’s credibility, flexibility, and proactive oversight:

  • Transparency and Conflict of Interest: Requirements for experts to make declarations of interest and to act in the public interest enhance the panel’s credibility and independence.
  • Flexible Structure for Task Allocation: The document enables adaptability by allowing specific members to serve as rapporteurs for individual tasks, ensuring expertise aligns with task requirements.
  • Qualified Alerts for AI Risks: The ability of the scientific panel to issue qualified alerts to the AI Office is an innovative mechanism for highlighting potential AI risks.

Recommendations

Several areas for improvement could enhance efficiency, security, and impartiality in the panel’s operations. In particular, we suggest addressing the following issues:

  • Complex Bureaucracy: The involvement of multiple administrative bodies, such as the AI Office, Joint Research Centre, and the Commission, could introduce delays and administrative bottlenecks in the panel’s operations. Streamlined procedural workflows and clarified responsibilities for each entity could enhance the panel’s responsiveness and effectiveness in providing timely guidance.
  • Strengthening Data Security and Confidentiality Measures: Although confidentiality is mentioned, the document could benefit from more detailed procedures on data handling to further mitigate risks related to data security. Adding explicit guidelines for the secure storage, sharing, and destruction of sensitive information would strengthen the protocol for data handling, especially in cases involving sensitive AI data.
  • Enhancing Panel Independence through Conflict of Interest Protocols: While the requirement for declarations of interest is a positive step, more rigorous conflict of interest safeguards – such as independent audits or periodic reviews – could reinforce the panel’s impartiality, especially given the rapidly evolving nature of AI and potential industry influences.
  • Streamline Procedural Steps: Simplifying interactions between the AI Office, Joint Research Centre, and the Commission could enhance efficiency without compromising oversight.
  • Equitable selection criteria: Equality is crucial to guarantee diverse input, to have democratic legitimacy, and to avoid bias. Article 3, paragraph 5, on the selection criteria and composition of the scientific panel, would therefore benefit from a clearer formulation: instead of “The Commission shall aim to ensure gender balance”, a better formulation would be “The Commission shall ensure gender balance”.
  • Multidisciplinary relevance: The importance of humanities and social sciences expertise should be emphasized. For instance, removing “scientific or technical expertise” from Article 3, paragraph 3, would broaden the focus, avoiding an unnecessary bias toward the natural sciences and valuing multidisciplinary insights on rights, equality, and ethics in AI.

Proposals for consideration

Additional measures could increase the panel’s adaptability, responsiveness, and independence in handling evolving AI challenges, as follows:

  • To maintain the panel’s relevance across rapidly evolving AI fields, we propose supplementing the core panel with a flexible pool of specialized experts. These “on-call” experts would offer guidance on niche areas like ethical AI, quantum AI, or specific sectoral applications, allowing the panel to draw on targeted expertise without permanently expanding its membership.
  • Recognizing the potential risks posed by high-stakes AI applications, we recommend a dedicated Rapid Response Protocol within the panel. This would enable the panel to perform expedited assessments of AI models flagged as potentially harmful, particularly those impacting public safety or fundamental rights, ensuring that urgent cases receive timely and focused attention.
  • To safeguard the panel’s impartiality, we suggest enhanced conflict of interest protocols, including independent audits or periodic reviews of expert affiliations and potential biases. This would reinforce trust in the panel’s independence, especially important given AI’s sensitive and influential role across industries.
  • To promote transparency and public trust, the panel could introduce an AI Accountability Dashboard that provides the public with non-sensitive summaries of decisions, recommendations, and qualified alerts issued by the panel. This dashboard could track metrics like panel activity levels, time-to-decision for urgent alerts, and diversity statistics, thus allowing stakeholders to observe the panel’s impact on AI governance.