How Europe is Shaping AI for Human Rights

A Comparative Analysis of the EU AI Act and the Council of Europe Framework Convention

The “Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law” (CETS No. 225), adopted in May 2024 and opened for signature today, 5 September 2024, creates a legal framework with a strong focus on safeguarding human rights, democracy, and the rule of law in the development and use of AI. The convention emphasizes key principles such as transparency, accountability, risk management, and special protection for vulnerable groups. In many ways it aligns with the European Union AI Act, which is more detailed in its categorization of AI systems by risk level and introduces specific regulatory mechanisms for high-risk AI applications.

Based on our analysis, the EU AI Act excels in its market-centric approach, providing clear regulatory guidelines that ensure a safe, innovation-friendly environment for businesses while protecting consumer rights. Its risk-based framework is well defined, allowing differentiated oversight according to the risk posed by AI applications, particularly in high-risk sectors such as healthcare and transportation. This precision fosters compliance and encourages AI development within clear ethical boundaries. The Council of Europe Framework Convention, by contrast, is broader in scope; its primary strength is a robust focus on human rights, democracy, and the rule of law. It emphasizes transparency, accountability, and inclusivity across all sectors, going beyond economic concerns to ensure that AI systems respect fundamental rights. Its commitment to protecting vulnerable groups and fostering international cooperation in global AI governance is another key strength, helping align AI development with global human rights standards. In short, we describe the similarities and differences between the two approaches as follows:

Similarities

  1. Human Rights Focus: Both the EU AI Act and the Council of Europe Framework Convention emphasize the importance of safeguarding human rights in the development, deployment, and use of AI systems. This includes ensuring that AI systems do not infringe upon fundamental rights such as privacy, freedom of expression, and non-discrimination.
  2. Risk-Based Approach: Both frameworks adopt a risk-based approach to AI regulation. They require measures to be scaled according to the potential risks that AI systems pose to human rights, democracy, and the rule of law. This involves stricter oversight and requirements for high-risk AI systems.
  3. Transparency and Accountability: Transparency is a key principle in both documents, mandating that AI systems be designed and operated in a way that is understandable and explainable. They also emphasize accountability, requiring entities deploying AI systems to take responsibility for their impacts.
  4. Non-Discrimination: Both the EU AI Act and the Convention address the need to prevent and mitigate discrimination that might arise from the use of AI systems, particularly against vulnerable groups, including women, minorities, and people with disabilities.
  5. International Cooperation: Both frameworks recognize the importance of international cooperation in AI governance, aiming to create a harmonized approach across jurisdictions to address the global nature of AI technology.

Differences

  1. Legal Scope and Binding Nature: The EU AI Act is a regulatory framework specific to the European Union and its member states, establishing binding legal obligations for entities operating within the EU. In contrast, the Council of Europe Convention is a treaty that countries can choose to ratify, and it applies to a broader range of countries, not limited to the EU. As a treaty, the Council of Europe Convention provides a broad framework, leaving it to member states to determine how to implement its provisions into their national laws. This gives countries flexibility in adapting the convention to their legal systems while still adhering to its overarching principles.
  2. Focus on Democracy and Rule of Law: The Council of Europe Convention places a stronger emphasis on protecting democratic processes and the rule of law. While the EU AI Act also addresses these issues, it is more focused on market regulation and the safe integration of AI into the internal market.
  3. Implementation Mechanisms: The EU AI Act includes specific enforcement mechanisms, such as fines for non-compliance, and assigns implementation responsibilities to national authorities. The Council of Europe Convention, on the other hand, establishes a Conference of the Parties to oversee implementation and foster cooperation among signatories. Both instruments carve out national security and defense: the Council of Europe convention includes explicit exemptions for these areas, and the EU AI Act likewise excludes them from its scope.
  4. Definitions and Scope: The scope of what constitutes an AI system and the range of activities covered differ slightly. For instance, the Council of Europe Convention includes a broad definition of AI systems and explicitly covers their entire lifecycle, from development to decommissioning. The EU AI Act also refers to the AI lifecycle but is more focused on categorizing AI systems by risk levels.
  5. Public Consultation and Participation: The Convention explicitly requires public consultation and multistakeholder involvement in discussions about AI governance, which receives less emphasis in the EU AI Act, where the focus is more on regulatory compliance by businesses and public sector entities. Similarly, the Council of Europe convention places notably more emphasis on promoting digital literacy and skills across all populations, which is less prominent in the EU AI Act. Moreover, the Council of Europe convention explicitly calls for measures addressing the rights of specific vulnerable groups, such as children and persons with disabilities, which are stated less explicitly in the EU AI Act.
  6. Remedies and Oversight: The Council of Europe convention explicitly calls for accessible remedies for human rights violations caused by AI systems, detailed in Chapter IV. While the EU AI Act also emphasizes accountability, its approach to remedies relies on different implementation mechanisms.

Both the EU AI Act and the Council of Europe Framework Convention provide strong foundations for regulating AI, but they leave certain gaps. One major shortcoming is their lack of specificity on how to adapt to rapid technological advancements. AI evolves quickly, and both frameworks focus heavily on supporting current innovation, which, while beneficial in the short term, may undermine public trust and hinder broader adoption if societal concerns and risks are not adequately addressed. Additionally, while both emphasize international cooperation, neither offers a clear path for integrating their approaches into a broader, global AI governance system; this lack of alignment could result in fragmented regulations across countries, making it harder to establish consistent ethical standards worldwide. Another critical omission is the ethical use of AI in military and national security contexts: both frameworks largely sidestep this issue, leaving a significant gap in ensuring that AI applications in these areas respect human rights and ethical principles. Lastly, while both stress accountability and oversight, implementing clear and practical enforcement mechanisms remains challenging, particularly for cross-border AI applications and for private actors outside direct government control. Addressing these issues would enhance the comprehensiveness and effectiveness of both frameworks in governing AI responsibly.

AI Policy Lab is a multidisciplinary research hub at Umeå University.
