AIPEX Publications

ARTICLES


  • A reappraisal of the AI Act in light of the qualitative requirements of the Law

    Inês Neves (Faculty of Law, University of Porto / CIJ - Centre for Interdisciplinary Research on Justice)

    Published on 29 June 2024

    As the Artificial Intelligence Act (‘AI Act’ or ‘Regulation’) approaches its entry into force, it is tempting to conclude that the “job is (well) done”. However, a closer examination of the final text (signed on 13 June 2024) reveals a more nuanced reality. In addition to applying gradually over time, with a phased timeline extending until 2026 (Article 113), the Regulation forms an integral part of the “New Legislative Framework” (‘NLF’) (Recitals 9, 46, 64, 83, 84). As such, it operates similarly to a ‘framework’ law, setting out a minimum set of essential requirements and high-level obligations applicable to a defined group of AI systems (those deemed high risk and specific risk).
    As a result, before the Regulation can be applied administratively and judicially, a process of normative creation must first take place. In short, the actual law-making process – whereby the essential requirements of the AI Act will be specified and concretised through standards and other instruments, translating its broad and open-textured wording into technical or quality specifications – is yet to commence (along with the accompanying governance structure and apparatus).

    Nonetheless, it would be erroneous to assume that no progress has been made thus far or that nothing existed before the AI Act. There are ethical guidelines, standards that have already been published or are currently under development, and of course, there is all the European Union (‘EU’) and national legislation that is not affected by the AI Act (in terms of consumer rights, product safety, labour law, privacy and data protection, competition law, digital services, financial services, among others). Indeed, the existing legislation even raises doubts about the usefulness of the AI Act, and the EU legislator is not agnostic about the expectations and challenges of misalignment and coherence (see, among others, Recitals 46, 64, 158, and Articles 8(2), 72(4)).
    While it is accurate to say that the AI Act is not intended to reinvent the wheel, the existing framework is deemed insufficient to provide a comprehensive set of essential requirements specifically applicable to AI systems (high risk and specific risk). With this in mind, the AI Act follows the logic of product safety legislation, appearing as a hybrid between the need to promote trade and reduce barriers to innovation and the need to guarantee the protection of fundamental rights in the face of the specific risks of AI.

    The AI Act thus exhibits a shared genetic identity with other acts of the NLF. However, it can be neither reduced to nor equated with product safety legislation. On the one hand, this is due to the evolution and dynamism of AI systems, which make them very different from the products typically associated with this type of legislation (toys, radio equipment, medical devices, machinery, among others). On the other hand, in addition to concerns for safety and health, the Regulation expressly includes fundamental rights among its objectives.
    While the connection with fundamental rights does not extend to the point of turning the AI Act into an instrument that enshrines or guarantees new or specific fundamental rights per se, this link requires us to ask whether greater demands should be placed on the applicable legal frameworks.

    As is well known, the principles of legal certainty and legality demand that the law prescribing duties and obligations fulfil specific requirements of accessibility, foreseeability and precision. These requirements ensure that those bound by legal frameworks understand what is expected of them in order to comply with such demands. Of course, these qualitative or substantive requirements do not go so far as to demand absolute certainty, which would be incompatible with the recourse to indeterminate concepts and vague terms (and with the pace of technology). Furthermore, they do not negate the potential need for legal counsel or case law to fully understand the obligations prescribed by the legal system.
    However, if this is the case, there will be limits to the openness of the law, mainly when it is associated with burdens and requirements that restrict fundamental rights. This is the case of the freedom to conduct a business (Article 16 of the Charter of Fundamental Rights of the European Union), which is limited by the requirements and obligations applicable to the systems covered by the AI Act. Moreover, the Regulation provides for sanctions that may prove manifestly harmful to public interests such as innovation and the very realisation of fundamental rights (which, at least in the current era, may depend on AI solutions).

    A review of the AI Act’s recitals reveals a clear awareness of the necessity for legal certainty (see, for example, Recitals 3, 12, 83, 84, 97, 139 and 177) and the importance of defining its terms and requirements in a way that is consistent with the broader regulatory framework. However, in contrast to the limited references to this general principle of EU law, searching for the qualifier “appropriate” yields 230 results, which clearly demonstrates the broad and generic way in which the requirements for operators and systems are laid down and designed in the AI Act.
    The necessity to reconsider the limitations of openness and the high-level approach adopted by the AI Act is highlighted by the fact that it challenges the traditional remit of the requirements of legal certainty and the quality of the law, essentially on two levels.
    Firstly, by referring to guidelines, codes of conduct, standards and common specifications, the question arises as to whether this connection between the basic (framework) law and its implementation, not just by the European legislator but by other bodies, including private ones, will comply with the requirements of legitimacy and substance imposed on law in the material sense.
    Secondly, insofar as the AI Act does not appear in a regulatory vacuum but instead leaves relevant European legislation (equally imposing obligations) untouched, it is essential to question whether the Regulation’s entry into force will have an impact on the predictability of what is expected of operators (in the face of perhaps contradictory expectations arising from different legal acts). It is believed that potential misalignments may be solved through a revision of the legislation in force, to identify conflicts and, if necessary, ensure that the combination of sectoral (old) and transversal (new) obligations and requirements does not result in greater entropy than advantage.

    In this piece, we focus on the first area of concern.
    Indeed, the Regulation leaves the implementation of a significant portion of requirements to the European Commission through a variety of means, including guidelines (Article 96), delegated acts (Article 97), and implementing acts (Articles 41(1) and 50(7)), as well as codes of practice, in relation to which the AI Office acquires prominence (Articles 50(7) and 56).
    The Commission is but one of many relevant players, however. Among other actors (including at the national level), the European Artificial Intelligence Board (‘AI Board’) will have the task of issuing recommendations and written opinions on issues relating to the implementation of the AI Act, aimed at ensuring its consistent and effective application (Article 66(e)). Furthermore, the role of the Advisory Forum (Article 67) and the Scientific Panel of Independent Experts (Article 68) must not be neglected.

    In this regulatory landscape, harmonised standards must be recognised as primus inter pares (Article 40), as they will provide detailed technical specifications on how to comply with and meet the public interest objectives and the high-level requirements in the AI Act. The shortcomings of this co-regulation or delegation become evident when one considers that the acts in question are not those of EU institutions, bodies, offices, or agencies, and thus are not subject to judicial scrutiny (Article 263 TFEU). Additionally, fundamental rights are not a subject that can be technically specified. Finally, the fact that the institutions responsible for this law-making are private entities and do not offer adequate guarantees of inclusion, representativeness and transparency creates the risk of regulatory capture and reinforces competitive foreclosure (already a consequence of the exclusion of AI systems already placed on the market or put into service under Article 111(2)).
    In addition to standardisation, the Regulation also provides for common specifications (Article 41) in the event of i) non-acceptance of the standardisation request by any of the European standardisation organisations; ii) insufficient coverage of fundamental rights concerns by the harmonised standard; or iii) failure of the standard to meet the requirements of the request in a timely manner.
    In both cases, voluntary compliance with harmonised standards or common specifications is associated with a presumption of conformity, particularly relevant for high-risk AI systems and general-purpose AI models. Although this presumption contributes to legal certainty, it merely guarantees a reversal of the burden of proof. The presumption is rebuttable, and the degree of legal certainty depends on the material content of the standards and specifications that will be adopted.
    The preceding analysis leads to the conclusion that, in terms of meeting the requirements of accessibility and foreseeability, the AI Act is not an exemplary piece of legislation. The model adopted here is collaborative, with the intervention of various other players, and one in which law and technology are inextricably linked. This does not imply that it is inherently inadmissible.
    In fact, this form of co-regulation may be the only viable means of ensuring that the law aligns with reality. However, if this is indeed the case, the requirements for legitimising the processes for drawing up the intervening acts, and the actors involved, should ensure high(er) levels of inclusion, participation, representativeness, and expertise. As regards the adoption of standards, it is crucial to bear in mind that the representativeness of smaller players and civil society may be impeded by factual inequalities (in resources and expertise), even when abstract equality is provided for.

    The interconnection between the AI Act and fundamental rights renders these demands particularly urgent. Indeed, in this regard, the Regulation can be seen as both a virtue and a vice. On the one hand, its concern with fundamental rights is a distinctive feature, setting it apart from other acts within the NLF. On the other hand, this approach presents a tremendous challenge, as it is not aligned with the complexity and non-technical nature of fundamental rights, which are best understood as positions of value that neither the vagueness of the AI Act nor the loopholes of standardisation fully capture.


    In our view, there is hope in the subsidiary role of common specifications (Article 41(1)(a)(iii)). These are tailored to address the shortcomings of harmonised standards with regard to fundamental rights concerns and are to be drawn up by the European Commission (an EU institution) in consultation with the advisory forum (Article 67). According to the provisions of the AI Act, this forum should represent a balanced selection of stakeholders, including industry, start-ups, SMEs, civil society and academia. Furthermore, it should be balanced as regards commercial and non-commercial interests and, within the category of commercial interests, as regards SMEs and other businesses. In addition to the European Union Agency for Cybersecurity (‘ENISA’), the European Committee for Standardisation (‘CEN’), the European Committee for Electrotechnical Standardisation (‘CENELEC’), and the European Telecommunications Standards Institute (‘ETSI’), the Fundamental Rights Agency is also a permanent member of the advisory forum.

    In our view, common specifications provide an additional and important safeguard. Firstly, they ensure that gaps in the protection of fundamental rights are filled. Secondly, their adoption addresses the lack of representation and inclusiveness in the standardisation process, including with respect to the participation of business and civil society. Thirdly, implementing acts (like delegated acts) are subject to judicial review by the Court of Justice of the EU, so that judicial control is not lacking.

    As we can see, the Artificial Intelligence Act is not without flaws. It introduces particular challenges to the basic principles of the rule of law and the substantive requirements applicable to legal acts. This is not only because of the regulatory model it adopts but also because of the (lack of) legitimacy and democratic deficit of the players and procedures participating in the normative creation entailed by the AI Act.

    It is acknowledged that the law and legislative procedures are not impervious to technological advancement and that they must, therefore, be capable of adapting frameworks and embracing new procedures and instruments (of a technical nature if needed). The key is to ensure that, in this process of openness, the law is not conflated with technology, but that technology is shaped to align with the requirements and fundamental principles of the (rule of) law.

    How to cite this article

    Neves, I. (2024). A reappraisal of the AI Act in light of the qualitative requirements of the law. AI Policy Exchange Forum (AIPEX). https://doi.org/10.63439/TJZR2589

  • Defining Responsible AI

    Roberta Calegari (Bologna University), Virginia Dignum (Umeå University)

    Published on 24 June 2024

    What is Responsible AI?

    Currently, for most practical uses, Artificial Intelligence (AI) is first and foremost a technology that can automate tasks and decision-making processes. However, considering its societal impact and its need for human contribution, AI is much more than a technique: it is best understood as a socio-technical ecosystem, recognising the interaction between people and technology, and how complex infrastructures affect and are affected by society and by human behaviour. As such, AI involves the structures of power, participation, and access to technology that determine who can influence which decisions or actions are automated, which data, knowledge, and resources are used for learning, and how interactions between decision-makers and those impacted are defined and maintained.


    The main focus of Responsible AI is ensuring that AI systems are developed, deployed, and used in a manner that is ethically sound, respects human rights, and considers societal implications. This encompasses not just ethical and legal considerations, but also the socio-technical aspects that ensure accountability for the development and use of the AI system. Responsible AI practices often involve processes and guidelines that organisations follow during the design, development, and deployment stages of AI systems. These could include impact assessments, reviews, and monitoring of AI systems in real-world applications.


    Trustworthy AI emphasises the reliability, safety, and robustness of AI systems, as well as their ethical implications. The goal is to ensure that users and stakeholders can have confidence in AI systems’ decisions and behaviours. This might involve ensuring an AI system functions correctly under various conditions, is robust against adversarial attacks, and can explain its decisions in understandable terms. Trustworthiness often requires technical solutions, such as robustness testing, adversarial training, and explainability methods, in addition to governance and ethical guidelines.


    Generally, Responsible AI practices encompass Trustworthy AI requirements. A responsible, ethical, and trustworthy approach to AI will ensure transparency about how adaptation is done, responsibility for the level of automation at which the system is able to reason, and accountability for the results and the principles that guide its interactions with others, most importantly with people. In addition, and above all, a responsible approach to AI makes clear that AI systems are artefacts manufactured by people for some purpose, and that those who make them have the power to decide on the use of AI.


    In this sense, AI ethics is not, as some may claim, a way to assign responsibility to machines for their actions and decisions, thereby absolving people and organisations of their own responsibility. On the contrary, ethical AI imposes greater responsibility and accountability on the people and organisations involved: for the decisions and actions of the AI applications, and for their own decision to use AI in a given context.


    Guidelines, principles and strategies to ensure trust and responsibility in AI refer to the socio-technical ecosystem in which AI is developed and used. It is not the AI artefact or application that needs to be ethical, trustworthy, or responsible. Rather, it is the people, organisations and institutions involved that can and should take responsibility and act in consideration of an ethical framework such that the overall system can be trusted by users and society.

    In a nutshell, we can recap the main definitions as follows:
    “Responsible AI” refers to the concept of developing and deploying AI systems in a way that aligns with ethical principles, societal values, and legal requirements. Overall, responsible AI seeks to foster the development and adoption of AI technologies in a way that promotes ethical values, respects human rights, and contributes to the well-being of individuals and communities.


    “Trustworthy AI,” on the other hand, refers to the concept of developing and deploying AI systems that are reliable, ethical, lawful, and transparent, thereby earning the trust of users, stakeholders, and society at large. By embodying these principles and characteristics, trustworthy AI inspires confidence and trust among users, stakeholders, and society, facilitating the responsible adoption and utilization of AI technologies for the benefit of society.


    So, in a way, trustworthy AI is the enabler of responsible AI. While the former focuses more on the technical aspects of building systems that are reliable, transparent, accountable, and ethical, thereby earning the trust of users, stakeholders, and society, the latter emphasises the ethical and moral dimensions of AI development and deployment, aiming to promote ethical behaviour, respect for human rights, and the well-being of individuals and communities.

    Keywords:
    Socio-technical ecosystem, Ethical principles, Trustworthy AI

    How to cite this article:

    Calegari, R., & Dignum, V. (2024). Defining Responsible AI. AI Policy Exchange Forum (AIPEX). https://doi.org/10.63439/KWEU5144
