The release of the “First Draft General-Purpose AI Code of Practice” signifies the EU’s commitment to establishing comprehensive regulatory guidance for general-purpose AI models.
This draft was developed through a collaborative effort, incorporating input from various sectors including industry, academia, and civil society. The initiative was spearheaded by four specialized Working Groups, each focusing on distinct aspects of AI governance and risk mitigation:
- Working Group 1: Transparency and copyright-related rules
- Working Group 2: Risk identification and assessment for systemic risk
- Working Group 3: Technical risk mitigation for systemic risk
- Working Group 4: Governance risk mitigation for systemic risk
The draft aligns with existing EU law, including the Charter of Fundamental Rights of the European Union, and takes account of international approaches. It aims to be proportionate to the risks involved and future-proof, anticipating rapid technological change.
Key Objectives Outlined in the Draft:
- Clarifying compliance methods for providers of general-purpose AI models
- Facilitating understanding across the AI value chain to ensure seamless integration of AI models into downstream products
- Ensuring compliance with Union law on copyright, particularly regarding the use of copyrighted material for model training
- Continuously assessing and mitigating systemic risks associated with AI models
A core feature of the draft is its taxonomy of systemic risks, detailing the nature, types, and sources of such risks. It identifies various threats, including cyber offenses, biological risks, loss of control over autonomous AI models, and large-scale disinformation. Recognizing the evolving nature of AI technology, the draft acknowledges that this taxonomy will need periodic updates to stay relevant.
As AI models with systemic risks become more prevalent, the draft emphasizes the necessity for robust safety and security frameworks (SSFs). It proposes a hierarchy of measures, sub-measures, and key performance indicators (KPIs) to ensure proper risk identification, analysis, and mitigation throughout a model’s lifecycle.
The draft recommends that providers establish processes to identify and report serious incidents associated with their AI models, including detailed assessments and corrective measures where needed. It also encourages collaboration with independent experts for risk assessment, especially for models posing significant systemic risks.
Proactive Stance on AI Regulatory Guidance
The EU AI Act, which entered into force on 1 August 2024, requires the final version of this Code to be ready by 1 May 2025. This initiative highlights the EU’s proactive approach to AI regulation, emphasizing the importance of AI safety, transparency, and accountability.
As the draft evolves, the working groups invite stakeholders to actively participate in refining the document. Their collaborative input will help shape a regulatory framework that safeguards innovation while protecting society from the potential pitfalls of AI technology.
While still in draft form, the EU’s Code of Practice for general-purpose AI models could set a global benchmark for responsible AI development and deployment. By addressing key issues such as transparency, risk management, and copyright compliance, the Code aims to create a regulatory environment that fosters innovation, upholds fundamental rights, and ensures a high level of consumer protection.
This draft is open for written feedback until 28 November 2024.
The draft can be found here: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements_en