
European AI Security and Reliability Framework

Requirement for Unbiased Safety Reviews

Enhancing trust and security in the realm of artificial intelligence within Europe


The European Union is taking a significant step towards ensuring the safety and trustworthiness of advanced AI products by adopting a system of mandatory independent assessments. This approach, inspired by the European Quality Infrastructure (EQI), aims to align with the EU AI Act and other international commitments.

Academic research has demonstrated that self-assessments deliver lower safety and security standards compared to accredited third-party or governmental audits. In response, the EU has mandated third-party conformity assessment services, encompassing testing, inspection, and certification (TIC) activities.

The certification of quality management systems examines the production processes and management structures in place. Product-testing evaluates the product itself through independent examinations, while periodical inspections ensure safety and proper functioning after commercial distribution.

The EU AI Act, the White House Executive Order, the G7 Hiroshima Process, and the Bletchley Declaration all include commitments to some form of external scrutiny and testing for the most advanced AI products. However, these commitments are not yet mandatory or fully implemented.

European companies have a strong track record in sectors requiring intensive testing or inspection, indicating a competitive advantage in adapting these services to the AI sector. A comparable system of laws, standards, certifications, inspections, and audits ensures safety and security in most critical sectors; AI remains the exception.

Regulation could mandate independent, third-party testing for AI models before market release, including adversarial testing and periodic assessments throughout the lifecycle. This approach aligns closely with the EU AI Act and its supporting instruments, including voluntary codes of practice and independent AI assessments.

The methodologies of the European Quality Infrastructure (EQI) can be applied to foundation models (general-purpose AI) by embedding risk-based, standards-driven, and assessment-focused practices from EQI into the AI development and deployment lifecycle. This ensures safety and public trust.

Specifically, EQI methodologies emphasize robust system and process standards, conformity assessment, certification, and metrology—all designed to ensure quality, safety, and reliability in products and services. Applied to foundation AI models, these translate into:

  1. Risk Assessment and Management: Foundation model developers and downstream vendors must perform deep, cross-functional risk assessments covering the entire AI lifecycle.
  2. Conformity to Harmonized Standards: Compliance with harmonized technical standards, including those promoted by ETSI, is expected.
  3. Certification and Independent Audits: EQI frameworks promote independent third-party certification and audits.
  4. Traceability and Documentation: EQI emphasizes metrology and documentation for traceability.
  5. Lifecycle Governance: EQI methodologies promote continuous oversight throughout the life of the product.
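
As a purely illustrative sketch, the five practices above could be tracked as a machine-readable conformity checklist. All names here (`ConformityItem`, `LIFECYCLE_CHECKLIST`, `conformity_gaps`) are hypothetical and do not correspond to any official EQI or EU AI Act schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the five EQI-inspired practices as a conformity
# checklist. An item counts as satisfied once at least one piece of
# evidence (audit report, test log, standards mapping) is recorded.

@dataclass
class ConformityItem:
    practice: str
    evidence: list = field(default_factory=list)

    @property
    def satisfied(self) -> bool:
        return len(self.evidence) > 0

LIFECYCLE_CHECKLIST = [
    ConformityItem("Risk assessment and management"),
    ConformityItem("Conformity to harmonized standards (e.g. ETSI)"),
    ConformityItem("Independent third-party certification and audits"),
    ConformityItem("Traceability and documentation (metrology)"),
    ConformityItem("Continuous lifecycle governance"),
]

def conformity_gaps(checklist):
    """Return the practices still lacking recorded evidence."""
    return [item.practice for item in checklist if not item.satisfied]

# Example: evidence recorded for the first two practices only.
LIFECYCLE_CHECKLIST[0].evidence.append("cross-functional risk register v1")
LIFECYCLE_CHECKLIST[1].evidence.append("ETSI standards mapping report")
print(conformity_gaps(LIFECYCLE_CHECKLIST))
```

Such a checklist is deliberately simplistic; in practice, "evidence" would be assessed by an accredited conformity assessment body rather than self-declared, which is the article's central point.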

In summary, by applying EQI’s principles of standardized risk assessment, conformity evaluation, certification, traceability, and lifecycle governance, foundation model developers and deployers can meet the EU AI Act’s requirements and foster public trust and safety in general-purpose AI within the EU’s regulatory framework.

The independence and competence of conformity assessment bodies is ensured by accreditation within the European Quality Infrastructure ecosystem. Regulatory sandboxes, testing and experimentation facilities, and public funding can help rapidly roll out testing infrastructure in the EU.

AI models and their downstream applications lack the safety guarantees expected in other critical sectors. An ecosystem of independent experts to audit and inspect AI models is necessary, but requires significant resources and time to establish. Independent conformity assessments are essential for identifying potential dangers in AI models, such as data poisoning or violations of safety guidelines.

The EU's initiative on access to compute could be conditioned on contributing to the science of measurement and benchmarking for AI models, with safety, transparency, and information sharing as prerequisites for access. AI developers above a certain size threshold should contribute funding to ensure a fair allocation of the costs of establishing an assessment ecosystem. External scrutiny is essential to ensure that incentives are aligned with finding vulnerabilities and representing diverse perspectives.

Developing standardized measures for AI testing is a crucial but expensive and time-consuming process. No jurisdiction currently mandates independent, third-party testing to ensure that advanced AI products comply with safety rules. However, the potential impact of rapid AI development and deployment necessitates that safety and efficacy testing be done by independent parties, not just the companies releasing the products. An independent ecosystem of experts is crucial for ensuring the safety and trustworthiness of advanced AI products within the EU.

  1. To ensure the safety and trustworthiness of advanced AI products in the health and wellness sector, independent third-party testing could be made a mandatory requirement, following the European Quality Infrastructure's emphasis on risk assessment, conformity evaluation, certification, traceability, and lifecycle governance.
  2. To prevent medical conditions from being exacerbated by inappropriate AI behavior, technology-aided healthcare systems should be subject to the same stringent safety measures as other critical sectors, such as mandatory independent, third-party assessments inspired by the European Quality Infrastructure.
