PHASED ENFORCEMENT – Bans from Feb. 2025

EU AI Act
Regulation (EU) 2024/1689

The EU AI Act is the world's first comprehensive AI law. It classifies AI systems by risk and attaches binding requirements to each class – from outright bans to extensive documentation obligations for high-risk AI.

Action needed now

Prohibited AI practices have been illegal since February 2025. GPAI model rules apply from August 2025. High-risk AI must be compliant by August 2026.

  • Regulation: Regulation (EU) 2024/1689
  • Bans in force since: Feb. 2025
  • Max. fine: EUR 35M
  • High-risk obligations from: Aug. 2026
  • Risk classes: 4 levels

The four risk classes of the AI Act

The AI Act classifies AI systems into four categories. Obligations increase with risk potential – from voluntary measures to complete prohibition.

Unacceptable Risk – Prohibited

These AI practices have been prohibited since February 2, 2025. Violations: fines of up to EUR 35M or 7% of global annual turnover, whichever is higher.

  • Social scoring by public authorities
  • Subliminal or manipulative techniques that materially distort behavior
  • Real-time remote biometric identification in publicly accessible spaces (narrow law-enforcement exceptions)
  • AI exploiting vulnerabilities due to age, disability, or social or economic situation

High-Risk AI – Strict Requirements

Applies from August 2, 2026. Comprehensive requirements: risk management, data documentation, human oversight.

  • Biometric identification
  • Critical infrastructure (energy, water)
  • Education & vocational training
  • Employment & HR decisions
  • Essential public/private services
  • Law enforcement & justice
  • Migration & border control

Limited Risk – Transparency Obligation

Users must be informed when they interact with an AI system, and AI-generated or manipulated content must be labeled as such.

  • Chatbots & virtual assistants
  • Deep fake generation
  • Emotion recognition
  • Biometric categorization

Minimal Risk – Voluntary

No specific obligations. Voluntary codes of conduct supported by the EU Commission.

  • AI-powered spam filters
  • AI in video games
  • AI recommendation systems
  • Production optimization
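The four risk classes above can be sketched as a simple lookup, for example when building an internal AI inventory. This is an illustrative sketch, not legal advice; the example use cases and the `classify` helper are assumptions, not part of the regulation:

```python
from enum import Enum

class RiskClass(Enum):
    """The four risk classes of the EU AI Act (Regulation (EU) 2024/1689)."""
    UNACCEPTABLE = "prohibited"    # banned since Feb 2, 2025
    HIGH = "strict requirements"   # Articles 9-15 apply from Aug 2, 2026
    LIMITED = "transparency"       # disclosure obligations
    MINIMAL = "voluntary"          # no specific obligations

# Hypothetical inventory: example use cases mapped to a risk class.
EXAMPLE_CLASSIFICATION = {
    "social scoring": RiskClass.UNACCEPTABLE,
    "hr screening": RiskClass.HIGH,
    "customer chatbot": RiskClass.LIMITED,
    "spam filter": RiskClass.MINIMAL,
}

def classify(use_case: str) -> RiskClass:
    """Look up the risk class for a known use case (illustrative only)."""
    return EXAMPLE_CLASSIFICATION[use_case.lower()]
```

In practice, classification requires a legal assessment of each system against Article 5 and Annexes I and III; a lookup table like this only records the outcome.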

Requirements for High-Risk AI Systems

Articles 9–15 of the AI Act define binding requirements for all high-risk AI systems. These must be fully met before placing on the market.

Risk Management System

Continuous process for identifying, analyzing, and managing risks of the high-risk AI system throughout its lifecycle.

Data Governance & Datasets

Training, validation, and test data must meet quality criteria. Origin, selection, and processing must be documented.
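Documenting origin, selection, and processing can start with a simple provenance record per dataset. A minimal sketch; the field names are assumptions for illustration, not terms from the regulation:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Provenance record for one training, validation, or test dataset."""
    name: str
    split: str                      # "training", "validation", or "test"
    origin: str                     # where the data came from
    selection_criteria: str         # how records were chosen
    processing_steps: list[str] = field(default_factory=list)

    def add_step(self, step: str) -> None:
        """Append a documented processing step (cleaning, labeling, etc.)."""
        self.processing_steps.append(step)
```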

Technical Documentation

Complete documentation before placing on the market. Includes: system architecture, training data, performance metrics, safety measures.

Logging & Traceability

Automatic logging of events during operation. Important decisions must be traceable.
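Automatic event logging can be sketched as an append-only audit log of each decision. The event fields below are assumptions chosen for illustration; a real deployment would need tamper-evident, retention-managed storage:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionEvent:
    """One logged decision of a high-risk AI system."""
    timestamp: float       # Unix time of the event
    system_id: str         # identifier of the AI system
    input_ref: str         # reference to the input data (not the raw data)
    output: str            # the decision or prediction made
    model_version: str     # version of the model that produced it

class AuditLog:
    """Minimal in-memory append-only log of decision events."""
    def __init__(self) -> None:
        self._events: list[str] = []

    def record(self, event: DecisionEvent) -> None:
        # Serialize immediately so later mutation cannot alter the record.
        self._events.append(json.dumps(asdict(event)))

    def events(self) -> list[dict]:
        return [json.loads(e) for e in self._events]
```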

Transparency & User Information

Clear information about the purpose, performance, and limitations of the AI system. Users must know they are interacting with AI.

Human Oversight

High-risk AI must be designed so that humans can monitor, understand, stop, or correct the system.
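One common design pattern for this is a gate around the model: low-confidence decisions are escalated to a human reviewer, and an operator can halt automated decisions entirely. A sketch under assumptions (the threshold value, class name, and escalation behavior are illustrative, not prescribed by the Act):

```python
from typing import Callable

class HumanOversightGate:
    """Wraps an AI decision function so a human can intervene or halt it."""

    def __init__(self, model: Callable[[str], tuple[str, float]],
                 confidence_threshold: float = 0.8) -> None:
        self.model = model
        self.confidence_threshold = confidence_threshold
        self.stopped = False  # emergency stop switch

    def stop(self) -> None:
        """Immediately halt all automated decisions."""
        self.stopped = True

    def decide(self, case: str) -> str:
        if self.stopped:
            return "escalated_to_human"
        decision, confidence = self.model(case)
        # Low-confidence decisions are deferred to a human reviewer.
        if confidence < self.confidence_threshold:
            return "escalated_to_human"
        return decision
```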

Phased Timeline of the EU AI Act

The AI Act applies in four phases following its entry into force. Each phase brings new obligations.

1
August 1, 2024

EU AI Act entered into force

Regulation (EU) 2024/1689 becomes legally effective. Phased application begins.

2
February 2, 2025

Phase 1: Bans in force

Prohibited AI practices (unacceptable risk) are now illegal. Immediate compliance required.

Currently valid – immediate action required
3
August 2, 2025

Phase 2: GPAI Models

Obligations for general-purpose AI (GPAI) models, such as GPT-class models, begin to apply.

4
August 2, 2026

Phase 3: High-Risk AI (Annex III)

Full requirements for high-risk AI systems apply.

5
August 2, 2027

Phase 4: High-Risk AI (Annex I)

AI systems in safety-relevant products must be fully AI Act compliant.

How we help with AI Act compliance

From AI inventory to ongoing AI governance – structured and practical.

Analysis

AI Inventory & Risk Classification

We capture all AI systems in your company and classify them according to AI Act risk classes.

  • Inventory all deployed AI systems
  • Classification into risk classes
  • Gap analysis against high-risk requirements
  • Set priorities and roadmap
2–4 weeks · written risk overview
Implementation

Documentation & Compliance

We support the technical and organizational implementation of all AI Act requirements.

  • Set up risk management system
  • Create technical documentation
  • Implement logging & monitoring
  • Define human oversight mechanisms
  • Create transparency information for users
8–20 weeks per AI system
Support

Ongoing AI Governance

AI regulation evolves quickly. We provide ongoing support for new systems and legislative changes.

  • Introduce new AI systems in an AI Act-compliant way
  • Regular compliance audits
  • Training for AI developers & management
  • Check GPAI compliance for third-party AI
Ongoing · quarterly review


Using AI? Check AI Act compliance now

We help you classify all deployed AI systems and implement the required compliance measures in time.

Free AI compliance consultation

We will get back to you within 24 hours.

© 2025 THE BARK — Vedat EGE · Oberhausen · the-bark.de