The EU AI Act is here – and we’ll show you how to get started with it!

by | 24. Jun 2025 | Digitale Transformation, Latest, News

The EU AI Act:

What applies now if you work with AI

The EU AI Act is no longer a vision for the future, but applicable law. The AI Act came into force in June 2024, and the obligations apply in stages: Prohibited systems apply from the beginning of 2025, obligations for general-purpose AI (GPAI) from mid-2025, and regulations for high-risk systems and conformity assessments from 2026. It doesn’t matter whether you work in marketing, HR, customer service, or product development, because AI tools are now used almost everywhere. ChatGPT, Copilot, Firefly (Adobe), and other helpers have long been an integral part of our everyday work.
But the days of simply using them without a second thought are over. Now there are rules. And they apply not only to large tech companies, but also to start-ups, medium-sized businesses, project teams, and individuals who use AI professionally.

What exactly is artificial intelligence?

Essentially, it is about machines performing tasks that we normally associate with human intelligence—for example, writing texts, preparing decisions, generating images, or processing language. What exactly this entails depends on the tool in question and the risk associated with its use. This is precisely where the EU AI Act comes in.

General-Purpose AI

General AI, also known as general-purpose AI (GPAI), refers to AI systems that can be used for a wide range of tasks and applications. These systems are not limited to a specific task, but can be used flexibly in different contexts. Examples of GPAI include large language models such as ChatGPT, which can be used for translation, text generation, answering questions, and much more. GPAI systems are generally versatile and can be adapted to different use cases.

The EU AI Act contains specific provisions for GPAI models, particularly those that pose systemic risks. These models must meet strict requirements, including:

– Evaluation and adversarial testing: Providers of GPAI models must regularly evaluate their models and conduct adversarial testing to identify and mitigate potential risks.

– Incident reporting: Serious incidents occurring in connection with the use of GPAI models must be reported to the AI Office.

– Cybersecurity measures: GPAI models must meet adequate cybersecurity measures to ensure that they are protected against attacks and misuse.

Risk groups instead of flat-out bans

The legislator distinguishes between four risk groups: prohibited AI, high-risk AI, limited-risk AI, and minimal-risk AI.
Prohibited systems include tools that engage in social scoring, i.e., attempt to extract information about people from social networks without their knowledge. Or deepfakes that are so realistic that they are almost indistinguishable from real recordings.

High-risk AI includes systems that are used for important decisions, such as diagnosing serious illnesses, selecting applicants, or credit checks. Particularly strict requirements apply here, including risk management, traceability, security standards, and human control.
Whether a specific tool such as ChatGPT falls under limited risk depends on the specific use case. GPT-4 in a job applicant platform, for example, would be high risk. As a pure text tool for creative support, it is more likely to be “low” or “limited risk.”
And then there is minimal risk: tools such as Grammarly or DeepL, which correct or translate content but do not prepare decisions or generate new information.

What needs to be labeled?

As soon as you reuse AI content that is used for third parties or the public, the labeling requirement applies. This applies to texts, images, videos, audio, visualizations, or recommendations. It is therefore not sufficient to simply have ChatGPT generate a sentence in the workshop and leave it in the minutes without any reference. This must also be labeled if it is shared externally. The wording can be simple, for example: “This text was created with the support of an AI tool.” Or: “Image material generated with the help of artificial intelligence.”
There are no clear regulations (yet) for purely internal notes with no external impact – a “best practice” recommendation would be more precise here.
The key point is that the labeling must be directly visible and also readable by systems.

What does this mean for companies?

Clear structures are needed. The use of AI must no longer be a matter of chance. It must be defined which tools may be used, how AI content is handled, how labeling works, who is responsible internally, and how data protection is taken into account. Technical documentation requirements and conformity checks are also required, especially for high-risk AI, such as in the HR sector for automated applicant evaluation.
Anyone who ignores this is taking a real risk. Violations of the AI Act are punishable by fines of up to €35 million or 7 percent of global annual turnover, whichever is higher.

AI Regulatory Sandboxes

Each Member State must establish at least one regulatory sandbox for AI at the national level by August 2, 2026. These sandboxes provide a controlled environment in which companies can test AI systems under supervision to ensure compliance with regulations. The sandboxes are particularly useful for companies that want to develop and test innovative AI solutions without having to meet all regulatory requirements immediately. They enable companies to test their systems in a real-world environment while receiving feedback from regulatory authorities.

The sandboxes offer several advantages:

Promotion of innovation: Companies can test and further develop new AI technologies and applications in a secure environment.

Regulatory support: Companies receive support and guidance from regulatory authorities to ensure that their AI systems comply with the requirements of the EU AI Act.

Risk minimization: The controlled test environment allows potential risks and problems to be identified and resolved at an early stage before the systems are launched on the market.

The establishment of AI regulatory sandboxes is an important step toward promoting innovation in the field of AI while ensuring that the systems developed are used safely and ethically.

Support and guidelines

The European Commission has launched the AI Pact, a voluntary initiative designed to help companies familiarize themselves with the key obligations of the AI Act at an early stage and implement them. This is intended to facilitate the transition to the new regulatory framework.

Why all this?

Because AI brings not only opportunities, but also responsibility. Users have a right to know whether content comes from a human or a machine. And companies must ensure that AI is not used in an uncontrolled manner. The EU AI Act aims to create security for everyone who works with AI or comes into contact with its results.

gezeichnete sandsurfing Figur in Beduinenkleidung

What companies should clarify now:

– What tools do you use?
Make a complete overview. What is actively in use, officially and unofficially?

– Which risk group do your tools fall into?
Classify all AI systems used; this is the basis for all further measures. You can easily check this for each individual tool on the EU website.

– How do you label content?
Define clear, consistent labels for text, images, audio, and video. And make sure everyone uses them.

– Where do you document AI usage?
Determine who created what with which tool and where it is stored.

– Who is allowed to use which tools?
Define roles, approvals, and limits. Not everyone needs access to everything.

– Are your tools compliant with data protection regulations?
Clarify what data is entered, how it is processed, and whether it complies with the GDPR.

– Are your employees trained?
Training, awareness formats, and clear guidelines are mandatory.

– Who is the contact person for questions?
Appoint a responsible person or a small team internally for AI questions.

– Do you use high-risk AI?
Then further obligations apply: technical documentation, risk management, registration.

– Do you regularly check what is changing?
Tools evolve. Your rules must also be reviewed regularly.

Cooperation and harmonization

The EU AI Act aims to create a harmonized regulatory environment for AI across the European Union. This is to ensure that AI systems are used safely and ethically and to strengthen user confidence in these technologies. Cooperation between member states and the European Commission is crucial to achieving this goal.

Future developments

As AI technology is evolving rapidly, the EU AI Act will be regularly reviewed and updated to ensure that it addresses the latest technological developments and challenges. Companies should be prepared for continuous adjustments and updates to the regulations.

Conclusion: Don’t wait, get started

The EU AI Act is not a theoretical model, but reality, and yes, it does entail new obligations. But above all, it brings structure to a topic that has often been left to grow wild. Anyone who uses AI should do so not only technically, but also legally and ethically sound. Now is the right time to set up internal processes, raise awareness, and take responsibility.
AI is here to stay, and the AI Act helps us find the right way to deal with it.

Useful links:

  1. Here an Insight how we work with AI
  2. Official European Commission website on the EU AI Act: Here you will find comprehensive information and official documents on the EU AI Act.
  3. EU Artificial Intelligence Act – High-level Summary: A summary of the most important points of the EU AI Act.
  4. European Parliament – EU AI Act: Information from the European Parliament on the EU AI Act.
  5. EU AI Act Compliance Checker: A tool for checking compliance with the EU AI Act.