Explainer: How to Comply with the European Union’s Artificial Intelligence Act

Does anyone on your team use DeepL, ChatGPT, Copilot, or other artificial intelligence (AI) platforms for any activities in the European Union? If so, your company is subject to the European Union’s AI Act, which entered into force on 1 August 2024. This complex, wide-reaching legislation is back in the news because general-purpose AI (GPAI) obligations begin on 2 August 2025 for systems like chatbots (ChatGPT, Gemini, Copilot, etc.).

How does this law impact your company? Put simply, non-compliance could result in hefty fines or block your access to the European Union (EU) market. This legislation could also influence regulations in other regions, so staying informed will help your company prepare for potential future requirements outside the EU.

Read on to explore the highlights and find out what you need to do to avoid these risks and implement “human-centric and trustworthy artificial intelligence” that complies with the AI Act.

What companies does the AI Act apply to?

The regulation outlines seven categories of companies and individuals to which the Act applies. In this article, we’ll focus on the main two:

  • Companies that develop AI systems or models for use in the European Union, regardless of where they are headquartered (called “providers”)
  • Companies that use AI systems or output from these systems in the EU, regardless of their location (known as “deployers”)

What AI systems does the Act regulate?

Now that we’ve covered who the law applies to, let’s look at the types of AI it regulates. According to an At a Glance publication from the European Parliament Think Tank, “The Artificial Intelligence (AI) Act regulates AI systems according to the risks they pose, and general-purpose AI (GPAI) models according to their capabilities.”

The AI Act explicitly prohibits certain AI practices deemed unacceptable risks and defines a clear set of requirements for developers and users of high-risk AI systems that “could have a significant harmful effect on the health, safety or fundamental rights of individuals”.

The European Commission categorizes AI systems into four risk tiers:

  1. Unacceptable risk:
    Practices in this category are banned outright. They include manipulative AI, social scoring, predictive policing, emotion recognition in workplaces, and real-time biometric surveillance in public spaces.
  2. High risk:
    This designation covers strictly regulated systems used in critical areas like healthcare, autonomous driving, recruitment, law enforcement, finance, and education. Providers must conduct risk assessments, maintain documentation, implement human oversight, and undergo conformity checks.
  3. Limited risk:
    This tier covers chatbots (like ChatGPT) and content generators, which are subject to transparency obligations. For example, it must be clear to users when a photo or video has been generated or manipulated by AI.
  4. Minimal or no risk:
    The Commission says “the vast majority of AI systems currently used in the EU fall into this category”; these systems have little or no impact on rights or safety. Examples include AI-enabled video games and spam filters. The Act doesn’t regulate them but encourages providers and deployers to follow voluntary codes of conduct.

The Act designates AI systems in the following areas as high risk:

  • Biometrics
  • Critical infrastructure: digital, road traffic, supply of water, gas, heating or electricity
  • Education and vocational training
  • Employment
  • Essential private services and essential public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

What’s the timeline for compliance?

The AI Act’s requirements have been phasing in gradually since it became law in 2024. For example, prohibited practices (those posing unacceptable risk) were banned and new AI literacy obligations took effect on 2 February 2025.

The next big deadline is 2 August 2025, mostly focused on requirements for GPAI models. New GPAI models released after this date must comply with the AI Act, but existing models don’t have to fully comply until 2 August 2027. In the meantime, it’s not yet clear how much fine-tuning a company can do on a model before it is considered a new release and therefore immediately subject to the Act’s provisions.

Most of the Act’s remaining provisions, including the obligations for high-risk AI systems listed in Annex III, apply from 2 August 2026. Classification rules for high-risk AI systems embedded in regulated products and the corresponding obligations follow on 2 August 2027.

What should your company do?

Most companies fall into the “deployer” category, even those that customize an AI model for internal use. As a deployer, you have the following responsibilities:

  • Make sure your staff use the AI system in a safe, compliant way
  • Provide training on AI literacy (Article 4), human oversight (Article 14), and transparency (Article 13), depending on how the AI is used
  • Comply with additional obligations such as logging, monitoring, and risk management, if your system is in a high-risk category

Consider the following questions as a starting point to evaluate how the AI Act impacts your company.

  • How is AI used at our company? Who is using it? How is this documented?
  • What policies and procedures govern AI use at our company?
  • Do humans always review AI-generated content? Are generated photos and videos clearly labeled?
  • Are we in a high-risk segment? If so, how are we preparing for the high-risk AI system requirements?
  • Do we provide AI literacy training for employees? How do we document this training?

Not sure what category your company falls into and whether your AI systems are considered high-risk? Try the Future of Life Institute’s EU AI Act Compliance Checker.

What you need to know about AI literacy under the EU AI Act

  • AI literacy is now a requirement
    As of 2 February 2025, companies that develop or use AI systems in the EU must ensure that people interacting with AI are “sufficiently AI literate”.
  • It’s not just for developers
    AI literacy isn’t only for technical teams. It applies to employees, external partners, service providers, and even clients who use or are affected by your AI systems.
  • AI literacy means more than knowing how AI works
    People should understand what AI can and can’t do, its risks and benefits, and how to use it responsibly. That includes legal and ethical obligations like transparency and human oversight.
  • Training should be tailored
    There’s no one-size-fits-all approach. The level of training depends on:
    o Whether you’re developing or deploying AI.
    o The type of system involved—basic or high-risk.
    o The background and role of each group of users.
  • Documentation matters more than certification
    You don’t need formal training certificates, but you do need to keep internal records showing that people have received appropriate guidance or training.
  • Instructions alone are not enough
    Just handing out manuals isn’t sufficient—especially for higher-risk AI systems. Meaningful training or onboarding is expected.
  • You have until August 2026 before penalties apply
    Enforcement starts in August 2026, giving organizations time to put the right training and documentation in place—but the obligation is already in effect, so don’t wait.
  • Helpful resources are available
    The EU provides support like the AI Pact, training examples, webinars, and learning frameworks. These can help you build practical, role-based literacy programs.

What are the penalties for non-compliance?

  • Unacceptable risk violations: up to €35 million or 7% of global annual revenue, whichever is higher
  • High-risk, GPAI, and most other violations: up to €15 million or 3% of global annual revenue
  • False or misleading information provided to authorities: up to €7.5 million or 1% of global annual revenue

Smaller companies and startups will receive proportionate fines and have access to support programs like regulatory sandboxes.

What is the tech industry saying?

On 3 July, top European tech firms including Mistral and other big businesses like Airbus and Carrefour wrote an open letter to the European Commission to voice concerns about this legislation and request that the EU delay enforcement of the Act. They say key obligations aren’t clear enough and that they need more time to meet the timelines. These companies argue that rushing enforcement may hurt innovation and global competitiveness if firms in other regions have looser or slower regulation.

However, on 4 July, European Commission spokesperson Thomas Regnier said, “I’ve seen, indeed, a lot of reporting, a lot of letters and a lot of things being said on the AI Act. Let me be as clear as possible, there is no stop the clock. There is no grace period. There is no pause,” as reported by Reuters.

On 10 July, the Commission published a voluntary Code of Practice for general-purpose AI designed to help providers comply with the AI Act provisions going into effect on 2 August. According to the Commission, “AI model providers who voluntarily sign it can show they comply with the AI Act by adhering to the Code. This will reduce their administrative burden and give them more legal certainty than if they proved compliance through other methods.”

France-based Mistral was the first AI company to announce it would sign the Code of Practice, according to Politico. U.S.-based OpenAI (ChatGPT) announced on 11 July that it had also decided to sign. We’ll keep you updated on whether other big tech firms like Alphabet and Meta sign on, and on the resulting impact on your company.

ITC Global: Your AI Act Compliance Partner

The AI landscape is complex and constantly evolving. Keeping track of how AI is used at your company and making sure everyone follows the rules is challenging. We’re here to help! Our R&D and Innovation Engineering team keeps up with the latest developments and makes sure our systems and processes fully comply with the AI Act so you can focus on reaching new markets with confidence.

We also help you eliminate other common risks linked to AI use in global content workflows:

  • Data security: When staff use free or personal AI tools without proper controls
  • Inconsistent content: When different teams translate the same material using different tools or methods
  • Accuracy issues: When AI-generated content isn’t carefully reviewed by a human
  • Brand reputation: When low-quality or unedited AI content affects how your company, products, and services are perceived

Contact us today to find out more about the AI Act and how ITC Global helps you responsibly use AI to empower your growth.


Ready to work with us?

Speak the language of your customers, prospects, partners, and employees around the world with ITC Global’s full suite of solutions powered by our unique blend of talent and technology. Every language solution you need, from translation to AI technology. Tailored to you. All in one place.