The EU AI Act: What Dutch Businesses Need to Know
The EU Artificial Intelligence Act is the world's first comprehensive law regulating AI. It entered into force on August 1, 2024, and its provisions are being phased in through 2027. For Dutch businesses using or planning to use AI, understanding this regulation is no longer optional.
But let us be honest: most coverage of the EU AI Act is written by lawyers, for lawyers. Dense, theoretical, and light on practical guidance. This article cuts through the jargon to give you what you actually need: which parts apply to your business, what you need to do, and when you need to do it.
What the EU AI Act Actually Regulates
The EU AI Act does not ban AI. It does not require you to stop using ChatGPT or shut down your automation workflows. What it does is create a risk-based framework that imposes different levels of requirements depending on how AI is used.
Think of it like food safety regulations. A street food cart has different requirements than a hospital kitchen, even though both serve food. The EU AI Act applies the same logic to AI: low-risk uses face minimal requirements, high-risk uses face significant obligations, and a small number of uses are banned outright.
The Four Risk Categories
Unacceptable Risk (Banned)
These AI practices are prohibited entirely:
- Social scoring by public or private actors (think China's social credit system)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
- AI that manipulates people's behavior in ways that cause harm
- AI that exploits vulnerabilities of specific groups (children, elderly, disabled)
- Emotion recognition in workplace and educational settings
- Untargeted scraping of facial images to build recognition databases
For most Dutch businesses, none of these apply. Unless you are building surveillance systems or social scoring platforms, the banned category is irrelevant to your operations.
High Risk
This is the category that matters most. AI systems are classified as high-risk if they are used in:
| Domain | Examples |
|---|---|
| Employment | AI for CV screening, job interview assessment, promotion decisions |
| Education | AI for exam scoring, student assessment, admissions |
| Critical infrastructure | AI managing energy, water, or transport systems |
| Essential services | AI for credit scoring, life and health insurance pricing, access to social benefits |
| Law enforcement | Predictive policing, evidence evaluation |
| Border management | Risk assessment for travelers |
| Justice | AI for legal research that influences judicial decisions |
High-risk AI systems must:
- Implement a quality management system
- Conduct risk assessments and maintain documentation
- Ensure data quality for training datasets
- Provide transparency to users about the AI system
- Allow human oversight (a human can override or stop the system)
- Meet accuracy, robustness, and cybersecurity standards
- Register in the EU database of high-risk AI systems
Limited Risk (Transparency Requirements)
AI systems that interact with people must disclose that they are AI. This includes:
- Chatbots: Must tell users they are talking to an AI
- Deepfakes: AI-generated or manipulated content must be labeled
- Emotion recognition: If used (where permitted), users must be informed
- AI-generated text: Text published to inform the public must be identifiable as AI-generated, unless it has undergone human editorial review
For most businesses, this means: if you deploy a customer service chatbot, it needs to clearly state that it is an AI, not a human.
Minimal Risk
Everything else. This includes the vast majority of business AI applications:
- AI-powered email sorting
- Workflow automation
- Document processing and data extraction
- Product recommendations
- Inventory forecasting
- Content generation (for internal use)
- Process optimization
Minimal-risk AI has no specific requirements under the EU AI Act, though general principles of transparency and fairness still apply.
What This Means for Dutch MKB
Let us be specific about what the EU AI Act means for typical small and medium businesses in the Netherlands.
If You Use AI for Workflow Automation
Risk level: Minimal
Automating invoice processing, email triage, report generation, or other operational workflows does not trigger high-risk classification. These are internal efficiency tools that do not make decisions about people's rights, employment, or access to services.
What you need to do: Nothing specific beyond general good practice. Use AI responsibly, be transparent with your team about what is automated, and maintain basic documentation of your AI systems.
If You Use AI Chatbots for Customer Service
Risk level: Limited (transparency requirement)
Customer-facing AI chatbots must disclose their AI nature. The user must know they are interacting with an automated system, not a human.
What you need to do:
- Add a clear disclosure that the chatbot is AI-powered (e.g., "You are chatting with our AI assistant")
- If the chatbot can hand off to a human agent, make the transition clear
- Do not design the chatbot to impersonate a real person
This is straightforward and most chatbot platforms already support these disclosures.
If You Use AI for HR Decisions
Risk level: High
This is where it gets serious. If you use AI to screen resumes, assess candidates, or make decisions about promotions, training access, or terminations, the AI system is classified as high-risk.
What you need to do:
- Conduct a conformity assessment before deployment
- Implement human oversight for all AI-influenced decisions
- Document the system's decision-making logic
- Ensure training data is representative and non-discriminatory
- Register the system in the EU database
- Maintain logs of the system's decisions for audit
Practical advice for MKB: If you use an AI recruitment tool (like AI resume screening), check with your vendor about their EU AI Act compliance. The compliance obligation falls on both the provider of the AI system and the deployer (you). If your vendor is not compliant, you bear the risk.
If You Generate Content with AI
Risk level: Minimal to Limited
Using AI to draft marketing copy, generate product descriptions, or create internal documentation is minimal risk. However, if the content could be mistaken for human-created work in contexts where that distinction matters, transparency requirements apply.
What you need to do:
- For internal use: No specific requirements
- For published content: Consider disclosing AI assistance, especially for content that could influence decisions (financial advice, health information)
- For synthetic media: AI-generated images, audio, or video must be labeled
The Timeline
The EU AI Act is being phased in gradually:
| Date | What Happens |
|---|---|
| August 2024 | Act enters into force |
| February 2025 | Banned AI practices prohibited |
| August 2025 | Rules for general-purpose AI models apply |
| August 2026 | High-risk AI system requirements take effect |
| August 2027 | Full enforcement, including for AI in regulated products |
Key date for most businesses: August 2026. That is when high-risk AI requirements become enforceable. If you are using or planning to use AI in any high-risk category, you have until then to ensure compliance.
For minimal and limited risk applications, there are no hard deadlines, but adopting good practices now is sensible.
Penalties
The EU AI Act includes significant penalties for non-compliance:
| Violation | Maximum Fine |
|---|---|
| Banned AI practices | EUR 35 million or 7% of global annual turnover |
| High-risk non-compliance | EUR 15 million or 3% of global annual turnover |
| False information to regulators | EUR 7.5 million or 1% of global annual turnover |
For SMEs and startups, each fine is capped at the lower of the fixed amount and the turnover percentage, rather than the higher, but the exposure is still substantial enough to matter. The regulation explicitly states that fines should be "effective, proportionate, and dissuasive."
Practical note: Enforcement will initially focus on the most obvious violations (banned practices, clearly non-compliant high-risk systems). But as the regulatory infrastructure matures, expect broader enforcement. Getting ahead of compliance is always cheaper than reacting to enforcement actions.
The Dutch Angle: National Implementation
The Netherlands has been proactive on AI governance. Key points for Dutch businesses:
The Dutch AI Authority
The Netherlands has set up a national algorithm watchdog (algoritmetoezichthouder), housed within the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) as the Directie Coördinatie Algoritmes. Together with other Dutch regulators, it will be responsible for:
- Enforcing the EU AI Act in the Netherlands
- Providing guidance to businesses
- Handling complaints about AI systems
- Coordinating with other EU member state authorities
The Dutch Algorithm Register
The Dutch government already maintains a public algorithm register listing AI systems used by government agencies. While this is currently limited to public sector use, it sets a precedent for the transparency that the EU AI Act will require of high-risk AI systems.
Dutch Data Protection Authority (Autoriteit Persoonsgegevens)
The AP already enforces GDPR, and because it also houses the algorithm watchdog, cases where AI and data protection overlap will often sit with the same regulator. If your AI system processes personal data (most do), you need to comply with both GDPR and the EU AI Act.
How the EU AI Act Intersects with GDPR
For Dutch businesses already compliant with GDPR, the EU AI Act adds another layer but does not replace existing obligations. Key overlaps:
| Requirement | GDPR | EU AI Act |
|---|---|---|
| Transparency | Privacy notices about data processing | Disclosure of AI involvement |
| Data quality | Accuracy of personal data | Quality of training data |
| Human oversight | Right not to be subject to automated decisions | Human oversight for high-risk AI |
| Documentation | Records of processing activities | Technical documentation of AI systems |
| Impact assessment | DPIA for high-risk processing | Conformity assessment for high-risk AI |
If you already have solid GDPR practices, you are halfway to EU AI Act compliance. The additional work is primarily around AI-specific documentation and transparency.
Practical Steps for Dutch MKB
Here is what we recommend you do now, regardless of your current AI usage:
Step 1: Inventory Your AI Systems
Make a list of every AI tool and system your business uses. Include:
- The tool name and provider
- What it does
- What data it processes
- What decisions it influences
- Who is affected by those decisions
This inventory is the foundation for everything else. You cannot assess risk if you do not know what AI you are using.
Step 2: Classify by Risk Level
For each AI system in your inventory, determine its risk category using the framework above. Most MKB applications will fall into minimal or limited risk. Flag anything that touches employment, credit, or other high-risk domains.
Step 3: Address Transparency Requirements
For any customer-facing AI (chatbots, AI-generated content), ensure proper disclosures are in place. This is low effort and should be done immediately.
Step 4: Plan for High-Risk Compliance
If any of your AI systems are high-risk, start planning for compliance now. The August 2026 deadline seems far away, but conformity assessments, documentation, and system modifications take time.
Step 5: Choose Compliant Partners
When selecting AI tools and automation partners, ask about their EU AI Act compliance plans. A good partner should:
- Understand which risk category their tools fall into
- Have a compliance roadmap
- Be able to support your documentation requirements
- Offer EU-hosted options for data residency
Step 6: Train Your Team
Ensure that employees who use AI tools understand the basics of the EU AI Act. They do not need to be legal experts, but they should know:
- That AI disclosures are required for customer-facing systems
- That high-risk AI decisions require human oversight
- How to escalate concerns about AI behavior
The Competitive Advantage of Early Compliance
Here is the counter-intuitive truth: the EU AI Act is not just a cost. It is an opportunity.
Dutch businesses that get ahead of compliance can:
- Win enterprise clients who require vendors to be EU AI Act compliant
- Build customer trust by demonstrating responsible AI use
- Avoid the rush when enforcement begins and every business scrambles to comply
- Influence industry standards by establishing best practices before they are mandated
- Reduce liability by having documented risk assessments and oversight procedures
In a market where trust matters (and in the Netherlands, it does), being able to say "our AI systems are EU AI Act compliant" is a genuine competitive advantage.
The Bottom Line
The EU AI Act is not a reason to avoid AI. It is a framework that, when understood, actually makes AI adoption safer and more sustainable. For most Dutch MKB businesses, the requirements are manageable: be transparent about AI use, maintain good documentation, and ensure human oversight for consequential decisions.
The businesses that will struggle are the ones who ignore the regulation until enforcement begins. The ones who will thrive are the ones who integrate compliance into their AI strategy from the start.
Have questions about how the EU AI Act affects your AI plans? Get in touch and we will help you navigate compliance while building automations that deliver real value. Or read our guide on AI automation for Dutch MKB for a broader perspective on what is possible.