
EU AI Act: What Mid-Market Companies Need to Know Now

January 20, 2026 · 9 min read

The EU AI Act entered into force in August 2024 and has been phasing in ever since: prohibitions apply since February 2025, rules for general-purpose AI since August 2025, and most high-risk obligations follow from August 2026. Most companies have heard of it. Very few know what it concretely means for them. That's not due to a lack of interest, but because the discussion has so far been dominated by lawyers and lobbyists. Here's the translation for everyone who doesn't discuss AI theoretically but deploys it practically.

What Is the EU AI Act?

The EU AI Act is the world's first comprehensive regulation for artificial intelligence. It classifies AI systems by risk categories and defines specific requirements for each category. The goal: enable innovation, but not at the expense of safety, transparency, and human control. Sounds reasonable. It is. The only question is: what does it mean for your specific automation project?

The Four Risk Categories

Unacceptable risk: Prohibited. Social scoring, manipulative AI, real-time biometric surveillance. Doesn't affect most mid-market companies.

High risk: Strict requirements. AI in critical infrastructure, credit scoring, hiring, insurance, medical devices. This is where it gets relevant.

Limited risk: Transparency obligations. Chatbots must identify themselves as AI. Deepfakes must be labeled.

Minimal risk: No special requirements. Spam filters, AI-powered games, most recommendation systems.

What Affects Mid-Market Companies?

If your company uses AI for credit decisions, claims processing, hiring, or medical support — or plans to — you likely fall under the "high-risk AI" category. This means: you need a risk management system for your AI applications. You must document the training data. You need human oversight — not as a fig leaf, but as a demonstrable architectural component. You must register your AI system in the EU database for high-risk AI systems. You need technical documentation and a conformity assessment before the system goes live.

Concrete Steps: What You Should Do Now

Step 1: Inventory. Which AI systems do you currently use? Which are you planning? Classify each system according to the risk categories.

Step 2: Gap analysis. Where do you already meet the requirements? Where don't you? Particularly critical: human oversight and documentation.

Step 3: Human-in-the-loop as an architectural principle. Don't add human checkpoints after the fact — build them in from the start. Workflow orchestration with defined escalation points fulfills the AI Act requirements by design.

Step 4: Build documentation. Training data, decision logic, confidence thresholds, escalation rules — everything must be traceably documented.
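The inventory in Step 1 can start as something very simple — a structured list of systems with their risk category, which then feeds the gap analysis in Step 2. A minimal sketch (the system names and use cases are hypothetical examples, not prescribed by the Act):

```python
from dataclasses import dataclass

# Risk categories from the EU AI Act, ordered by severity.
CATEGORIES = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AISystem:
    name: str
    use_case: str
    risk_category: str  # must be one of CATEGORIES

    def __post_init__(self):
        if self.risk_category not in CATEGORIES:
            raise ValueError(f"unknown risk category: {self.risk_category}")

# Hypothetical inventory of two internal systems.
inventory = [
    AISystem("claims-triage", "insurance claims processing", "high"),
    AISystem("support-bot", "customer chat", "limited"),
]

# The gap analysis (Step 2) starts with the high-risk subset.
high_risk = [s.name for s in inventory if s.risk_category == "high"]
```

Even a table this small forces the two questions that matter: which category does each system fall into, and which obligations follow from that.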

Human-in-the-Loop as a Compliance Solution

The good news: if you implement AI projects with a human-in-the-loop architecture, you automatically fulfill a large portion of the AI Act requirements. Defined thresholds for human intervention? Check. Traceable decision chains? Check. Audit trail? Check. Human oversight? Check. The EU AI Act isn't an obstacle for AI in mid-market companies. It's a quality standard — and companies that take it seriously build better AI systems.
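The checklist above — threshold, traceable decision, audit entry — fits into one routing function. A minimal sketch, assuming a hypothetical confidence score from your model and an in-memory audit log (in production this would be an append-only store):

```python
import time

CONFIDENCE_THRESHOLD = 0.85  # hypothetical value; set per use case and document why

audit_trail = []  # every decision is recorded, automated or not

def decide(case_id: str, model_score: float) -> str:
    """Route a case: automate above the threshold, escalate below it."""
    if model_score >= CONFIDENCE_THRESHOLD:
        decision = "auto_approve"
    else:
        decision = "escalate_to_human"  # the defined human checkpoint
    # Traceable decision chain: what was decided, on what basis, and when.
    audit_trail.append({
        "case": case_id,
        "score": model_score,
        "threshold": CONFIDENCE_THRESHOLD,
        "decision": decision,
        "timestamp": time.time(),
    })
    return decision

decide("claim-001", 0.93)  # above threshold: handled automatically
decide("claim-002", 0.41)  # below threshold: routed to a human reviewer
```

The point is not the ten lines of code, but the architecture they encode: the threshold is explicit and documented, every decision leaves a trace, and human intervention is a defined path rather than an afterthought.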

Ready for the next step?

Let's discuss your requirements — no commitment, concrete results.