explainable-ai · explainability · ai-act · compliance

Explainable AI Decision-Making: A Practical SMB Guide

April 24, 2026 · 7 min read · Pixel Management

This article is also available in Dutch

If your AI helps decide on a job applicant, a credit application, an insurance claim, or even a personalized price, the affected person can ask why the outcome is what it is. That's no longer an optional courtesy — under GDPR (Art. 22) and the EU AI Act, concrete obligations apply to make automated decisions explainable. And "the model said so" is not a valid answer.

For SMBs, this is often the most confusing piece of AI compliance. Explainability sounds like an academic concept, while what you really want to know is: what should I set up today so I'm not in trouble when the data protection authority or a customer asks? This article answers exactly that question.

The difference between explainability and transparency

First, a misconception worth clearing up: explainability and transparency are not the same. Transparency is about whether you disclose that you're using AI — that obligation we cover in AI transparency requirements for businesses. Explainability is about whether you can explain why that AI reached a specific outcome in a specific case.

That difference matters because the two obligations are enforced differently. A transparency violation typically results in a warning or a required website change. An explainability violation tied to an impactful decision — say, a rejected applicant — can lead to an obligation to revisit the decision plus a fine.

In practice, explainability is the harder of the two. Adding a transparency notice to your website takes an hour. Setting up an AI system so that for every decision you can name the determining factors — that requires architectural choices you have to make during the build, not at the end.

When is explainability legally required?

Not every AI decision falls under an explanation obligation. Three categories where it does:

1. Fully automated decisions with legal effect. GDPR Art. 22 — when AI alone, without meaningful human intervention, makes a decision that produces legal effects or "similarly significantly affects" the person. Examples: automatic credit denial, algorithmic per-customer pricing, AI-driven applicant screening without human review.

2. High-risk AI systems under the AI Act. Annex III of the AI Act lists these: HR tools (recruitment, evaluation), credit scoring, education admissions, biometric categorization, AI in critical infrastructure. (Real-time biometric identification in public spaces is largely prohibited under Art. 5, not classified as high-risk.) For the high-risk categories, an enhanced explanation obligation applies, plus documentation and logging requirements.

3. On any data subject request. Even if your AI application isn't "high-risk," the affected person can, under GDPR, ask why an automated processing reached this outcome. That right exists regardless of whether your use case sits on a list.

For a deeper dive into AI Act categories, see EU AI Act August 2026: what now?.

Three levels of explanation — pick the right one

Not every explanation has to be the same. Three common levels, in increasing complexity:

Level 1: Global explanation

What does this AI system do in general terms? Which factors does it weigh, what is it designed to achieve? This is the explanation that goes on your website or in your service terms. Example: "Our pricing model uses 12 factors including order volume, geographic location, and historical behavior. The model maximizes customer satisfaction within cost constraints."

For most SMB applications, this level is mandatory and sufficient for the transparency obligation.

Level 2: Local explanation per decision

Why did this specific customer get this specific outcome? Which factors contributed most? Example: "Your application was denied, primarily because: 1) the application period is shorter than 6 months (impact: 45%), 2) the company size falls outside our standard range (impact: 35%), 3) the sector classification requires additional review (impact: 20%)."

This level is mandatory once you fall under Art. 22 or are high-risk. For lower-risk use cases, it's strongly recommended — it's usually the explanation a person expects when they ask "why?".
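A local explanation like the example above can be rendered directly from per-factor impact scores. A minimal sketch, assuming those scores are already computed by your attribution step (the factor names and numbers here are invented):

```python
# Hypothetical sketch: rendering per-factor impact scores as a local
# explanation. Factor names and scores are invented for illustration.

def local_explanation(decision: str, impacts: dict[str, float]) -> str:
    """List the contributing factors, largest impact first."""
    total = sum(impacts.values())
    lines = [f"Your application was {decision}, primarily because:"]
    ranked = sorted(impacts.items(), key=lambda kv: kv[1], reverse=True)
    for i, (factor, score) in enumerate(ranked, start=1):
        lines.append(f"{i}) {factor} (impact: {score / total:.0%})")
    return "\n".join(lines)

print(local_explanation("denied", {
    "the application period is shorter than 6 months": 0.45,
    "the company size falls outside our standard range": 0.35,
    "the sector classification requires additional review": 0.20,
}))
```

The point of the sketch: once attribution scores exist per decision, the customer-facing text is a formatting step, not a research project.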

Level 3: Counterfactual explanation

What would have needed to be different for a different outcome? Example: "Your application would have been approved if: 1) the application period had been at least 6 months, or 2) the requested amount had been below €5,000."

This level is rarely legally required but often gives the person the most satisfying answer — and prevents follow-up questions or complaints.
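One way to generate such counterfactuals is a brute-force search over candidate changes, assuming your decision rule is callable in isolation. Everything in this sketch (the approve() rule, the fields, the thresholds) is invented for illustration:

```python
# Brute-force counterfactual search: try single-field changes and report
# which ones flip the decision. Rule, fields, and thresholds are invented.

def approve(app: dict) -> bool:
    return app["period_months"] >= 6 and app["amount_eur"] < 5000

def counterfactuals(app: dict, candidates: dict) -> list:
    """Human-readable single changes that would flip a denial to approval."""
    return [
        description
        for field, (new_value, description) in candidates.items()
        if approve({**app, field: new_value})
    ]

application = {"period_months": 4, "amount_eur": 4000}
flips = counterfactuals(application, {
    "period_months": (6, "the application period had been at least 6 months"),
    "amount_eur": (2500, "the requested amount had been lower"),
})
# Only the period change flips the outcome; the amount already met its limit.
```

For simple rule-based or tree-based models this search is cheap; for opaque models it becomes an optimization problem, which is one more argument for the simpler model where it suffices.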

Practical techniques to build in explainability

Three approaches that work for SMB systems:

Pick inherently explainable models where possible. Decision trees, linear regression, and rule systems are explainable by construction — you can directly see which factor contributed what. For many SMB use cases (credit scoring, lead prioritization, simple risk classification), these methods are good enough and compliance becomes much simpler. Don't automatically jump to deep neural networks.
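With a linear scoring rule, the explanation falls out of the model itself: each weighted term is a reportable contribution. A minimal sketch, with invented weights, threshold, and feature names:

```python
# Minimal sketch of an inherently explainable model: a linear scoring rule
# whose weighted terms ARE the explanation. All values are invented.

WEIGHTS = {"order_volume": 0.5, "years_as_customer": 1.5, "open_invoices": -2.0}
THRESHOLD = 3.0

def score_with_explanation(features: dict):
    """Return (decision, per-factor contributions) in one pass."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sum(contributions.values()) >= THRESHOLD, contributions

approved, why = score_with_explanation(
    {"order_volume": 2, "years_as_customer": 3, "open_invoices": 0}
)
# Each entry in `why` is directly reportable: no post-hoc tooling needed.
```

No separate explainer library, no approximation error between the model and its explanation: the contributions are the computation.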

Use feature attribution for more complex models. Libraries like SHAP and LIME can retrospectively explain, for many machine learning models, which input variables contributed most to a decision. For classical tabular models the implementation is a few days of work; for neural networks or LLMs it takes more thought. In both cases you end up with reportable explanations per decision.
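To show the shape of the output such libraries produce, here is a deliberately simplified leave-one-out attribution in pure Python. SHAP and LIME are far more principled (they account for feature interactions, among other things); this toy version, with an invented black-box model and baseline, only illustrates the idea of per-feature contributions:

```python
# Simplified stand-in for SHAP/LIME-style output: leave-one-out attribution
# against a baseline. Model and baseline values are invented.

def black_box(x: dict) -> float:
    return 0.4 * x["a"] + 0.6 * x["b"]   # pretend we cannot see inside

def leave_one_out(x: dict, baseline: dict) -> dict:
    """Attribute each feature as the score drop when reset to its baseline."""
    full = black_box(x)
    return {f: full - black_box({**x, f: baseline[f]}) for f in x}

attributions = leave_one_out({"a": 1.0, "b": 1.0}, {"a": 0.0, "b": 0.0})
# `attributions` holds one reportable contribution per input variable.
```

Whatever tool you use, the deliverable is the same: a per-decision mapping from input variables to contributions that you can store and later render as a Level 2 explanation.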

Build decision traces for LLM applications. With generative AI or LLM-driven decisions, feature attribution is technically harder. A different approach works here: log the prompt, the retrieval context (which documents the model used), the intermediate steps (what the model reasoned before reaching its conclusion), and the final output. That trace is your explanation.
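Such a trace can be as simple as an append-only JSONL log written at the call site, assuming you control that call site. The field names below are illustrative, not a standard:

```python
# Sketch of a decision trace for an LLM-driven decision, written as an
# append-only JSONL log. Field names are illustrative, not a standard.

import json
import time
import uuid

def record_trace(prompt, context_docs, steps, output, path="traces.jsonl"):
    trace = {
        "trace_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "prompt": prompt,
        "retrieval_context": context_docs,  # which documents the model used
        "intermediate_steps": steps,        # reasoning before the conclusion
        "final_output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(trace) + "\n")
    return trace["trace_id"]
```

Returning the trace ID lets you store it alongside the decision record, so a later "why?" question resolves to one lookup instead of an ad-hoc reconstruction.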

Logging is an essential foundation here — see our piece on AI audit trail and compliance logging for the broader infrastructure. Without logs, you have nothing to explain from.


How to phrase a good explanation

Technical explainability is one thing; an explanation the affected person actually understands is another. Four guidelines for good explanations:

Avoid technical jargon. "The gradient boost feature importance of variable X was 0.34" is not an explanation for a normal customer. "The duration of your customer relationship was the most important factor in this decision" is.

Be concrete and specific. Vague explanations ("based on your profile") are not explanations. Concrete explanations ("the combination of your application amount and application period") are.

Give actionable handles. A good explanation tells not only why something happened but what the affected person can do about it — if anything can be done. This connects to the counterfactual explanation in level 3.

Offer appeal and human review. Under Art. 22 GDPR, the affected person has the right to object and to obtain human intervention. Make that process visible and easy — not buried in a 30-page privacy policy.
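The first guideline, avoiding jargon, is in practice often just a translation table from internal feature names to plain-language phrases, applied before any explanation reaches the customer. A sketch with invented names:

```python
# Sketch of the jargon guideline: map internal feature names to plain
# language before an explanation reaches the customer. Names are invented.

PLAIN_LANGUAGE = {
    "relationship_months": "the duration of your customer relationship",
    "request_amount_eur": "the requested amount",
}

def humanize(top_factor: str) -> str:
    phrase = PLAIN_LANGUAGE.get(top_factor, "a factor we could not translate")
    return phrase.capitalize() + " was the most important factor in this decision."

print(humanize("relationship_months"))
```

The fallback string matters: an unmapped feature name should surface as a gap to fix, never leak a raw variable name to the customer.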


Explainability and the rest of your AI stack

Explainability doesn't exist in isolation. It connects to the other parts of your AI compliance:

An AI application that's explainable but not logged is worthless in an audit. An application that is logged but where the explanation isn't generated comprehensibly fails the spirit of the law. The three elements — model choice, logging, and explanation generation — work as one.

What it costs

For an SMB with one to three AI applications where explanation is required:

Component | Cost
Inherently explainable model architecture (extra build time) | €3,000–€10,000 one-off
Feature attribution tooling (SHAP/LIME) implementation | €2,000–€6,000 one-off
UI/UX for surfacing explanations to data subjects | €4,000–€12,000 one-off
Ongoing maintenance + audit readiness | €300–€1,000/month

Year 1 total: €13,000–€40,000. Against a potential AI Act fine of up to €15 million or a GDPR fine of up to €20 million for breaching Art. 22 — plus reputational damage — this is insurance that typically pays for itself before you need it.

What you can do today

Three actions that don't require budget and concretely help explainability:

Inventory which of your AI systems fall under the explanation obligation. It's often fewer than you'd think — a chatbot answering product questions has no explanation obligation, while an AI ranking candidates or qualifying leads may well have one. Start with that shorter list.

Test your explanation on a real human. Hand a colleague or friend outside the project the explanation of a sample decision and ask: do you understand why? If the answer is no, rewrite until it is yes. This sounds basic, but it surfaces a surprising amount of improvement.

Make one person accountable for the explanation architecture. Not "the IT team," not "compliance," but one name. As with governance, this is organizationally the difference between "we've thought about it" and "we can show it."

Explainability isn't a purely mathematical problem. It's a promise to the people your system decides about: that you won't treat them as a number passing through an opaque machine. Those who keep that promise rarely have much to fear from regulatory enforcement, and they build the kind of trust that is becoming ever more valuable in an AI-saturated market.

Curious how much time you could save?

Request a free automation scan. We'll analyze your processes and show you where the gains are — no strings attached.