EU AI Act 2026: What Boards Must Decide Before August
August 2, 2026 is four months away.
That is the date when the EU AI Act's requirements for high-risk AI systems become broadly enforceable — covering AI used in employment decisions, credit assessment, customer due diligence, fraud detection, and a range of other applications that are not hypothetical for businesses operating in financial services, iGaming, or commercial real estate.
The penalties for high-risk violations are up to €15 million or 3% of global annual turnover, whichever is higher. Use of prohibited AI systems carries fines of up to €35 million or 7% of global turnover. For context, that upper tier exceeds the GDPR's maximum of €20 million or 4% of turnover.
This is not primarily a technology problem. It is a governance problem. And it lands on the board.

Why the Board Owns This
The EU AI Act places explicit obligations on deployers — the companies using AI systems, not just the companies building them. Every business in a regulated industry that uses third-party AI tools for credit scoring, KYC automation, content moderation, fraud detection, or HR screening is a deployer under the Act's definition.
Deployers must implement human oversight mechanisms, retain system logs, and maintain the documentation a regulator can ask for. They must verify that high-risk systems carry the provider's conformity assessment and EU database registration before the August deadline, and, for credit scoring and similar Annex III use cases, conduct fundamental rights impact assessments of their own. A deployer that substantially modifies or white-labels a third-party system inherits the provider's obligations on top. These obligations cannot be delegated to an IT department. They require sign-off at senior leadership level and, in many organisations, formal board-level approval.
The accountability structure the Act imposes maps directly onto existing board responsibilities for risk management and regulatory compliance. If your organisation operates under MiCA, PSD3, AML frameworks, or any sectoral financial regulation, you are already inside a governance structure where the board is accountable for how data is processed and acted upon. The AI Act extends that accountability to the systems doing the processing.
The finding that should alarm boards most is not the penalty structure. It is that 83% of organisations assessed had no formal inventory of the AI systems they deploy. You cannot classify risk you have not catalogued.
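What a minimal inventory entry needs to capture can be sketched in a few lines. The structure below is illustrative only (the field names are assumptions, not terms from the Act), but it shows the essentials: a named owner and a written classification rationale for every system deployed.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    # The Act's four tiers, from prohibited practices down to minimal risk.
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # Annex III: conformity assessment required
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific obligations

@dataclass
class AISystemRecord:
    """One row in an AI-system inventory. Field names are illustrative,
    not prescribed by the Act."""
    system_name: str        # e.g. a third-party KYC screening tool
    provider: str           # who built it: the provider/deployer split matters
    business_purpose: str   # the decision the system informs
    risk_tier: RiskTier     # classification under the Act's four tiers
    rationale: str          # written justification, evidenceable to a regulator
    accountable_owner: str  # a named individual, visible to the board
    last_reviewed: date     # classifications go stale as usage changes
```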
What the August 2026 Deadline Actually Covers
A lot of commentary treats August 2026 as "the AI Act deadline" without distinguishing what becomes enforceable on that date. The distinction matters for compliance planning.
The Act has rolled out in stages. The prohibition on unacceptable-risk AI systems — social scoring, certain biometric uses — was enforceable from February 2025. General-purpose AI model obligations applied from August 2025. August 2, 2026 brings the full requirements for high-risk AI systems listed in Annex III: employment decisions, essential services access, credit scoring, law enforcement, migration, and administration of justice.
For financial services firms, iGaming operators, and property businesses, Annex III is the deadline that cuts closest. Credit assessment models, AML and KYC screening tools, and employee monitoring software are all in scope; fraud detection systems are too, except where the Act's carve-out for financial fraud detection applies.
The European Commission's proposed Digital Omnibus package — which could defer Annex III obligations until December 2027 — should not be treated as a reason to slow down. If it passes, organisations that prepared will be ahead of the field. If it does not, those that assumed a delay will be exposed. The cost of betting on an extension and being wrong cannot be recovered in four months.
The Practical Compliance Gap
George Kakouras has sat on enough boards to know that compliance readiness rarely fails at the level of intent. It fails at the level of implementation detail — specifically, the gap between what a compliance team believes is in place and what can actually be evidenced to a regulator.
The AI Act is evidence-intensive. Conformity assessments must be documented. Risk classifications must be justified in writing. Technical documentation for high-risk systems must cover training data, accuracy metrics, known limitations, and human oversight mechanisms. None of this can be backdated once an enforcement action begins.
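To make the evidence burden concrete, here is a sketch of the fields a documentation record would need to hold for the items listed above. Everything in it, names and structure alike, is an assumption made for illustration rather than a prescribed regulatory schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ConformityEvidence:
    """Documentation fields for one high-risk system, mirroring the list
    in the paragraph above. Illustrative structure only."""
    system_name: str
    training_data_description: str      # provenance and representativeness
    accuracy_metrics: dict[str, float]  # e.g. {"false_positive_rate": 0.03}
    known_limitations: list[str]        # documented up front, not found in audit
    oversight_mechanism: str            # who can intervene, and how
    assessed_on: date                   # evidence cannot be backdated
    approved_by: str                    # named sign-off at senior level
```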
The steps between now and August are straightforward to list (a first-pass screen for the opening step is sketched after the list):
1. Map every AI system in use against the Act's four risk categories.
2. Classify each system with a documented rationale.
3. Conduct or obtain conformity assessments for those that fall in the high-risk category; the assessment is the provider's duty, but the deployer must be able to evidence it.
4. Confirm CE marking and EU database registration for applicable systems.
5. Establish and document human oversight procedures.
6. Assign named accountability within the governance structure, at board level.
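For the opening step, a first-pass screen can be as simple as the sketch below. The use-case labels and function name are invented for this illustration, and a match means "queue for formal assessment", not "is high-risk"; that call belongs to legal review.

```python
# Illustrative first-pass screen only: flags systems whose use case matches
# an Annex III area so they are queued for formal classification. The
# use-case labels are assumptions for this sketch, not the Act's wording.
ANNEX_III_AREAS = {
    "employment_screening",       # hiring, promotion, worker monitoring
    "credit_assessment",          # creditworthiness of natural persons
    "essential_services_access",  # access to essential private/public services
}

def flag_for_assessment(use_case: str) -> bool:
    """True means 'send to formal classification', not 'is high-risk'."""
    return use_case in ANNEX_III_AREAS

inventory = [
    ("CV-ranking tool", "employment_screening"),
    ("Marketing copy generator", "content_drafting"),
]

for name, use_case in inventory:
    verdict = ("formal assessment" if flag_for_assessment(use_case)
               else "document lower-tier rationale")
    print(f"{name}: {verdict}")
```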
Organisations that treat this as an IT migration project will be the ones requesting extensions in July.
The Sector-Specific Dimension
For regulated industries, the AI Act layers onto existing sectoral obligations — it does not replace them. In financial services, MiCA, PSD3, and AML frameworks already impose governance requirements on automated systems. The AI Act adds documentation and classification requirements that interact with, but do not duplicate, those frameworks.
In iGaming, responsible gambling technology is specifically in scope. Automated systems that assess player behaviour and trigger intervention protocols — increasingly driven by ML models — will require classification and, where they meet the high-risk criteria, formal conformity assessment.
For commercial real estate businesses using automated valuation models or AI-driven tenant screening tools, the exposure is real and routinely underestimated. The practical approach is not to treat AI Act compliance as a separate project. It should be integrated into existing risk management and governance structures: map against what is already documented for financial regulators, then extend to cover the AI-specific obligations.
The Longer Signal
The August deadline is urgent. The longer signal is what it reveals about regulatory direction.
EU regulators are moving consistently toward requiring organisations to understand, document, and account for the automated systems they deploy. More documentation. More human accountability. More evidence that decisions with consequences have a human in the loop.
Boards that build the governance infrastructure to comply with the AI Act will be better positioned for every subsequent wave of digital regulation. Those that treat each deadline as a one-off exercise will run this race again in two years' time with a different regulation and the same underlying gaps.
The time to build the infrastructure is before the regulator asks for it.
How iGaming operators should classify and document automated player behaviour systems under the Act is a question that warrants a dedicated analysis. It will feature in a future article.