Artificial Intelligence (AI) is rapidly reshaping industries, from healthcare and finance to customer service and cybersecurity. However, along with its benefits come significant risks, including bias in decision-making, privacy violations, and the potential for unchecked surveillance. As AI systems become more integrated into daily life, governments worldwide are grappling with how to regulate their use responsibly.
The EU AI Act is the world’s first comprehensive legislative framework designed to regulate AI applications based on their potential impact on people and society. Unlike sector-specific regulations, the Act takes a risk-based approach: AI systems that pose greater risks face stricter requirements, while low-risk applications remain largely unregulated.
The Act entered into force in August 2024 and applies in phases, with bans on prohibited practices applying from February 2025 and most obligations, including those for high-risk systems, from August 2026, so businesses and AI developers need to prepare now. Whether you’re an AI provider, a company integrating AI solutions, or an organisation concerned about compliance, understanding the key provisions of the EU AI Act is essential. In this blog, we break down the regulation, its risk classifications, compliance obligations, and the steps businesses must take to stay ahead.
What is the EU AI Act?
The EU AI Act was first proposed by the European Commission in April 2021 as part of the EU’s broader strategy for regulating emerging technologies, and was formally adopted in 2024. It seeks to balance innovation with the need to protect fundamental rights, safety, and transparency in AI applications.
Why is this regulation necessary?
AI systems are increasingly making decisions that affect people’s lives, including determining creditworthiness, screening job applicants, and even diagnosing diseases. However, numerous incidents of biased AI models, algorithmic discrimination, and opaque decision-making have raised ethical concerns. High-profile cases, such as Amazon’s AI hiring tool discriminating against women or AI-powered facial recognition leading to wrongful arrests, highlight the urgent need for oversight.
The EU AI Act aims to:
- Establish clear rules for AI developers, providers, and users.
- Prevent harmful AI practices, such as social scoring or manipulative algorithms.
- Foster trust in AI technologies by ensuring transparency and accountability.
- Promote innovation by providing legal certainty for AI companies.
Why UK Businesses Should Care
The Act will apply not only to companies within the EU but also to any organisation whose AI systems affect people in the EU, giving it global reach much as GDPR has.
Although the UK is no longer part of the EU, the EU AI Act holds significant relevance for UK-based organisations due to several factors:
UK Organisations Operating in the EU
Companies developing, selling, or using AI within the EU must comply with the Act to access its markets.
Equivalency Expectations
Following the example of GDPR and the UK Data Protection Act 2018, the UK may introduce a similar AI governance framework to align with international standards and maintain market competitiveness.
Global Leadership and Cooperation
The UK’s recent signing of the world’s first international AI treaty (the Council of Europe’s Framework Convention on Artificial Intelligence) demonstrates its commitment to ethical AI development, human rights, and the rule of law in AI governance. By adhering to frameworks like the EU AI Act and international treaties, UK businesses can lead the charge in developing AI systems that are trusted globally.
Global Standards Alignment
Compliance with the EU AI Act and adherence to international AI treaties position UK companies as leaders in ethical AI practices, enhancing their reputation and global competitiveness.
Fact
A 2024 survey by McKinsey found that 72% of global businesses are actively deploying AI, with applications spanning from customer service automation to fraud detection. The EU AI Act, which entered into force in August 2024, aims to regulate these applications to mitigate risks while fostering innovation.
The Risk-Based Classification of AI Systems
One of the defining features of the EU AI Act is its risk-based classification model, which categorises AI systems based on their potential to harm individuals, businesses, and society. This ensures that the most intrusive and potentially dangerous AI applications face the strictest scrutiny, while less risky applications remain largely unaffected.
Unacceptable Risk – Prohibited AI
Some AI systems pose such severe risks to human rights, democracy, and personal freedoms that they are outright prohibited under the Act. These include:
• Social scoring systems that evaluate people based on behaviour (e.g., government-run social credit systems).
• Subliminal AI techniques that manipulate human behaviour in harmful ways.
• Real-time biometric surveillance in public spaces (except for narrowly defined law enforcement exceptions).
• Predictive policing AI, which uses profiling and behavioural data to pre-emptively classify individuals as likely criminals.
High-Risk AI – Strictly Regulated
AI applications that have a high impact on people’s rights or safety but are still legally permissible fall into this category. These systems must comply with strict regulatory requirements before they can be deployed.
Examples include:
• AI in hiring processes (e.g., resume-screening AI, automated interview analysis).
• AI in critical infrastructure (e.g., energy grids, air traffic control).
• Healthcare AI (e.g., AI-based diagnostics, robotic surgery).
• AI in financial services (e.g., automated credit scoring, fraud detection).
Businesses deploying high-risk AI must ensure:
• Human oversight is built into decision-making (see the sketch after this list).
• AI models are trained on unbiased datasets to prevent discrimination.
• Robust cybersecurity protections are in place to prevent adversarial attacks.
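To make the human-oversight point concrete, below is a minimal sketch, assuming a hypothetical credit-scoring model whose borderline outputs are escalated to a human reviewer instead of being decided automatically. The thresholds, names, and review queue are our own illustrations, not anything prescribed by the Act.

```python
from dataclasses import dataclass

# A minimal human-in-the-loop sketch: clear-cut scores are decided
# automatically, borderline scores are always escalated to a person.
# All thresholds and names here are illustrative assumptions.

review_queue: list[tuple[str, float]] = []  # stand-in for a real case queue

@dataclass
class Decision:
    applicant_id: str
    score: float      # hypothetical model output in [0, 1]
    automated: bool   # False when escalated to a human reviewer

AUTO_DECLINE = 0.2    # assumed policy thresholds
AUTO_APPROVE = 0.8

def decide(applicant_id: str, score: float) -> Decision:
    """Escalate any score between the two assumed thresholds."""
    if AUTO_DECLINE < score < AUTO_APPROVE:
        review_queue.append((applicant_id, score))
        return Decision(applicant_id, score, automated=False)
    return Decision(applicant_id, score, automated=True)

print(decide("A-1042", 0.55))  # borderline -> routed to human review
```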
Fact
Research by the World Economic Forum suggests that 65% of financial institutions use AI-driven fraud detection tools, many of which will now fall under “high-risk” regulations.
Limited Risk – Transparency Obligations
Some AI systems do not pose high risks but still require clear disclosure to users. These include:
• AI chatbots (users must be informed they are interacting with AI; a minimal example follows this list).
• Deepfake generators (AI-generated content must be labelled).
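For the chatbot obligation, the core of compliance is an unambiguous disclosure at the start of the interaction. A minimal sketch, with wording of our own invention rather than text mandated by the Act:

```python
# Prepend an AI disclosure to the first reply of every chat session.
# The message wording is illustrative, not prescribed by the Act.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def first_reply(message: str, already_disclosed: bool = False) -> str:
    """Attach the disclosure unless the user has already seen it."""
    if already_disclosed:
        return message
    return f"{AI_DISCLOSURE}\n\n{message}"

print(first_reply("Hi! How can I help with your order today?"))
```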
Minimal or No Risk – No Regulation
Most AI applications, such as spam filters, AI-powered recommendation engines, and video game AI, fall into this category and face no additional regulation.
Fact
The European Commission estimates that 90% of AI applications in use today will remain unregulated under the Act.
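Putting the four tiers side by side, a short schematic may help. The example systems are the ones used in this article; the tier assigned to any real system depends on its specific context, and this mapping is illustrative rather than a legal determination:

```python
from enum import Enum

# The Act's four risk tiers, with the regulatory consequence of each.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional regulation"

# Illustrative mapping using the examples from this article.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "resume-screening AI": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```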
Key Compliance Requirements for Businesses
For companies operating in the AI space, compliance with the EU AI Act is non-negotiable. The most critical obligations include:
- Risk Management & Governance: Organisations must assess and mitigate AI risks before deployment.
- Data Governance & Bias Prevention: AI models must be trained on high-quality, unbiased datasets to prevent discrimination (e.g., biased hiring algorithms).
- Transparency & Explainability: Users must understand how AI decisions are made, especially in high-risk applications (a record-keeping sketch follows this list).
- Human Oversight: AI systems must allow human intervention to correct errors or override automated decisions when necessary.
- Cybersecurity & Robustness: AI models must be resilient against adversarial attacks, such as data poisoning or model manipulation.
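Several of these obligations (transparency, human oversight, and documentation in particular) converge on disciplined record-keeping. Below is a sketch of what an auditable decision record might look like; the field names are our own assumptions, not a schema from the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# One auditable record per AI-assisted decision. Field names are
# illustrative assumptions, not a schema prescribed by the Act.
@dataclass
class AIDecisionRecord:
    system_name: str            # which AI system produced the decision
    model_version: str          # exact version, for reproducibility
    input_summary: str          # what data the decision was based on
    output: str                 # the decision or recommendation made
    human_reviewer: str | None  # who reviewed or overrode it, if anyone
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

print(AIDecisionRecord(
    system_name="loan-screening",
    model_version="2.3.1",
    input_summary="income, credit history, existing debt",
    output="refer to human underwriter",
    human_reviewer="j.smith",
))
```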
Fact
A 2024 IBM report indicates that while there has been progress, a significant number of companies deploying AI still lack robust explainability and transparency frameworks, which will become critical under the EU AI Act.
Penalties for Non-Compliance
Similar to GDPR, the EU AI Act includes severe penalties for violations:
Fines scale with the severity of the violation and the company’s turnover; in each tier, whichever amount is higher applies (see the worked example after this list):
- Up to €35 million or 7% of global turnover for non-compliance with banned AI practices.
- Up to €15 million or 3% of turnover for failing to meet high-risk AI obligations.
- Up to €7.5 million or 1.5% of turnover for providing incorrect documentation.
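Because each ceiling is the higher of the fixed amount and the turnover percentage, the turnover figure dominates for large companies. A quick sketch of that arithmetic:

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             global_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Prohibited-practice tier, for a hypothetical company with EUR 2bn
# global annual turnover: the 7% cap (EUR 140m) exceeds the EUR 35m floor.
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```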
Fact
Under GDPR, Amazon was fined €746 million for data protection violations; AI compliance failures could attract similarly hefty penalties.
How to Prepare for the EU AI Act
For businesses leveraging AI, preparation is essential.
At Cyberfort, we recommend all organisations undertake the following steps to ensure compliance:
- Conduct an AI risk assessment: Identify AI models that fall under high-risk categories (a starting-point sketch follows this list).
- Implement AI governance frameworks: Establish policies for ethical AI use.
- Ensure transparency and documentation: Maintain records of data sources, decisions, and human oversight processes.
- Review vendor AI compliance: If using third-party AI tools, verify their compliance obligations.
- Engage legal & compliance experts: Stay updated on regulatory changes and enforcement timelines.
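As a starting point for the risk-assessment step, even a simple inventory makes the scale of the work visible. A sketch with placeholder systems; real tier assignments should be confirmed with legal counsel:

```python
# A first-pass AI inventory. Systems and tier labels are placeholders;
# actual classifications must be validated against the Act's criteria.
inventory = [
    {"system": "CV screening model", "vendor": "in-house", "tier": "high"},
    {"system": "support chatbot", "vendor": "third-party", "tier": "limited"},
    {"system": "spam filter", "vendor": "third-party", "tier": "minimal"},
]

high_risk = [s for s in inventory if s["tier"] == "high"]
print(f"{len(high_risk)} system(s) need a full high-risk compliance review:")
for s in high_risk:
    print(f"  - {s['system']} ({s['vendor']})")
```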
Fact
Gartner predicts that by 2026, at least 30% of large organisations will have dedicated AI governance teams to comply with new regulations.
Final Thoughts: Embracing Responsible AI
The EU AI Act marks a defining moment in AI regulation, setting a precedent for ethical AI governance worldwide. While compliance may be demanding, it also offers businesses the chance to build trust and transparency, essential for long-term success in an AI-driven world.
Organisations that proactively align with the EU AI Act will not only avoid penalties but also enhance their reputation, reduce AI risks, and gain a competitive edge in the global market.
For more information about the services we offer at Cyberfort to help you secure AI, contact us at [email protected].