Cyber threats are evolving at an unprecedented pace, growing more sophisticated and harder to detect. In response, organisations are investing heavily in cutting-edge technologies, from firewalls and encryption to AI-powered threat detection systems. While these tools are essential, there is a growing tendency to rely too heavily on technology alone, overlooking a crucial element in the cyber security equation – people.

It is often said that humans are the weakest link in security, but this narrative is outdated and misleading. In reality, people can be the strongest line of defence when they are properly trained, supported, and empowered. Cyber security is not just a technical challenge; it is a human one. The ability to recognise phishing attempts, follow secure practices, and respond swiftly to incidents often determines whether an attack succeeds or fails.

People are not the weakest link; they are the critical differentiator. At Cyberfort we believe it is time to shift the focus and invest in human resilience as much as technological strength.

Human Factor

According to the 2025 Verizon Data Breach Investigations Report (DBIR), approximately 60% of all confirmed breaches involved a human action, whether it was clicking on a malicious link, falling victim to social engineering, or making an error like misdelivering sensitive data. This statistic underscores a critical truth: while technology plays a vital role in cyber security, human behaviour remains a central factor in both risk and resilience. Rather than viewing people as the problem, organisations must recognise them as a powerful part of the solution. With the right training, awareness, and support, employees can become proactive defenders, identifying threats, reporting anomalies, and making informed decisions that technology alone cannot.

Culture and Behaviour

At the heart of a cyber resilient organisation is a culture that values open communication, psychological safety, and shared responsibility. These cultural traits shape the everyday behaviours that determine how effectively an organisation can prevent, detect, and respond to cyber threats.

Employees are encouraged to report risks, mistakes, or suspicious activity, not punished for doing so. This openness ensures that potential threats are surfaced early and addressed quickly. Silence, often driven by fear of blame, is replaced with transparency and trust.

Mistakes are treated as learning opportunities. By shifting from a blame culture to a learning culture, organisations empower employees to speak up, share insights, and continuously improve. This mindset fosters resilience and agility in the face of evolving threats.

Cyber security is seen as everyone’s job, not just IT’s. When employees understand how their actions impact the organisation’s security, they are more likely to adopt secure behaviours and support one another in doing the same.

Human Judgement vs Tech

Even the most advanced AI systems cannot replicate human intuition. While automated tools are essential for detecting known threats at scale, they often lack the contextual awareness and critical thinking that trained employees bring to the table. A vigilant team member who questions a suspicious email or flags unusual behaviour can catch what algorithms might overlook. Their ability to escalate concerns quickly can mean the difference between a contained incident and a full-scale breach.

Humans provide reasoning, context, and prioritisation, qualities that machines cannot fully emulate. Cyber resilience is not just about identifying threats; it is about balancing risk, cost, and operational impact. These are nuanced decisions that require human understanding and judgment.

Technology is powerful, but it is people who make it effective. Empowered employees are not just part of the defence; they are the heart of it.

Cross-Functional Collaboration

Cyber resilience is not the sole responsibility of the IT or security team; it is a shared effort that spans the entire organisation. Building a truly resilient posture requires cross-functional collaboration, bringing together departments like HR, Legal, Communications, Risk, and Operations. Each team plays a unique and vital role in preparing for, responding to, and recovering from cyber incidents.

• HR ensures that security awareness is embedded into onboarding, training, and culture.
• Legal helps navigate regulatory obligations, breach notification requirements, and liability concerns.
• Communications manages internal and external messaging during a crisis to maintain trust and transparency.
• Operations and Risk assess business impact and coordinate continuity plans.

One of the most effective ways to strengthen this collaboration is through crisis simulations and tabletop exercises. These simulations test not just technical responses, but decision-making, communication, and coordination across teams, turning theory into practice and exposing gaps before real threats strike.

Leadership

Leadership and management play a pivotal role in shaping an organisation’s cyber resilience culture. When leaders actively model good security behaviour, such as using strong passwords, reporting phishing attempts, and following data protection protocols, they send a powerful message – cyber security is everyone’s responsibility. Their actions set the tone from the top, influencing how employees perceive and prioritise security in their daily work.

This leadership commitment must extend to the board level, where cyber security is treated as a strategic business risk, not just a technical issue. Board-level accountability ensures that resilience is embedded into governance, risk management, and long-term planning. When directors ask the right questions and demand regular updates on cyber posture, it reinforces the importance of security across the organisation.

Buy-in from management is not just symbolic; it is strategic. Leaders must champion resilience initiatives, allocate resources for training, and integrate cyber security into broader business goals. They also play a key role in setting behavioural norms, reinforcing secure practices through communication, recognition, and consistent example.

When leadership leads by example, from the boardroom to the front line, cyber resilience becomes part of the culture, not just a compliance checkbox.

From Theory to Practice

Organisational Resilience
A well-trained workforce is not just a support function; it is a frontline defence and a cornerstone of cyber resilience. True resilience is achieved when cyber security is embedded into the values, behaviours, and everyday actions of everyone in the organisation, not just the IT or security teams. This means cultivating a culture where security is second nature, from how emails are handled to how data is shared and stored.

Embedding this mindset requires more than annual training modules. It involves ongoing education, leadership buy-in, and visible reinforcement of secure behaviours. For example, Microsoft has implemented a company-wide security culture program that includes regular phishing simulations, gamified learning experiences, and executive-led security briefings. These initiatives are tailored to different roles and risk levels, ensuring relevance and engagement across the board.

The result? Employees become active participants in defence, spotting threats early, responding appropriately, and reinforcing a culture of vigilance and accountability.

Engaging Training
Cyber security training must go beyond the traditional “check-the-box” approach. To be effective, it needs to be engaging, relevant, and continuous. This means using storytelling, real-world scenarios, interactive simulations, and up-to-date threat examples that resonate with employees’ daily experiences. When training is relatable and dynamic, it not only captures attention but also builds lasting awareness and practical skills.

Effective training empowers staff to detect and respond to threats quickly, reducing the risk of breaches and enabling them to contribute to the development and safe use of new technologies. It also fosters a culture where security is seen as a shared responsibility, not just an IT concern.

A standout example is Google’s Security and Privacy Training Program, which uses gamified learning, phishing simulations, and scenario-based exercises tailored to different roles. Employees are regularly tested with real-time challenges, and the program evolves with emerging threats, keeping security top of mind and skills sharp.

Recognition and Reward
Recognising and rewarding good cyber security behaviour is a powerful way to reinforce a culture of resilience. When employees feel that their efforts to stay secure are noticed and appreciated, they are more likely to remain vigilant and engaged. Celebrating individuals or teams who demonstrate strong cyber hygiene, such as reporting phishing attempts, following secure data handling practices, or contributing to awareness initiatives, helps normalise and encourage these behaviours across the organisation.

Recognition does not have to be complex. It can range from shout-outs in team meetings and internal newsletters to formal awards or incentives. The key is consistency and visibility.

A best practice example comes from Salesforce, the American cloud software company, which runs a “Security Champions” program. Employees across departments are nominated for their proactive security efforts and receive public recognition, exclusive training opportunities, and branded rewards. This not only boosts morale but also builds a network of internal advocates who help spread security awareness organically.

By celebrating the right behaviours, organisations reduce human error and strengthen their first line of defence, their people.

Review and Response
Cyber security is most effective when it is treated as a shared responsibility, not just an IT function. One of the most impactful ways to reinforce this is by regularly collecting feedback from employees on what is working, what is unclear, and where improvements are needed. This two-way dialogue encourages ownership, reinforces learning, and helps build a culture of vigilance and continuous improvement.

Feedback mechanisms can include anonymous surveys, post-training evaluations, suggestion boxes, or open forums during team meetings. The key is to act on the feedback, showing employees that their insights lead to real changes.

A best practice example comes from PwC in the UK, which integrates cyber security feedback loops into its broader risk culture program. After simulations or incidents, employees are invited to share their experiences and suggestions. This feedback is then used to refine training, update policies, and improve response plans. The result is a more engaged workforce and a security strategy that evolves with real-world input.

By listening to employees and responding meaningfully, organisations not only improve their defences but also foster a sense of collective responsibility and trust.

Deepfakes, a portmanteau of “deep learning” and “fake,” refer to synthetic media, primarily videos or audio recordings generated or altered using artificial intelligence (AI) to depict people saying or doing things they never actually did. While deepfakes began as an entertainment or novelty tool, their growing sophistication has positioned them as a credible threat in the world of cybersecurity.

As organisations strengthen their digital defences against traditional attack vectors such as phishing, malware, and ransomware, deepfakes represent a newer and less-understood frontier, one that leverages AI to manipulate perception, erode trust, and bypass existing safeguards. This article explores the role of deepfakes in cybersecurity, how they are used maliciously, the implications for trust and identity, and the emerging defences and detection strategies within the cyber community.

Deepfakes as a Cyber Threat

The most immediate cybersecurity risk of deepfakes is their use in social engineering attacks. Traditionally, attackers might rely on spoofed emails or fake websites to trick individuals into revealing credentials or transferring funds. Deepfakes take this to a new level by adding highly convincing audio or video to impersonate individuals with significant authority, such as CEOs, CFOs, or even political leaders.

For example, there have already been high-profile cases where attackers used AI-generated voice deepfakes to impersonate executives and instruct employees to transfer money or share sensitive information. In 2019, criminals reportedly used AI voice cloning to mimic a CEO’s speech patterns and trick an executive into transferring €220,000 to a fraudulent supplier. The deepfake mimicked not only the voice but also the tone and urgency typical of the real executive, making the attack highly believable.

This kind of deception can bypass traditional email filtering and spam detection technologies, as the attack may take place via phone call or embedded media within a trusted communication channel like Teams, Zoom, or Slack. The threat landscape now includes synthetic impersonation, where deepfake audio or video is used to facilitate business email compromise (BEC), account hijacking, and financial fraud.

Impact on Trust, Identity, and Verification

The emergence of deepfakes challenges one of the foundational assumptions of cybersecurity: trust in verified identity. In both the corporate and public domains, trust in identity is paramount, whether that’s a voice on a call, a face in a video meeting, or a recorded message from a government official.

As deepfake technology becomes more accessible and cheaper to produce, attackers can exploit the “assumed authenticity” of media formats that were once considered difficult to fake. This leads to increased scepticism around the legitimacy of communications, which can paralyse decision-making and slow down operations.

For instance, in crisis scenarios such as ransomware attacks or geopolitical events, misinformation campaigns powered by deepfakes could manipulate public sentiment, incite panic, or create confusion around who is saying what. The implications for information integrity are profound, especially for media organisations, government agencies, and election bodies.

Emerging Defence Mechanisms

Cybersecurity professionals are actively developing and deploying deepfake detection technologies. These typically rely on machine learning models trained to identify artefacts introduced during the synthesis process, such as unnatural blinking, visual inconsistencies, or odd audio intonations. However, this is an arms race. As detection methods evolve, so do the techniques used by attackers to create more seamless fakes.
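To make the aggregation idea concrete, here is a minimal sketch of how per-frame scores from a trained detector might be rolled up into a video-level verdict. The scores, thresholds, and the burst heuristic are illustrative assumptions rather than any particular vendor’s method; real detectors combine far more signals.

```python
from statistics import mean, stdev

# Hypothetical per-frame scores from a pre-trained deepfake classifier,
# where 0.0 reads as "looks authentic" and 1.0 as "looks synthetic".
frame_scores = [0.12, 0.15, 0.81, 0.78, 0.84, 0.83, 0.79, 0.16]

def assess_video(scores, mean_threshold=0.5, burst_threshold=0.75):
    """Flag a video if scores are high on average, or if a sustained run
    of frames scores highly (temporal inconsistency is itself a common
    synthesis artefact)."""
    longest = run = 0
    for s in scores:
        run = run + 1 if s >= burst_threshold else 0
        longest = max(longest, run)
    avg = mean(scores)
    return {
        "mean_score": round(avg, 3),
        "score_spread": round(stdev(scores), 3),
        "longest_high_run": longest,
        "suspect": avg >= mean_threshold or longest >= 3,
    }

print(assess_video(frame_scores))  # flags this clip as suspect
```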

To counter deepfake threats, organisations are also adopting more robust verification methods, such as:

• Multifactor authentication (MFA) that does not rely on voice or image recognition alone
• Watermarking of legitimate media, which can verify authenticity
• Behavioural biometrics, which consider unique patterns in typing, movement, and interaction
• Zero-trust models where no entity is assumed trustworthy based on one factor alone (see the sketch below)
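As a simple illustration of the “no single factor alone” principle, the sketch below scores a verification request across several independent signals and accepts it only when enough of them agree. The signal names and the threshold are illustrative assumptions, not a prescribed control set.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Independent signals gathered for one identity check."""
    passed_mfa_token: bool      # possession factor, e.g. a hardware key
    known_device: bool          # device fingerprint seen before
    typical_behaviour: bool     # behavioural biometrics within norms
    request_in_policy: bool     # the requested action is expected/allowed

def verify(signals: VerificationSignals, required: int = 3) -> bool:
    """Zero-trust style check: a convincing voice or face is never enough;
    trust the request only if several independent factors agree."""
    score = sum([
        signals.passed_mfa_token,
        signals.known_device,
        signals.typical_behaviour,
        signals.request_in_policy,
    ])
    return score >= required

# A plausible deepfake call fails: the caller sounds right, but presents
# no token, an unknown device, and an out-of-policy payment request.
print(verify(VerificationSignals(False, False, True, False)))  # False
```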

Moreover, security awareness training is evolving to include recognition of deepfakes, helping employees spot red flags, such as unusual requests, voice delays, or background inconsistencies in video.

In the legal and regulatory domain, countries are beginning to address the misuse of synthetic media. Some governments have passed laws targeting the malicious creation and distribution of deepfakes, particularly where these cause reputational or financial harm.

Deepfakes as a Defensive Tool

Interestingly, deepfake technology isn’t solely a threat; it can also be used constructively in cybersecurity. For example, security training platforms have begun using synthetic media to simulate spear-phishing or vishing (voice phishing) attacks in a controlled environment. This allows employees to experience realistic threats without exposing organisations to real-world harm.

Additionally, researchers and red teams can use synthetic media to test the resilience of security controls or authentication mechanisms, uncovering vulnerabilities before attackers do.

Recognising Deepfakes

Deepfakes present a rapidly evolving threat within cybersecurity, one that leverages artificial intelligence to attack not systems, but the very notion of trust and identity. Their use in fraud, misinformation, and impersonation can have significant financial, operational, and reputational impacts on organisations.

The cybersecurity community must respond by combining technological countermeasures, regulatory oversight, and human vigilance. While detection tools are improving, the best defence is a layered one, pairing deepfake awareness with secure communications protocols, behavioural analytics, and identity verification beyond the visual or auditory.

In an era where seeing (or hearing) is no longer believing, resilience depends on recognising that authenticity is not a given – it must be proven.

So how do you prove it? How should you and your employees validate that you are talking to a real person? First, give yourself time to think and question: very little is urgent to the second, and giving yourself time to think nearly always enables people to apply their analytical brains. Too often this only happens after an incident (“I thought it was bad…” “yes, I can see that now…”); the trick is to give yourself that time to think before the impact!

Five simple steps to identify deepfakes

1. Think about whether the actions the person is asking you to take are within the realm of what you would expect from this individual, and whether they comply with your organisation’s policies, regulatory requirements, legal requirements and ethics.

2. Think about the person’s style: are their usual nuances missing? Do they always say Hi, or Good Morning? Do they always sign off a call with a particular phrase or statement? Do they shorten your or others’ names?

3. Look carefully for facial anomalies, lip-syncing issues, or odd phrasing or words.

4. Ask an unexpected question, or make a deliberately false statement: if you randomly say “why is your t-shirt green” when it is clearly black, a person will correct you; a deepfake will just continue.

5. Above all, remember that technology is advancing at pace, so even if steps 1-4 all check out, if you are even 1% unsure, verify by calling the person on a known contact method and finding out if it was actually them.

The human brain is a powerful anomaly detection tool; in most of these incidents, people have chosen not to use it and suspended their disbelief. Don’t make that choice.

For more information about Cyberfort Detect and Respond services, please contact us at [email protected].

Data Sovereignty in a Cloud-Connected World – Where is your data really being stored, managed and processed?

Why organisations with complex and critical data compliance requirements should be exploring a colocation strategy in a world increasingly dominated by public cloud.

In today’s digital economy, data is the lifeblood of business. It fuels operations, customer engagement, and innovation. Yet, as organisations increasingly rely on global cloud services, the question of where an organisation’s data resides has never been more critical.

Data sovereignty, the principle that data is subject to the laws of the country where it is stored, has emerged as a crucial consideration for UK businesses navigating a complex landscape of regulation, cyber threats, and geopolitical uncertainty. 

In this article, we explore why data sovereignty matters, the global risks of non-compliance, and how UK-based colocation datacentres provide a trusted foundation for secure, resilient infrastructure. 

The Global Risks of Data Residency – A Shifting Landscape 

The idea that “data is everywhere” may sound liberating, but it also carries significant risk. In a period of geopolitical instability, where your data lives dictates who has access, under what laws, and for what purposes. 

For example: 

The US Cloud Act allows US authorities access to data held by US-based providers, even if the data is stored in the UK or Europe. This has led EU countries such as the Netherlands and Germany to develop initiatives to move government infrastructure away from US hyperscale cloud services and reduce reliance on American providers.

The fallout from Schrems II, the 2020 ruling by the Court of Justice of the European Union (CJEU) that invalidated the EU-US Privacy Shield, has left many businesses scrambling to ensure data transfers outside the EU comply with GDPR. This has also been complicated by Brexit and the UK’s new Data (Use and Access) Bill, which is still being finalised through Parliament.

Meanwhile, countries like China, Russia, and India have introduced strict data localisation laws, requiring that data be stored within their national borders. 

And let’s not forget the ever-growing threat of state-backed cyberattacks and supply chain compromises, where sensitive data may be exposed through third-party providers (as discussed in one of our recent articles, Overcoming Supply Chain Cyber Security challenges: Where organisations need to focus in 2025).

For UK businesses relying on global cloud services or hyperscalers such as AWS, Azure or Google Cloud, this introduces potential exposure to foreign jurisdictions and extraterritorial access laws. Without careful planning, this can jeopardise compliance, increase risk, and erode trust.

UK Data Residency – More Than a Checkbox Exercise

At first glance, keeping data in the UK might seem like a simple compliance tick-box. But it’s much more than that. It’s about control, resilience, and trust. Storing, managing and processing data outside the UK may be cheaper via a global cloud services provider, but does it really satisfy the data protection laws you must adhere to? Does it fit with your cloud data security and regulatory compliance strategy if your business is operating in the UK?

The UK’s legal framework, including the Data Protection Act 2018 and UK GDPR, offers strong protections aligned with European standards while maintaining national sovereignty. UK data centres operate within a stable, predictable regulatory environment, unlike regions where laws can change overnight or data may be exposed to foreign surveillance regimes.  

By choosing a UK-based data centre, businesses gain:  

• Assurance that their data is governed by UK law
• Reduced risk of cross-border legal disputes or compliance breaches
• Simpler contractual terms and fewer complications from data transfer mechanisms
• Greater confidence when handling sensitive or regulated data, such as financial records, healthcare information, or intellectual property

European Regulations on the Horizon: Why UK Businesses Must Pay Attention 

Even though the UK is no longer part of the EU, UK businesses operating across Europe, or serving EU clients, must stay alert to evolving regulations: 

• The Cyber Resilience Act (CRA) introduces strict cybersecurity standards for products with digital elements, impacting SaaS providers and critical services
• NIS2 Directive expands cybersecurity obligations to more sectors, including data centres, and tightens reporting requirements for incidents
• The Digital Operational Resilience Act (DORA) will regulate third-party ICT providers in the financial sector, requiring robust risk management and resilience

These regulations demand higher levels of cybersecurity, transparency, and accountability. UK data centres with strong governance frameworks (ISO 27001, ISO 22301, PCI DSS) are well-placed to help customers meet these challenges, offering infrastructure that supports both UK and EU compliance standards. 

Where to start? 

From our experience at Cyberfort, we believe there are 5 key reasons why organisations need to start reviewing where their data is stored, managed and processed as part of a data sovereignty strategy within the cloud. They are:

Compliance with UK Regulations
UK laws like the Data Protection Act 2018 and UK GDPR place strict requirements on how personal and sensitive data is handled. By ensuring data remains within UK jurisdiction, organisations simplify compliance and reduce exposure to international regulatory conflicts or oversight complexities. 

Mitigation of Legal Risk
Storing data outside the UK may expose businesses to foreign surveillance laws (e.g. the US Cloud Act), which can conflict with UK privacy standards. Keeping data within the UK helps avoid these jurisdictional tensions and mitigates the risk of unauthorised third-party access. 

Data Residency Assurance for Public Sector and Regulated Industries
Public sector bodies, financial services firms, and healthcare providers often have explicit or implicit mandates requiring data to remain within national borders. UK data centres ensure alignment with government procurement standards and sector-specific frameworks. 

Reduced Latency and Improved Performance
Hosting and processing data closer to end users in the UK can improve application performance and user experience, particularly for latency-sensitive workloads such as financial transactions, media streaming, or real-time analytics. 

Trust, Reputation, and Customer Assurance
Demonstrating a commitment to UK data sovereignty builds trust with customers, partners, and regulators. It reinforces transparency and responsible data stewardship, which can be seen as a competitive differentiator in an era where digital trust directly impacts business value.

But understanding the key reasons for reviewing data sovereignty compliance as part of a cloud strategy is only one piece of the puzzle. Each organisation is different in terms of the volumes, types and access requirements of its data in the cloud.

At Cyberfort, when we start a data sovereignty engagement as part of a cloud vs colocation strategy, we ask 5 key questions to help organisations understand what the risks, challenges and compliance measures are likely to be with their data stored in the cloud, and whether colocation could be the right way forward for their business. They are:

Where is your organisation’s business-critical data physically stored as part of a public cloud strategy, and under which legal jurisdictions does it fall? This helps to identify potential sovereignty conflicts and compliance risks.

Do your cloud and SaaS providers guarantee UK-based data residency and processing? The answer to this will help to ensure contractual and technical alignment with data sovereignty requirements.

How resilient are your UK-based data storage and processing solutions in the face of cyber threats, geopolitical disruption, or regulatory change? This assesses operational risk and business continuity readiness.

Are you able to maintain clear audit trails and access controls for data stored in or accessed from outside the UK? This enhances governance, security, and compliance transparency.

Does your current cloud strategy allow for flexibility if future regulation demands stricter data localisation or sovereignty requirements? This future-proofs the infrastructure and avoids costly migrations.

By answering the above questions and embedding UK data sovereignty into digital and cloud strategies, businesses can better protect sensitive data, comply with domestic law, and build long-term resilience in an increasingly regulated digital environment. Following a review, it is often found that businesses with complex and critical data management requirements need a supplementary strategy to public cloud. This is where colocation comes into play.

Why Colocation is the Foundation for Sovereign, Compliant Infrastructure vs Public Cloud 

For organisations seeking control and compliance, colocation offers a powerful alternative to public cloud models. 

With colocation, your business retains ownership of hardware, software, and data, while benefiting from the physical security, power, cooling, and connectivity of a state-of-the-art UK datacentre run by experts. 

While public cloud offers flexibility and scalability, it’s not always the best fit for businesses with complex, critical, or highly regulated workloads. A colocation strategy, housing your infrastructure in a third-party data centre, can provide a compelling alternative. From our experience at Cyberfort, we have discovered customers with complex and critical data management requirements are choosing a colocation provider alongside public cloud for the following reasons:

Control and Performance
With colocation, businesses retain full control over their hardware and software configurations. This is ideal for workloads requiring high performance, low latency, or specific hardware optimisations not supported in the public cloud. Ultimately you know exactly where your data is stored, who has access, and how it is managed. 

Security and Compliance
Colocation enables businesses to meet strict security, data residency, and compliance requirements, especially in industries like finance, healthcare, or government. Dedicated environments reduce exposure to shared infrastructure vulnerabilities found in multi-tenant public cloud platforms. This helps to meet sector-specific requirements (NHS DSP Toolkit, FCA, ISO standards) with audited, certified facilities.   

Predictable Costs
Unlike public cloud’s usage-based pricing, which can be difficult to forecast and prone to cost spikes, colocation offers predictable, long-term pricing, enabling organisations to budget more effectively and avoid unexpected expenses.

Hybrid and Legacy Integration
Colocation supports hybrid IT strategies, allowing businesses to integrate legacy systems with newer cloud services while keeping sensitive or resource-intensive workloads on dedicated infrastructure. 

Scalability Without Vendor Lock-in
As businesses grow, colocation offers scalability without being locked into a single cloud provider’s ecosystem. This opens the door to multi-cloud or hybrid models with greater flexibility and negotiation power. Additionally, as AI solutions become more integrated, accessible and advanced, there is a greater need for privacy and localised storage to provide increased protection.

In summary, colocation offers a secure, high-performance, and cost-predictable infrastructure model that complements or replaces public cloud for organisations with specific operational, regulatory, or technical needs.

Taking Action: A Data Sovereignty Checklist for UK Businesses 

To protect your business in a fast-changing regulatory and cyber risk landscape, all organisations with complex and critical data management requirements should consider these steps: 

Audit Your Data Flows
Map where your data is stored, processed, and backed up, including SaaS and cloud services (see the sketch after this checklist).

Review Contracts and SLAs
Ensure data residency clauses align with your compliance obligations. 

Choose UK-Based Providers
Prioritise colocation, cloud, and managed services with physical infrastructure in the UK. 

Plan for Regulatory Change
Stay informed about EU and UK developments (CRA, NIS2, DORA) that could impact your business. 

Build Resilience into Your Architecture
Combine colocation with private cloud, direct network interconnects, and DR solutions for a robust, compliant environment. 
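As a concrete starting point for the “Audit Your Data Flows” step above, the sketch below uses the AWS SDK for Python (boto3) to list where each S3 bucket physically resides and flag anything outside the UK. It assumes AWS-hosted storage and already-configured credentials; other clouds and SaaS platforms would need equivalent checks.

```python
import boto3  # AWS SDK for Python; assumes credentials are configured

UK_REGIONS = {"eu-west-2"}  # AWS London

def audit_bucket_regions() -> None:
    """Print every S3 bucket with its region, flagging non-UK storage."""
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        # get_bucket_location returns None for the us-east-1 default
        location = s3.get_bucket_location(Bucket=name)["LocationConstraint"]
        region = location or "us-east-1"
        status = "OK" if region in UK_REGIONS else "REVIEW: outside UK"
        print(f"{name}: {region} [{status}]")

if __name__ == "__main__":
    audit_bucket_regions()
```

The same pattern extends to databases, backups, and SaaS exports: enumerate every store, record its jurisdiction, and keep the output as evidence for your audit trail.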

A major financial institution is hit by a cyber-attack that cripples its online services for days. Customers are locked out of their accounts and transactions grind to a halt; the impact spreads to the supply chain and other financial institutions; shareholders and government agencies become interested in the unfolding drama; trust and reputation begin to slip away… Unfortunately, this isn’t just hypothetical, it’s a growing reality in today’s financial world.

Enter DORA, the Digital Operational Resilience Act – a landmark regulation from the European Union designed to ensure that financial entities can not only withstand cyber threats but also recover quickly and continue operating. DORA became operational on 17 January 2025 and is set to reshape how financial institutions across the EU approach digital risk.

In this article, we’ll break down what DORA is, why it was introduced, and what it means for your organisation. Whether you’re a compliance officer, executive, or just curious about the future of cyber security in finance, this article will help readers to understand how to prepare for, and benefit from, DORA.

So what is DORA?

The Digital Operational Resilience Act (DORA) is an EU regulation designed to enhance the digital operational resilience of financial entities (Digital Operational Resilience Act (DORA) – EIOPA). Its primary goal is to ensure that financial institutions can withstand, respond to, and recover from various ICT-related disruptions and threats. DORA applies to a broad range of financial entities, including banks, insurance companies, investment firms, and their critical third-party service providers. By implementing robust risk management frameworks, these entities will be better equipped to identify, protect against, detect, and respond to risks. Additionally, DORA mandates regular testing of digital operational resilience to demonstrate that potential disruptions can be managed.

DORA also requires financial entities to report major incidents to competent authorities and share information on cyber threats. This regulation also imposes stringent requirements for managing risks associated with third-party service providers. DORA came into force on January 16, 2023, and was fully applicable from January 17, 2025. By adhering to these requirements, financial institutions can safeguard their operations and contribute to a more resilient cyber security environment across the EU.

Key components of DORA

• Robust risk management framework
• Reporting incidents to competent authorities
• Sharing information on threat intelligence and incidents
• Regular testing of digital operational resilience
• Comprehensive supply chain management


So why was DORA created?

Whilst traditional cyber security frameworks such as ISO 27001 and NIST CSF laid solid foundations, the financial industry’s growing dependence on digital systems created operational vulnerabilities that could not be effectively managed. DORA was created to address these critical gaps and develop unified, enforceable standards across the EU.

The European Commission identified five key areas for improvement:

• ICT risk management
• Operational resilience during disruptions
• Enhanced oversight of third-party providers
• Consistent resilience standards across EU markets
• Structured incident reporting for knowledge sharing

With a theme of “stronger together” and a collaborative, knowledge-sharing approach to cyber security, especially around operational resilience, DORA aims to raise the industry’s overall cyber security posture.

The Five Pillars of DORA

From these key areas for improvement, DORA aims to improve cyber resilience by strengthening five pillars. Let’s take a look at each one in more depth:

Risk Management

DORA outlines the core requirements for financial entities to establish a comprehensive ICT risk management framework. Financial entities must:

  • Implement a well-documented risk management framework as part of their overall risk management system.
  • Include strategies, policies, procedures, protocols and tools to protect information assets, hardware assets, and physical infrastructure.
  • Have a control function to oversee risk management that is independent and has authority to challenge decisions and escalate issues.
  • Be proportionate to the size, complexity and risk profile of the financial entity (as defined in Article 4).
  • Have a mechanism to continually improve their risk management practices.

Moving forward, board members will need to be involved in the risk management frameworks of their organisations. As stated in the DORA framework, the Board of Directors is personally liable for cyber security governance and risk management. This means each board director will require an understanding of cyber threats to inform their decision-making.

They will also need to define and approve their organisation’s risk management framework, including third-party supplier strategy, showing the importance of informed decision-making to address emerging cyber threats effectively. But with board-level responsibility for cyber security steadily declining among businesses since 2021 (only 27% of businesses have a board member fully responsible for cyber security in 2025, vs 38% in 2021), now is the time for financial services firms to take action and ensure board members are taking responsibility for aligning cyber security with business objectives.

So where should Financial Services organisations start with improving risk management and ensuring it is part of the board agenda?

Many financial services organisations have not undertaken a formal cyber security risk assessment in the past 12 months; it is estimated that only 48% of UK organisations have done so in the past year. This means board members of financial services firms and their cyber security teams could be making plans or reviewing their cyber security risk strategy with data that is not relevant, up to date or based on the latest NCSC guidance. Clearly this is not only a business risk but could also prevent wider business initiatives from being successfully undertaken in a secure, compliant and resilient manner.

Additionally, it should be noted that not all cyber risk assessments are the same. Unfortunately, many cyber security risk assessments are treated as ‘tick box’ exercises that provide neither adequate detail nor direction for how to improve. At Cyberfort we believe the starting point for building a cyber risk strategy is to undertake an NCSC-assured Cyber Resilience Audit and Review. The review, based on NCSC best practices and guidance, provides cyber security professionals and board members with a clear picture of their resilience posture against industry benchmarks and highlights where improvements can be made. Furthermore, board members can use the cyber resilience audit and review to demonstrate to regulatory bodies that they have undertaken due diligence and understand their responsibilities in relation to cyber security in the wider business context.

Incident Reporting

The incident management requirements under DORA aim to ensure that financial entities can detect, assess, and respond to incidents in a structured and effective manner. DORA also requires organisations to maintain detailed internal logs, conduct thorough post-incident reviews, and integrate lessons learned into their risk management practices. DORA highlights that financial services organisations should have the following in relation to incident management and reporting:

Timely Detection and Classification
Have mechanisms to detect, classify, and prioritise incidents. Incidents must be assessed based on their impact on operations, data, and service continuity.

Structured Incident Reporting
Incidents must be reported to the relevant national competent authority using standardised templates. Reporting must follow a strict timeline, with the initial notification happening as soon as possible (with an expectation of within the same day), an intermediate report within 3 days, and a final report within a month. The final report should include root cause analysis and mitigation.
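To make that timeline concrete, here is a minimal sketch that derives the reporting milestones described above from an incident’s detection time. The durations mirror the expectations stated in this article (same day, 3 days, one month, taken here as 30 days); the binding deadlines are set out in DORA’s regulatory technical standards and should be confirmed for your entity type.

```python
from datetime import datetime, timedelta

def dora_reporting_deadlines(detected_at: datetime) -> dict:
    """Indicative reporting milestones for a major ICT incident,
    based on the timeline described above."""
    return {
        "initial_notification_by": detected_at.replace(hour=23, minute=59),
        "intermediate_report_by": detected_at + timedelta(days=3),
        "final_report_by": detected_at + timedelta(days=30),
    }

detected = datetime(2025, 3, 10, 14, 30)
for milestone, due in dora_reporting_deadlines(detected).items():
    print(f"{milestone}: {due:%Y-%m-%d %H:%M}")
```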

Internal Logging and Documentation
Maintain detailed internal logs of all incidents, including minor ones. Logs should support trend analysis and continuous improvement.

Post-Incident Review and Lessons Learned
A post-mortem analysis is required to identify root causes and improve controls. Findings must be documented and used to update risk management and response plans.

Communication and Stakeholder Management
Ensure clear internal and external communication during incidents. This includes informing customers, partners, and regulators as appropriate.

Integration with Business Continuity and Disaster Recovery
Plans should be tested regularly to ensure effectiveness under real-world conditions.

So what does this mean in reality?

From our experience at Cyberfort it means Financial Services firms must have tailored incident response plans in place to be able to detect and respond to cyber security incidents, while mitigating the impact on operations and reputation.

This is an area all UK businesses need to improve on. The latest UK Government Cyber Security Breaches Survey 2025 estimates that 53% of medium-sized businesses and 75% of large businesses have formal, tested incident response plans in place. These plans should include technical, communication, and legal playbooks. But those responsible for cyber security in their organisation should be asking themselves:

  • When was the last time the company incident response plan was truly tested?
  • Are cyber security teams and members of the board aware of gaps that may exist and potential impact if not addressed?
  • If gaps do exist in terms of knowledge, process or people skills how are these being addressed in a timely manner before a live incident occurs?

If the right expertise does not exist in-house, then a specialist third-party cyber security supplier with knowledge of DORA and incident response best practices should be consulted, so those practices can be adopted into the organisation.

Digital Operational Resilience Testing

A crucial part of DORA that extends past traditional cyber security is an organisation’s ability to operate despite an adverse cyber event, requiring a set of detailed and tested response plans that address the risks and prevalent threats and will prove effective. This requires:

  • A risk-led, comprehensive testing schedule and a range of testing methods
  • Independence and objectivity in testing
  • Mandatory annual testing of critical systems
  • A remediation mechanism to classify, prioritise, and remediate any issues
  • A proportionality principle to determine the scope of testing, based on size, complexity, and risk profile

Recognising the above steps is only the beginning. A proactive approach to cyber resilience needs to be implemented. By being proactive with cyber resilience, financial services organisations can minimise disruptions to their operations and strengthen their ability to maintain operational continuity and protect sensitive data. By making cyber resilience a high priority, financial services organisations can strengthen their defence against potential breaches and build a culture of preparedness and responsiveness in a reactive cyber security world. This proactive approach will help to mitigate risks and position a financial services organisation as a trusted digital partner in their customers’ and suppliers’ minds.

Third-Party Risk Management

DORA absorbs the supply chain into the regulation by giving financial entities the responsibility to ascertain, assess, and monitor their third-party providers. DORA expects entities to identify all third-party service providers, classify them based on the criticality of the services they provide, and maintain a comprehensive register of information. Then, for each, entities must conduct risk assessments before entering into contracts, to understand their security posture, resilience capabilities, and compliance with DORA standards.

The contracts themselves must meet minimum contractual standards, for example, include specific clauses covering topics such as Service Level Agreements (SLAs), audit and inspection rights, ongoing monitoring and oversight, and termination and exit strategies.

The financial entity must continuously monitor the performance and risk exposure of their third parties, including regular reviews, audits, and updates to risk assessments, and ensure that third-party risk management is integrated into their overall risk governance framework, with clear roles and responsibilities at the management level.
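As an illustration of what one entry in such a register might capture, the sketch below models a third-party record with criticality, contractual safeguards, and review dates, plus a simple overdue-review check. The fields are assumptions for illustration; the mandatory register templates are defined in DORA’s implementing standards.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ThirdPartyRecord:
    """One illustrative entry in a third-party provider register."""
    provider: str
    service: str
    criticality: str                     # e.g. "critical" or "standard"
    contract_clauses: list = field(default_factory=list)
    last_risk_assessment: Optional[date] = None
    next_review_due: Optional[date] = None

    def review_overdue(self, today: date) -> bool:
        return self.next_review_due is not None and today > self.next_review_due

register = [
    ThirdPartyRecord(
        provider="Example Cloud Ltd",    # hypothetical supplier
        service="Core banking platform hosting",
        criticality="critical",
        contract_clauses=["SLA", "audit rights", "exit strategy"],
        last_risk_assessment=date(2025, 1, 15),
        next_review_due=date(2025, 7, 15),
    ),
]

overdue = [r.provider for r in register if r.review_overdue(date(2025, 9, 1))]
print("Reviews overdue:", overdue)
```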

In addition, in certain situations, DORA introduces EU-level oversight for critical third-party providers (e.g., major cloud service providers), ensuring they meet stringent operational and security standards.

This may sound simple in theory, but the practical reality from our experience at Cyberfort is that supply chain cyber security is complex and can be difficult to manage. This is demonstrated by the fact that only 14% of UK organisations have undertaken formal risk reviews of their supply chain security in the past 12 months. At Cyberfort we recommend that all financial services firms take action with the following 8 steps to improve their supply chain security:

  • Validate your own supply chain; suppliers and sub-suppliers often go down in size, and hence in cyber maturity.
  • Ensure your security controls are appropriate for the level of business risk you’re dealing with.
  • Migrate to SaaS where possible, and utilise the security packages for an efficient, effective, minimal-effort approach to security management.
  • Validate and evidence the controls that your suppliers have in place; the effort is theirs, but hold the supplier to account.
  • Make sure you have Cyber Essentials Plus.
  • Keep on top of penetration testing and vulnerability management, and keep track of evidence.
  • Understand what your customer expects of you in security and compliance, and price this into your solution.
  • Ask your customer about their controls, likely targets and defences; find a trusted advisor/partner to help you extrapolate this to the threats you are likely to face.

Information Sharing

Financial entities are encouraged to voluntarily exchange cyber threat intelligence including indicators of compromise (IOCs), tactics, techniques, procedures (TTPs), cybersecurity alerts, and configuration tools. The goal is to enhance collective digital operational resilience by improving awareness, detection, and response capabilities.
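For a sense of what an exchanged indicator of compromise looks like in practice, here is a minimal example expressed in STIX 2.1, the open standard widely used by sharing communities. Every value below (the UUID, IP address, and names) is fabricated for illustration; the IP sits in a reserved documentation range.

```python
import json

# A minimal, illustrative indicator of compromise in STIX 2.1 format.
ioc = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--0b1c2d3e-4f5a-4b7c-8d9e-0f1a2b3c4d5e",
    "created": "2025-03-10T09:00:00.000Z",
    "modified": "2025-03-10T09:00:00.000Z",
    "name": "C2 server observed in phishing campaign",
    "pattern": "[ipv4-addr:value = '203.0.113.42']",
    "pattern_type": "stix",
    "valid_from": "2025-03-10T09:00:00Z",
}

print(json.dumps(ioc, indent=2))
```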

These exchanges must occur within secure and structured environments to ensure that shared information is handled responsibly. Entities are required to uphold strict confidentiality and data protection standards, ensuring that sensitive business or personal data is not exposed or misused. Additionally, any formal participation in information-sharing arrangements must be reported to the relevant competent authorities, promoting transparency and regulatory oversight. The ultimate aim is to support proactive threat detection and coordinated responses across the financial sector.

Those who are responsible for cyber security in their financial services organisation should start by asking themselves whether they are participating in information sharing schemes (e.g. ISACs), and whether they have the tools in place to effectively process threat information shared with the organisation, so knowledge can be disseminated in a timely and appropriate manner.

The Strategic Business View of DORA

The Digital Operational Resilience Act (DORA) marks a significant shift in how financial institutions must approach digital risk. Rather than treating cyber security and ICT risk as isolated compliance tasks, DORA requires organisations to embed resilience into their core operations. This means preparing not just to prevent disruptions, but to detect, respond to, and recover from them swiftly and effectively. For many firms, this represents a move from reactive IT support to a proactive, strategic resilience posture.

One of the most notable changes is the increased accountability placed on senior leadership. DORA mandates that boards and executive teams take ownership of risk management, integrating it into the broader enterprise risk strategy. This shift demands greater visibility, governance, and cross-functional collaboration, particularly between IT, compliance, legal, and business units. It also means that digital resilience is no longer just a technical issue; it’s a boardroom priority.

Implementing DORA may also require significant investment in technology and infrastructure. Legacy systems may need to be upgraded or replaced to meet the regulation’s requirements for monitoring, testing, and recovery. Additionally, organisations must reassess their relationships with third-party providers. DORA introduces strict oversight and contractual obligations for these vendors, especially those deemed critical, making third-party risk management a strategic concern.

Finally, DORA has global implications. While it is an EU regulation, its reach extends to any non-EU firm offering financial services within the EU. This is likely to drive broader alignment with DORA’s standards across international markets. For organisations that embrace this shift early, DORA offers an opportunity to build trust, enhance operational resilience, and gain a competitive edge in an increasingly digital financial ecosystem.

Over the past decade, businesses have moved significant volumes of data and applications to public cloud services. Many organisations did this as they wanted easy access to scalable, flexible infrastructure at a low cost compared to traditional infrastructure and data storage options. However, many businesses are now realising that the public cloud isn’t always the best fit. Hidden costs, performance issues, compliance concerns, and security risks are driving a shift back to dedicated hosting solutions.

In this blog article Cyberfort Cloud and Data Centre professionals discuss why moving workloads from hyperscale public clouds to a specialist hosting provider can offer greater control, cost efficiency, and performance optimisation.

What is Cloud Repatriation?

Cloud repatriation has become a growing discussion point for IT teams over the past 12 months. This is because many businesses are realising that, due to the complexity and critical nature of the data they store in the public cloud, the services they have chosen may not be as secure and compliant as first envisaged.

So, what do we mean by cloud repatriation? In summary, cloud repatriation means shifting the balance between cloud and on-premises hosting infrastructure. This type of migration can happen for many different reasons, including wanting cost certainty, having dedicated specialist teams to address performance issues, ensuring the data centres where data is stored are secure and compliant with national and industry regulations, or a business reassessing its overall cloud strategy.

It is important to note that cloud repatriation should not be viewed as a replacement for a cloud computing strategy. It is a strategy that reflects the changing nature of IT decision-making, where businesses are evaluating and adjusting their technology models to align with changing business demands. It is also critical to address the misconception that cloud repatriation represents a step backwards. Some people may view on-premises models as a secondary option to public cloud hosting, especially if an organisation previously had a ‘cloud first’ strategy in place. At Cyberfort we believe it is a strategic decision focused on optimising resource allocation, ensuring performance levels are met, and mitigating compliance and security risks.

Why organisations should be considering cloud repatriation

Based on our experience at Cyberfort and from discussions we have had with our customers over the past 12 months, there are 7 key reasons why businesses are considering cloud repatriation. In the next section of this article, we will explore each of the 7 areas to help readers decide if cloud repatriation is the right choice for their business.

Cost Certainty

One of the biggest myths about moving to the public cloud is that it always results in cost savings and that cloud costs are easy to control. The pay-as-you-go model may seem attractive initially, but as businesses scale and their needs grow, cloud expenses can spiral out of control. Data egress fees, API call costs, and storage expenses often lead to unpredictable pricing. Additionally, companies often end up paying for unused or underutilised cloud resources when committing to reservations or savings plans, further inflating their IT spend. A number of industry commentators estimate, for example, that more than 30% of public cloud spend is wasted each year.

By repatriating workloads to a specialist hosting provider, businesses can benefit from fixed pricing models that align with their actual resource needs. Dedicated hosting solutions eliminate unpredictable expenses and provide greater visibility into long-term costs. Additionally, businesses can leverage ‘right-sized infrastructure’, ensuring they pay only for the resources they need. This approach not only brings financial stability but also allows for better budget forecasting, reducing the risk of unexpected operational costs. With the right hosting provider, companies can optimise their IT spending while maintaining high-performance infrastructure.
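As a simple illustration of why usage-based bills are hard to forecast, the sketch below compares an illustrative pay-as-you-go month with a fixed colocation fee. Every rate is an assumed placeholder for the arithmetic, not a quote from any provider.

```python
# Illustrative monthly cost model. All rates are assumed placeholders.
COMPUTE_PER_HOUR = 0.40     # per-instance hourly rate (GBP, assumed)
STORAGE_PER_GB = 0.02       # per-GB-month storage rate (GBP, assumed)
EGRESS_PER_GB = 0.07        # per-GB data egress rate (GBP, assumed)
COLOCATION_FIXED = 3500.00  # flat monthly colocation fee (GBP, assumed)

def public_cloud_monthly(instances: int, hours: int,
                         storage_gb: int, egress_gb: int) -> float:
    """Usage-based bill: it grows with traffic, so egress spikes surprise you."""
    return (instances * hours * COMPUTE_PER_HOUR
            + storage_gb * STORAGE_PER_GB
            + egress_gb * EGRESS_PER_GB)

quiet = public_cloud_monthly(instances=10, hours=730,
                             storage_gb=20_000, egress_gb=5_000)
busy = public_cloud_monthly(instances=10, hours=730,
                            storage_gb=20_000, egress_gb=60_000)

print(f"Cloud, quiet month:    £{quiet:,.2f}")   # £3,670.00
print(f"Cloud, busy month:     £{busy:,.2f}")    # £7,520.00 - egress spike
print(f"Colocation, any month: £{COLOCATION_FIXED:,.2f}")
```

The same workload can nearly double in cost purely because more data left the platform, which is exactly the kind of variance fixed pricing removes.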

Performance and Latency Improvements

Public cloud environments operate on a shared infrastructure, meaning businesses often contend for resources with other tenants. This can result in unpredictable performance fluctuations, latency issues, and bottlenecks, especially for applications requiring real-time processing, high availability, or intensive workloads such as data analytics and machine learning.

Repatriating to a specialist hosting provider ensures businesses receive dedicated resources that are optimised for their specific use cases. This setup allows for greater consistency in application performance, as companies are no longer at the mercy of cloud provider traffic congestion or ‘noisy neighbours’ in multi-tenant environments. Specialist hosting providers also offer tailored network configurations, allowing businesses to optimise connectivity and reduce latency by placing workloads closer to end-users or integrating directly with private networks.

Additionally, dedicated infrastructure minimises downtime and enhances reliability. Hosting providers like Cyberfort can offer service level agreements (SLAs) that guarantee performance thresholds, ensuring that data and applications remain highly available. With more granular control over hardware and network resources, businesses can tune their IT environments for peak efficiency, ultimately improving user experience and operational effectiveness.

Enhanced Security and Compliance

Security concerns are among the top reasons organisations are reconsidering their reliance on public cloud providers. While hyperscale cloud platforms offer extensive security tools, they operate on a shared responsibility model, meaning businesses must still manage their own configurations, access controls, and compliance requirements. Misconfigurations, insider threats, and third-party dependencies introduce security vulnerabilities that can be challenging to mitigate in a complex cloud environment.

By moving workloads to a specialist hosting provider, businesses can leverage dedicated security architectures tailored to their specific regulatory needs. For example, at Cyberfort we offer fully managed security services, including firewalls, intrusion detection systems, data encryption, and dedicated security monitoring. Unlike public cloud platforms, which require businesses to implement their own security measures, specialist hosting providers like Cyberfort can include these protections as part of their service offerings.

Compliance is another critical factor. Industries such as retail, finance, and government must adhere to strict data protection regulations like GDPR, PCI-DSS and SOC 2. Specialist hosting providers often have expertise in regulatory compliance, ensuring businesses remain in alignment with industry standards while minimising the burden of managing complex compliance requirements internally.

Greater Control and Customisation

One of the main downsides of public cloud environments is their standardised approach to infrastructure deployment. While this model works well for companies seeking rapid scalability, it often forces businesses to adapt their applications to fit within a rigid framework. This lack of flexibility can lead to inefficiencies, as organisations may be unable to adjust their environments for optimal performance.

Repatriating workloads to a specialist hosting provider allows businesses to regain full control over their infrastructure. Companies can customise their hardware specifications, operating systems, and networking configurations to match their unique requirements. This level of control enables businesses to deploy mission critical applications with the exact requirements they need to deliver the right performance for end users, ensuring better resource utilisation and performance optimisation.

Additionally, specialist hosting providers offer tailored service models, allowing IT teams to select the level of management they require. Whether a business needs fully managed hosting or just infrastructure support, it can work with providers to create a customised solution. This flexibility ensures that IT teams can focus on strategic initiatives rather than dealing with cloud platform limitations and vendor-imposed restrictions.

Data Sovereignty and Reduced Vendor Lock-In

Public cloud providers often use proprietary technologies and pricing structures that make migrating workloads complex and expensive. Vendor lock-in can severely limit an organisation’s ability to shift its IT strategy or adapt to changing business needs. Additionally, data sovereignty concerns arise when businesses operate in regions with strict regulations on where data can be stored and processed.

Repatriating workloads to a specialist hosting provider gives businesses more control over their data, ensuring compliance with regional regulations. Many hosting providers offer data residency options, allowing organisations to choose where their data is stored. This is particularly important for industries subject to legal restrictions on data movement, such as financial services, healthcare, and government.

Open-source and hybrid hosting solutions provided by specialist providers allow businesses to avoid reliance on a single cloud vendor. By maintaining infrastructure that is not tied to proprietary cloud technologies, organisations gain the flexibility to transition between hosting environments as needed. This reduces long-term risks and provides a strategic advantage by preventing cloud lock-in constraints from limiting future innovation.

Sustainability and Energy Efficiency

As organisations strive to reduce their environmental impact, the sustainability of IT infrastructure has become a critical consideration. While public cloud providers claim to operate energy-efficient data centres, their sheer scale results in significant energy consumption and carbon emissions. Businesses looking to enhance their corporate sustainability initiatives may find that repatriating workloads to a specialist hosting provider presents a greener alternative.

Specialist hosting providers often deploy energy-efficient hardware, optimise data centre cooling systems, and utilise renewable energy sources. Some providers also prioritise sustainable practices, such as carbon-neutral operations, server recycling programs, and lower overall power consumption. By working with environmentally conscious hosting providers, businesses can actively contribute to reducing their carbon footprint.

Having the ‘right-sized’ infrastructure plays a crucial role in energy efficiency. Unlike public cloud environments that encourage over-provisioning, specialist hosting providers design customised solutions that align with actual resource needs. This prevents unnecessary energy waste and ensures that IT resources are utilised as efficiently as possible. For organisations committed to sustainability, moving away from hyperscale public clouds can be a strategic step toward achieving environmental goals.

Improved Support and Service Quality

Public cloud providers serve millions of customers, making personalised support difficult to obtain. Many organisations struggle with slow response times, automated troubleshooting systems, and limited access to expert engineers. When critical applications experience issues, businesses may face delays that impact operations and customer experience.

Specialist hosting providers, by contrast, offer high-touch, customer-focused support. For example, at Cyberfort we have dedicated engineering teams available to each customer. Businesses benefit from direct access to experienced engineers, proactive monitoring, and customised service agreements tailored to their operational needs. Unlike the generalised support provided by hyperscale cloud providers, specialist hosting providers take a hands-on approach to problem resolution.

Specialist providers can also offer more flexible support models, including dedicated account managers and 24/7 monitoring services. This ensures that businesses receive timely assistance when issues arise, minimising downtime and improving overall reliability. For businesses that depend on mission-critical applications, high-quality support can make a significant difference in maintaining business continuity.


In 2025, businesses are managing constantly growing volumes of complex and critical data, making efficient and secure data management a ‘must have’. Organisations operating in industries such as finance, healthcare, transport, retail and manufacturing are facing increasing demands for data security, compliance, uptime, and scalability. Traditional on-premises datacentres and public cloud providers may not be able to support and manage the right environments required to store, manage and transmit complex and critical data. This is where colocation with a specialist provider can become a strategic choice for managing data in a secure, resilient and compliant infrastructure environment.

In this blog article Cyberfort’s datacentre professionals discuss why businesses with complex and critical data management requirements should consider a colocation strategy in 2025.

What is colocation in a datacentre?

First of all, let’s cover what colocation is and why it should be a key strategy consideration for IT teams. Colocation in a datacentre refers to the practice of renting physical space within a specialised facility to house and operate servers, networking equipment, and other IT infrastructure. Essentially, businesses place their own equipment in a datacentre provided and managed by a third-party colocation provider.

Colocation is not a ‘one size fits all’ strategy. Some businesses simply want the space so they can manage their own equipment. Others want additional support, with dedicated datacentre professionals available to take care of everything for them. There is also a middle ground, with some customers taking full responsibility for managing their own equipment while requiring technical support for certain tasks.

Understanding Datacentre Colocation Requirements

Before beginning the search for a colocation provider, those responsible for data management should conduct a thorough internal assessment of their organisation’s requirements. This step saves time during the evaluation process and helps prevent misalignment later, when the solution is deployed.

To start with, IT teams should examine current infrastructure requirements in terms of power, space, location and networking, in addition to data management, security and compliance.

Consider power needs carefully: not just what is being used today, but also what is likely to be needed as the organisation grows. Many organisations underestimate their future power requirements, leading to costly migrations or compromised operations later. Take the time to document your current kilowatt (kW) usage and project it forward based on your growth plans, as in the sketch below.
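
As a simple illustration, the sketch below projects power demand forward using compound growth. The 12 kW starting draw and 20% annual growth rate are hypothetical placeholders; substitute your own measured usage and growth plans.

# Project future power requirements from current draw and expected growth.
# Both input values are hypothetical; replace them with measured figures.

current_kw = 12.0       # measured draw across existing racks
annual_growth = 0.20    # assumed compound annual growth in demand

for year in range(1, 6):
    projected = current_kw * (1 + annual_growth) ** year
    print(f"Year {year}: {projected:.1f} kW")

# A 12 kW estate growing 20% a year needs roughly 29.9 kW by year 5,
# worth confirming the facility can supply before signing.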

Next, review your space requirements. Thought should be given to the number of racks or cabinets you need today and how your footprint might expand over the next three years as a minimum. Consolidation onto newer, denser equipment can sometimes offset the need for additional space, but may increase your power and cooling requirements.

Network connectivity is another crucial part of any requirements analysis. Bandwidth needs, capacity requirements, and any specific carrier preferences all need to be assessed before deciding on a colocation facility. If you serve customers in particular geographic regions, you’ll want to factor in network routes and points of presence that align with your customer base. 

Selecting a Colocation Provider

Choosing the right colocation provider is a critical decision that will affect an organisation’s IT infrastructure, operational efficiency, and long-term scalability, whether you are weighing building your own datacentre against partnering with a provider. At Cyberfort we believe there are 6 key considerations when selecting the right colocation provider for a business. They are:

Redundancy and Uptime
Ensure the colocation provider offers power, cooling, and network redundancy, backed by strong SLAs for uptime.

Scalability
The facility should accommodate space, power, and bandwidth growth.

Security and Compliance
It is important that the provider has strong physical security and compliance certifications.

Support
24/7 technical support and remote hands services are essential.

Cost
Consider both upfront and ongoing expenses with transparent pricing.

Location
Proximity to business operations reduces latency and makes the datacentre quicker to access for maintenance, while geographic redundancy ensures better disaster recovery.

Top 5 reasons why businesses with complex and critical data management requirements should be considering a colocation strategy with a specialist provider in 2025

Now we have covered the basics in terms of what colocation is, the key requirements to capture before deciding on a colocation strategy and considerations when selecting a colocation provider, the next part of the article will discuss the 5 key reasons why businesses should be exploring a colocation strategy in 2025.

Security and Compliance

It is no secret that cyber security threats are evolving at an exceptional rate, and regulatory requirements are becoming more stringent. Businesses handling sensitive data must prioritise security and compliance to avoid legal repercussions and reputational damage.

Many organisations have built their on-premises datacentres with legacy technology, which can carry a variety of security and compliance risks, especially if equipment is coming to end of life, is difficult to upgrade, or the skills required to maintain it are becoming scarce or costly. By moving to a colocation facility, these challenges can be mitigated, as data will be stored and managed in a secure, resilient and compliant datacentre facility.

So why should an organisation be evaluating a move to a specialist colocation provider if it is looking to improve security and compliance? From our experience at Cyberfort, discussing key security and compliance requirements with our customers, we have found there are 6 key reasons why colocation facilities are chosen ahead of on-premises or public cloud solutions when security and compliance are crucial to business success.

Physical security
Colocation facilities have multiple layers of physical security to prevent unauthorised access and protect equipment, something that can be difficult to replicate in many on-premises datacentres. This includes measures such as access controls, surveillance cameras, locked cabinets, dedicated secure sites, security guards, and restricted access to authorised personnel only.

Facility design
At Cyberfort we have specifically built our datacentres with security in mind. The datacentres are located in ex-military nuclear bunkers with reinforced walls and secure entrances to prevent unauthorised entry. Access points are monitored and logged, ensuring a record of individuals who enter and exit the facility.

Network security
Colocation providers should have established strong network security protocols to defend against cyber threats. At Cyberfort, protective measures go beyond basic physical and network security: they include advanced systems such as firewalls, intrusion detection and prevention systems (IDPS), DDoS protection, continuous traffic monitoring, and comprehensive security protocols. These measures collectively safeguard connectivity and data transmission within the facility, ensuring the integrity and confidentiality of the hosted infrastructure.

Surveillance and monitoring
Datacentres housing critical and complex data should employ advanced surveillance systems and 24/7 monitoring to keep a close watch on the facility. Surveillance cameras should be strategically positioned to monitor critical areas, and security personnel should continually monitor activity and respond to any potential security breaches or incidents.

Environmental controls
Datacentre facilities also need to maintain appropriate environmental conditions to ensure the optimal performance of the hosted equipment. This includes temperature and humidity monitoring and control systems, preventing overheating or other environmental factors that could adversely affect the servers and networking gear.

Compliance and certifications
Respected colocation providers uphold industry standards and regulations, demonstrating their dedication to security and compliance. For example, at Cyberfort we hold certifications such as ISO 27001, 9001, 14001 and 45001 as evidence of our commitment to maintaining robust security practices and meeting stringent compliance requirements. We also adhere to industry regulations and schemes such as GDPR, Cyber Essentials Plus and PCI DSS. This ensures our customers’ businesses remain compliant without investing heavily in in-house compliance management.

Guaranteed Uptime and Business Continuity

Downtime can be devastating for businesses relying on real-time data processing, e-commerce platforms, or critical applications. Colocation providers can offer redundant infrastructure, ensuring high availability and business continuity. Those who are responsible for their on-premises infrastructure and cloud computing should ask themselves if their current datacentre facilities have:

Multiple redundant power sources in case of a power outage
Datacentre facilities must ensure consistent and reliable power availability. For example, at Cyberfort we employ redundant power systems, including diverse incoming feeds from the grid, secure and resilient supply chains, backup generators and uninterruptible power supply (UPS) systems, to guarantee a continuous and uninterrupted power supply to the equipment housed within the datacentre, even during national grid power outages or disruptions.

Network redundancy in place with failover mechanisms to ensure uninterrupted connectivity
Utilise multiple connections and pathways to maintain connectivity even if a primary link fails. Failover mechanisms instantly reroute traffic, minimising disruption and maintaining seamless access to critical systems.

Built-in disaster recovery and backup solutions in place
These provide robust protection against data loss and downtime, ensuring swift recovery from unexpected incidents. Automated backups and replication processes guarantee business continuity by safeguarding critical data and systems.

Existing staffing levels which can guarantee 24/7 uptime
Staff on site around the clock, supported by backup power generation in the event of a grid failure, ensure constant monitoring and rapid response to any potential issues, guaranteeing 24/7/365 uptime for services.

A best-in-class colocation provider should be able to provide an organisation with all of the above. By leveraging colocation, businesses can mitigate the financial and operational risks associated with system outages and data loss.

Scalability and Flexibility to Support Growth

Organisations must be able to adapt to fluctuating demands and evolving data management needs. A colocation strategy can offer scalability and flexibility, providing businesses with the ability to adjust infrastructure without incurring significant capital expenditure.

At Cyberfort we have designed our colocation facilities to accommodate our customers’ rapid growth. Businesses can scale infrastructure as needed, whether adding more servers to meet demand or consolidating resources during quieter periods. This flexibility eliminates the physical space, power, and cooling constraints often associated with on-premises datacentres.

Colocation also supports highly customised infrastructure. Unlike public cloud solutions, which are largely standardised, colocation allows businesses to tailor their hardware, software, and network configurations to suit specific performance, compliance, or application requirements.

Additionally, companies with national operations benefit from a provider’s geographically distributed facilities, enabling localised deployments to serve different regions more effectively and reduce latency.

In summary, colocation with a specialist provider empowers businesses to respond to market demands quickly, scale efficiently, and future-proof their operations without the burden of continuous capital investment in physical infrastructure.

Cost Certainty and Predictable Expenses

Managing an in-house datacentre is expensive, with costs covering infrastructure, security, maintenance, power, cooling, and staffing. Colocation can significantly reduce these costs while providing predictable pricing models. Key financial benefits of a colocation strategy include:

Lower Capital Expenditure (CapEx)
Instead of investing hundreds of thousands, or even millions, in building and maintaining an on-premises datacentre, businesses can leverage colocation providers’ infrastructure under an operational expense (OpEx) model.

Reduced Operational Costs
Shared power, cooling, and security costs make colocation more cost-effective than maintaining an in-house facility.

Energy Efficiency
Colocation providers utilise advanced cooling technologies, green energy solutions, and optimised power usage to lower electricity costs and environmental impact.

Transparent and Predictable Billing
Unlike cloud platforms with fluctuating costs, colocation offers fixed-rate contracts, allowing for more accurate budget forecasting.

For businesses managing complex data workloads, colocation presents a financially viable alternative to in-house datacentres or unpredictable cloud expenses.

Access to Expertise and Support

Managing a high-performance datacentre infrastructure requires specialised skills that many organisations do not have access to in-house. A colocation provider offers access to experienced professionals who ensure optimal performance, security, and efficiency. Key advantages include:

24/7 Monitoring and Support
Expert engineers and technicians provide round-the-clock monitoring, maintenance, and incident response.

Proactive Maintenance and Upgrades
Colocation providers continuously invest in cutting-edge technology, ensuring clients benefit from the latest advancements in infrastructure and security.

Network Optimisation
High-speed, high-capacity network connectivity is managed by specialists, ensuring optimal data flow and application performance.

Hands-On Remote Support
Remote hands services allow businesses to troubleshoot and perform maintenance tasks without sending staff to the datacentre.

By partnering with a specialist colocation provider, businesses gain access to expertise that enhances efficiency, security, and overall IT performance without the burden of hiring and training internal staff.


Artificial intelligence (AI) is rapidly transforming industries, driving innovation, and creating new opportunities. However, it also presents unique challenges related to ethics, security, accountability, and compliance with emerging regulations like the EU AI Act. In this landscape, ISO 42001 has emerged as the cornerstone for responsible AI governance, aiming to provide organisations with a structured framework to mitigate risks, foster trust, and ensure ethical practices are being implemented.

In our previous blog, we delved into the EU AI Act and discussed how its main goal is to regulate applications using AI by managing and mitigating risks, while fostering innovation.

Building upon that foundation, we now shift attention to ISO 42001, a pivotal standard designed to guide organisations in meeting AI governance requirements like those of the EU AI Act. In this blog, we explore the key components of ISO 42001, its role in managing AI risks, its alignment with complementary tools – such as the NIST AI Risk Management Framework (AI RMF) – and how Cyberfort can help organisations implement this vital standard effectively.

What is ISO 42001?

ISO 42001 is the first international standard specifically designed to address the governance and risk management needs of AI systems. It offers organisations a comprehensive framework to operationalise ethical, transparent, and secure AI practices, while complying with regulatory requirements. Providing guidelines for the entire AI lifecycle—from design and development to deployment and decommissioning—ISO 42001 helps organisations align their AI initiatives with stakeholder expectations and regulatory demands.

Key Components of ISO 42001

Operational Planning

· Establish an AI policy and clearly define the AI system’s objectives.

· Maintain a record to demonstrate the planning, execution, monitoring, and improvement of AI system processes throughout the entire AI lifecycle.

· Anticipate and plan for unintended changes or outcomes to preserve the integrity of the AI system.

Risk Management

· Proactively identify, assess, and mitigate risks across the AI lifecycle.

· Address potential biases, data security vulnerabilities, and ethical concerns.

· Enable organisations to prepare for and respond to emerging risks effectively.

Human Oversight

· Establish mechanisms to ensure critical AI decisions remain under human control.

· Foster accountability and prevent automated errors from escalating.

· Build trust by enabling human intervention when necessary.

Data Governance

· Maintain data accuracy, representativeness, and integrity to ensure fair outcomes.

· Develop protocols for ethical data acquisition, usage, and storage.

· Mitigate risks associated with biased or low-quality data.

Continuous Improvement

· Incorporate iterative evaluations to refine AI systems and governance practices.

· Use feedback loops and audits to adapt to regulatory updates and technological advancements.

· Foster resilience by embedding adaptive capabilities into AI systems.

The Role of ISO 42001 in AI Governance

ISO 42001 is more than a compliance tool; it is a strategic enabler for responsible AI development, providing a structured approach to risk management, accountability, and transparency. As AI systems become increasingly embedded in critical business processes, organisations need a scalable and adaptable governance framework that aligns with both regulatory mandates and ethical considerations. By implementing ISO 42001, organisations can:

Enhance Transparency and Trust
Provide stakeholders with clear visibility into AI processes and decision-making mechanisms, ensuring explainability and reducing concerns over opaque AI models.

Mitigate Ethical and Operational Risks
Proactively address challenges such as bias, security vulnerabilities, and unintended consequences through structured risk assessment methodologies.

Streamline Regulatory Compliance
Align organisational practices with stringent regulations like the EU AI Act, UK AI Code of Practice, and other emerging standards that mandate robust governance for high-risk AI systems.

Enable Scalable Governance
Adapt the framework to suit organisations of any size, from startups to multinational corporations, ensuring governance structures evolve alongside AI capabilities.

Demonstrate Compliance and Strengthen Reputation
Achieve ISO 42001 certification by successfully passing external audit assessments conducted by accredited certification bodies, positioning the organisation as a leader in responsible AI adoption.

Drive Continuous Improvement
Establish iterative monitoring and evaluation processes to refine AI governance, ensuring alignment with evolving risks, regulatory changes, and ethical standards.

NIST AI RMF: A Complementary Tool

While ISO 42001 provides a structured, standardised approach to AI governance, the NIST AI Risk Management Framework (AI RMF) complements it by offering a flexible, iterative framework for managing AI-related risks. The NIST AI RMF is particularly effective in dynamic environments where AI risks evolve rapidly, requiring continuous assessment and adaptation. When used together, these frameworks enable organisations to build resilient, responsible AI systems that align with global compliance requirements.

By integrating ISO 42001 and the NIST AI RMF, organisations can:

Govern AI Systems Holistically
Combine ISO 42001’s structured governance principles with NIST AI RMF’s adaptive risk identification and mitigation strategies, ensuring a well-rounded AI risk management approach.

Enhance Risk Adaptability
Leverage NIST’s “Map, Measure, Manage” functions to proactively detect and mitigate AI risks, ensuring AI systems remain secure, ethical, and aligned with both regulatory and operational needs.

Achieve Comprehensive Compliance
Align both frameworks to meet global standards, such as the EU AI Act, UK AI Code of Practice, and OECD AI Principles, ensuring AI governance remains robust and future-proof.

Improve AI Resilience and Security
Apply NIST AI RMF’s iterative risk evaluation process to reinforce ISO 42001’s security mandates, strengthening defences against adversarial threats, data breaches, and unintended AI failures.

Support Ethical and Explainable AI
Utilise NIST’s transparency and explainability guidelines alongside ISO 42001’s governance principles to ensure AI systems are interpretable, fair, and accountable.

The combination of ISO 42001 and NIST AI RMF provides organisations with both structure and agility, enabling them to proactively manage AI risks while fostering innovation and compliance.

ISO 42001 and the UK AI Code of Practice

While the EU AI Act is a legally binding regulatory framework, the UK AI Code of Practice serves as a voluntary set of principles designed to help organisations adopt AI responsibly. Although the UK has opted for a more flexible, industry-led approach to AI governance, the UK AI Code of Practice aligns closely with global AI standards and emerging regulatory trends, making it a valuable guide for businesses seeking to future-proof their AI strategies.

The UK AI Code of Practice shares many objectives with ISO 42001, particularly in areas such as:

Transparency
Ensuring AI decision-making processes are explainable, auditable, and fair. Both frameworks promote algorithmic accountability, requiring organisations to document AI development processes and provide stakeholders with clarity on how AI-driven decisions are made.

Accountability
Assigning clear responsibility for AI system outcomes. ISO 42001 formalises governance structures, while the AI Code of Practice encourages businesses to designate AI ethics officers, compliance leads, or governance committees to oversee AI deployment.

Risk Management
Encouraging organisations to assess and mitigate AI-related risks proactively. The AI Code of Practice recommends continuous risk assessments, aligning with ISO 42001’s structured risk management framework to ensure AI remains ethical, unbiased, and secure.

The Business Case for UK Organisations

For UK businesses, aligning with ISO 42001 and the UK AI Code of Practice provides a competitive advantage, demonstrating a commitment to responsible AI use, ethical decision-making, and regulatory preparedness. Key benefits include:

Regulatory Readiness
Although voluntary today, AI governance standards may become mandatory in the future. Proactively adopting ISO 42001 and the AI Code of Practice prepares businesses for potential future UK regulations.

Global Market Access
UK companies developing, selling, or deploying AI in EU markets must comply with the EU AI Act. Aligning with ISO 42001 ensures seamless regulatory alignment across multiple jurisdictions.

Enhanced Trust and Brand Reputation
Organisations that demonstrate strong AI governance are more likely to gain stakeholder confidence, reduce compliance risks, and strengthen their brand’s credibility in AI-driven industries.

As AI governance continues to evolve, businesses that align with established best practices will be well-positioned to lead in ethical AI adoption while maintaining compliance with both UK and international standards.

Cyberfort: Your Trusted Partner in AI Governance

While it can be challenging to mitigate AI-related risks entirely, ISO 42001 and the NIST AI RMF can both be utilised by organisations to demonstrate their commitment to privacy, security, accountability, reliability, and compliance, reducing AI risks and building trust with stakeholders. However, how well an organisation builds this trust depends on its understanding and ability to use these tools effectively for compliance. This is where Cyberfort comes in.

Cyberfort specialises in implementing ISO frameworks and helping organisations navigate complex regulatory landscapes. It holds multiple certifications across the ISO library, demonstrating its ability to understand and navigate information security, including for AI systems.

With a proven track record in secure-by-design practices and AI governance, Cyberfort is uniquely positioned to:

Deliver Tailored Solutions
Design and implement ISO 42001-based governance structures that align with your organisational goals.

Integrate Complementary Tools
Seamlessly combine ISO 42001 with NIST AI RMF to create a robust governance ecosystem.

Ensure Compliance Excellence
Guide organisations in meeting the EU AI Act’s requirements while fostering innovation and operational efficiency.

Future-Proof AI Systems
Embed adaptive governance practices that evolve with regulatory and technological advancements.

Artificial Intelligence (AI) is rapidly reshaping industries, from healthcare and finance to customer service and cyber security. However, along with its benefits come significant risks, including bias in decision-making, privacy violations, and the potential for unchecked surveillance. As AI systems become more integrated into daily life, governments worldwide are grappling with how to regulate their use responsibly.

The EU AI Act is the world’s first comprehensive legislative framework designed to regulate AI applications based on their potential impact on people and society. Unlike sector-specific regulations, this act introduces a risk-based approach, ensuring that AI systems that pose greater risks face stricter requirements, while low-risk AI applications remain largely unregulated.

With enforcement expected to begin in 2026, businesses and AI developers need to prepare now. Whether you’re an AI provider, a company integrating AI solutions, or an organisation concerned about compliance, understanding the key provisions of the EU AI Act is essential. In this blog, we break down the regulation, its risk classifications, compliance obligations, and the steps businesses must take to stay ahead.

What is the EU AI Act?

The EU AI Act is a legislative proposal introduced by the European Commission in April 2021 as part of the EU’s broader strategy for regulating emerging technologies. It seeks to balance innovation with the need to protect fundamental rights, safety, and transparency in AI applications.

Why is this regulation necessary?

AI systems are increasingly making decisions that affect people’s lives, including determining creditworthiness, screening job applicants, and even diagnosing diseases. However, numerous incidents of biased AI models, algorithmic discrimination, and opaque decision-making have raised ethical concerns. High-profile cases, such as Amazon’s AI hiring tool discriminating against women or AI-powered facial recognition leading to wrongful arrests, highlight the urgent need for oversight.

The EU AI Act aims to:

  • Establish clear rules for AI developers, providers, and users.
  • Prevent harmful AI practices, such as social scoring or manipulative algorithms.
  • Foster trust in AI technologies by ensuring transparency and accountability.
  • Promote innovation by providing legal certainty for AI companies.

Why UK Businesses Should Care 

The Act will apply not only to companies within the EU but also to any organisation deploying AI systems that impact EU citizens, similar to how GDPR has global reach.

Although the UK is no longer part of the EU, the EU AI Act holds significant relevance for UK-based organisations due to several factors:

UK Organisations Operating in the EU
Companies developing, selling, or using AI within the EU must comply with the Act to access its markets.

Equivalency Expectations
Following the example of GDPR and the UK Data Protection Act 2018, the UK may introduce a similar AI governance framework to align with international standards and maintain market competitiveness.

Global Leadership and Cooperation
The UK’s recent signing of the world’s first international AI treaty demonstrates its commitment to ethical AI development, human rights, and the rule of law in AI governance. By adhering to frameworks like the EU AI Act and international treaties, UK businesses can lead the charge in developing AI systems that are trusted globally.

Global Standards Alignment
Compliance with the EU AI Act and adherence to international AI treaties position UK companies as leaders in ethical AI practices, enhancing their reputation and global competitiveness.

The Risk-Based Classification of AI Systems

One of the defining features of the EU AI Act is its risk-based classification model, which categorises AI systems based on their potential to harm individuals, businesses, and society. This ensures that the most intrusive and potentially dangerous AI applications face the strictest scrutiny, while less risky applications remain largely unaffected.

Unacceptable Risk – Prohibited AI 

Some AI systems pose such severe risks to human rights, democracy, and personal freedoms that they are outright prohibited under the Act. These include:

Social scoring systems that evaluate people based on behaviour (e.g., China’s credit scoring).
Subliminal AI techniques that manipulate human behaviour in harmful ways.
Real-time biometric surveillance in public spaces (except for narrowly defined law enforcement exceptions).
Predictive policing AI, which uses profiling and behavioural data to pre-emptively classify individuals as likely criminals.

High-Risk AI – Strictly Regulated

AI applications that have a high impact on people’s rights or safety but are still legally permissible fall into this category. These systems must comply with strict regulatory requirements before they can be deployed.

Examples include:

AI in hiring processes (e.g., resume-screening AI, automated interview analysis).
AI in critical infrastructure (e.g., energy grids, air traffic control).
Healthcare AI (e.g., AI-based diagnostics, robotic surgery).
AI in financial services (e.g., automated credit scoring, fraud detection).

Businesses deploying high-risk AI must ensure:

• Human oversight is built into decision-making.
• AI models are trained on unbiased datasets to prevent discrimination.
• Robust cybersecurity protections are in place to prevent adversarial attacks.

Limited Risk – Transparency Obligations

Some AI systems do not pose high risks but still require clear disclosure to users. These include:

AI chatbots (users must be informed they are interacting with AI).
Deepfake generators (AI-generated content must be labelled).

Minimal or No Risk – No Regulation

Most AI applications, such as spam filters, AI-powered recommendation engines, and video game AI, fall into this category and face no additional regulation.

Key Compliance Requirements for Businesses

For companies operating in the AI space, compliance with the EU AI Act is non-negotiable. The most critical obligations include:

  • Risk Management & Governance: Organisations must assess and mitigate AI risks before deployment.
  • Data Governance & Bias Prevention: AI models must be trained on high-quality, unbiased datasets to prevent discrimination (e.g., biased hiring algorithms); a simple bias check is sketched after this list.
  • Transparency & Explainability: Users must understand how AI decisions are made, especially in high-risk applications.
  • Human Oversight: AI systems must allow human intervention to correct errors or override automated decisions when necessary.
  • Cybersecurity & Robustness: AI models must be resilient against adversarial attacks, such as data poisoning or model manipulation.
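
As a minimal illustration of the bias-prevention point above, the sketch below compares selection rates between two groups in entirely made-up screening outcomes. Real bias audits use richer fairness metrics and much larger datasets; a gap like this is a prompt for investigation, not proof of discrimination.

# Compare selection rates across groups in hypothetical screening outcomes.
from collections import defaultdict

# (group, selected) pairs; invented data for illustration only
outcomes = [("A", 1), ("A", 0), ("A", 1), ("A", 1),
            ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

totals, selected = defaultdict(int), defaultdict(int)
for group, picked in outcomes:
    totals[group] += 1
    selected[group] += picked

rates = {g: selected[g] / totals[g] for g in totals}
print(rates)  # {'A': 0.75, 'B': 0.25}

# Demographic parity difference: 0 means equal selection rates.
gap = max(rates.values()) - min(rates.values())
print(f"Selection-rate gap: {gap:.2f}")  # 0.50 here, large enough to investigate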

Penalties for Non-Compliance

Similar to GDPR, the EU AI Act includes severe penalties for violations:

Fines are tiered by severity and based on company turnover (a worked comparison follows the list):

  • Up to €35 million or 7% of global turnover for non-compliance with banned AI practices.
  • Up to €15 million or 3% of turnover for failing to meet high-risk AI obligations.
  • Up to €7.5 million or 1.5% of turnover for providing incorrect documentation.
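
These tiers are commonly read as applying ‘whichever is higher’ for most firms, with SMEs capped at the lower amount; the final legal text governs. Assuming that reading, the sketch below shows how quickly the percentage component dominates for a large company, using a hypothetical €2bn global turnover.

# Illustrative maximum fine exposure per tier, assuming the
# "whichever is higher" reading; verify against the final legal text.

def max_fine(turnover_eur, fixed_cap_eur, pct_of_turnover):
    return max(fixed_cap_eur, turnover_eur * pct_of_turnover)

turnover = 2_000_000_000  # hypothetical EUR 2bn global turnover

tiers = {
    "prohibited AI practices": (35_000_000, 0.07),
    "high-risk obligations": (15_000_000, 0.03),
    "incorrect documentation": (7_500_000, 0.015),
}

for name, (cap, pct) in tiers.items():
    print(f"{name}: up to EUR {max_fine(turnover, cap, pct):,.0f}")

# At EUR 2bn turnover the 7% tier reaches EUR 140m, four times the EUR 35m floor.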

How to Prepare for the EU AI Act

For businesses leveraging AI, preparation is essential.

At Cyberfort we recommend all organisations undertake the following steps to ensure compliance:

Conduct an AI risk assessment: Identify AI models that fall under high-risk categories.

Implement AI governance frameworks: Establish policies for ethical AI use.

Ensure transparency and documentation: Maintain records of data sources, decisions, and human oversight processes.

Review vendor AI compliance: If using third-party AI tools, verify compliance obligations.

Engage legal & compliance experts: Stay updated on regulatory changes and enforcement timelines.

Final Thoughts: Embracing Responsible AI

The EU AI Act marks a defining moment in AI regulation, setting a precedent for ethical AI governance worldwide. While compliance may be demanding, it also offers businesses the chance to build trust and transparency, essential for long-term success in an AI-driven world.

Organisations that proactively align with the EU AI Act will not only avoid penalties but also enhance their reputation, reduce AI risks, and gain a competitive edge in the global market.

For more information about the services we offer at Cyberfort to help you secure AI contact us at [email protected]

Supply chain cyber security attacks have been in the news throughout the last 12 months. The latest research suggests 47% of organisations suffered a disruptive outage in the past year caused by a breach related to a vendor. In this blog post, Cyberfort’s cyber security professionals discuss where organisations need to focus in 2025 to improve their supply chain cyber security strategies and how they can make themselves more resilient to attack.

What are the main types of supply chain cyber security attacks?

From our experience at Cyberfort, there are two main types of supply chain cyber-attack. Both should be considered high risk, although for different reasons. While both meet the definition of a supply chain attack (compromising or damaging an organisation by targeting less secure elements in its supply chain), each type typically has different targets and threat actor capabilities, and both need to be considered when discussing supply chain cyber security.

Software supply chain attack
Where a piece of technology purchased by the organisation is compromised. This is typically not an attack targeted at an individual end user (though in extreme cases it could be) but rather an opportunity to operate a one-to-many breach. Activities include embedding an exploit into the vendor’s software, which can then be used either by the creators of the breach or by other malicious actors who have purchased use of the exploit to gain access to organisations that utilise the technology, or compromising a third-party data store to gain access to multiple companies’ data stored there.

Direct supply chain attack
Where a malicious actor wants to gain access to an organisation known to have mature processes and cyber security tooling, they may instead seek to compromise a supplier (for example, a marketing agency producing the annual report, a cleaning company providing facilities, or a manufacturer making a small part of an overall solution). These attacks are typically more targeted and have specific goals in mind: for example, compromising a defence prime through a small manufacturer providing a specialist item. The prime will have stringent controls, monitoring and policies; the subcontractor may well be less mature, or at least there may be some human or system trust, as this is a normal way for data and interactions to flow.

Just how big an issue is the threat of cyber-attacks stemming from the supply chain, as a result of an attack on a supplier? Do businesses put enough emphasis on this?

Industry reports suggest software supply chain attacks cost around $46bn in 2023 and are predicted to increase by 200% in the next decade.

The one-to-many payback approach, and the delay between breach and activity, make this an attractive area for malicious actors. Even when made aware of the risk, many businesses have only considered it for new procurements and haven’t applied the same rigour to existing solutions.

Direct supply chain attacks are harder to quantify in value, but anecdotally, around 40% of the incidents we’ve dealt with recently in our incident response activities at Cyberfort have had some element of supply chain compromise. Often this was simply spear phishing from the email account of a company that worked with the victim, meaning both technical (e.g. domains were trusted and emails whitelisted) and human (e.g. “I know Joe, so of course I’ll click on this link”) controls were bypassed.

On a more sophisticated level, we have seen facilities contractors asked to admit individuals, plug in chargers loaded with USB malware, and perform other seemingly harmless activities that underpinned a breach.

What are the main risks here for organisations? How might a cyber-attack on a supplier cause issues for customers?  

The risks here are many and varied: any kind of software can have exploitable vulnerabilities, any service provider can have weaknesses that are exploited, and any subcontractor can be compromised.

The risks range from ransomware and extortion, through data exfiltration and compromise of networks, to sensitive data leaks and denial of service, meaning business disruption, reputational damage and regulatory fines are all potential outcomes.

What can organisations do to reduce the risk, both internally and through working with suppliers?  

The first stage is to understand the suppliers you have in both areas, their cyber maturity, and the requirement for them to disclose incidents. Especially in the case of smaller companies, controls are often lacking and too much trust is placed in employees, with security being an “add-on” job for IT.

Secondly assess, validate and evidence the controls that your supply chain has in place. A simple way to do this is to assess the access they have to your people and environments, and then insist on similar controls being evidenced. Make this a key component of every procurement, whether software or services.

Additionally, make the disclosure of any cyber security incident within the supplier a contractual obligation. Request evidence of penetration testing, vulnerability management and user awareness training (where you can’t get this data, consider the risk before you purchase). Key steps to reduce supply chain security risks should include:

Create ring-fenced and surrounding controls for supply chain access, such as segregated landing zones, highlighting in email messages, and strict policies around supply chain “helping”.

Validate that your emergency patching and crisis testing scenarios include both software supply chain and direct supply chain attacks.

Include suppliers’ email addresses as the senders in your phishing testing, to get your organisation used to the fact that breaches can (and do) occur this way.

Sign off any new procurements with an individual security assessment, conducted with evidence outside of the procurement team.

What steps should suppliers have in place as a minimum? Should this be part of a due diligence process when selecting and reviewing suppliers?  

From our experience at Cyberfort we advise all organisations to take action with the following 8 steps:

Validate your own supply chain; often suppliers and sub-suppliers go down in size and hence in cyber maturity.

Ensure your security controls are appropriate for the level of business risk you’re dealing with.

Migrate to SaaS where possible, and utilise the security packages for an efficient, effective, minimal-effort approach to security management.

Validate and evidence the controls that your suppliers have in place; this need not be your effort, but hold the supplier to account.

Make sure you have Cyber Essentials Plus.

Keep on top of penetration testing and vulnerability management (see the SaaS point above) and keep track of the evidence.

Understand what your customer expects of you in security and compliance, and price this into your solution.

Ask your customer about their controls, likely targets and defences, and find a trusted advisor/partner to help you extrapolate this to the threats you are likely to face.

How can organisations go about monitoring suppliers (and the wider supply chain) to reduce the risk that they will be impacted? Can AI help?  

The challenge with monitoring suppliers (and there are a number of solutions that purport to do this) is that they are typically focused on either: 

Forms completed by the supplier (and the smaller the supplier, the more likely the forms are, either deliberately or through a lack of knowledge, to be completed incorrectly).

Systems that look only at external posture. This matters because indicators of risk can look extensive externally yet be massively reduced by surrounding controls. For example, a supplier having credentials publicly available seems very bad; however, if this is mitigated through MFA, security baselines, certificated logins and device management, the potential risk is reduced. Similarly, if a piece of custom software is in use that communicates in an unusual or legacy way, this may not be recognised as a risk.

AI or machine learning can help here, but it is not the “silver bullet”. It can assist through trend analysis of connections and anomalies, for example, but any flagged anomaly still requires human investigation and analysis, as in the sketch below.
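
As one example of that kind of trend analysis, the sketch below flags a day on which a supplier’s connection volume deviates sharply from its recent baseline. The data, window and threshold are all hypothetical, and a flagged value is a starting point for an analyst, not a verdict.

# Flag an anomalous daily connection count from a supplier using a z-score.
from statistics import mean, stdev

daily_connections = [102, 98, 110, 95, 105, 99, 101, 340]  # hypothetical data

baseline = daily_connections[:-1]
mu, sigma = mean(baseline), stdev(baseline)

latest = daily_connections[-1]
z = (latest - mu) / sigma
if abs(z) > 3:  # a common, but arbitrary, alerting threshold
    print(f"Investigate: {latest} connections is {z:.1f} std devs from baseline")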

The best answer is a combination of validated and evidenced checking, standard accreditations (such as Cyber Essentials Plus), automated software where available and in use, controls and mitigations on the customer side, and contractual requirements to continue to comply and evidence alignment to the required risk levels. However, this can be an arduous task, so it should be combined with appropriate risk governance for every contracted software or purchase, and with segmentation, controls and training across the customer’s networks and resources to identify, report and mitigate the risk.

For more information about our Supply Chain Cyber Security Services, please contact us at [email protected]

It is no secret that AI is at the forefront of technological innovation, reshaping industries, driving efficiencies, and unlocking unprecedented opportunities. From healthcare breakthroughs to personalised customer experiences, AI is transforming how we live and work.

According to Statista, the artificial intelligence (AI) market is projected to grow from £4.8bn to £20.68bn within the next 5 years, reflecting a compound annual growth rate of 27.6%.
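
Quoted growth figures like these are easy to sanity-check: the compound annual growth rate (CAGR) is (end / start) ^ (1 / years) - 1. Taking the numbers above at face value, 27.6% corresponds to roughly six annual compounding periods (for example, 2024 to 2030) rather than five.

# Sanity-check a quoted CAGR from start and end market sizes (GBP billions).
start, end = 4.8, 20.68

def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

print(f"Over 5 periods: {cagr(start, end, 5):.1%}")  # ~33.9%
print(f"Over 6 periods: {cagr(start, end, 6):.1%}")  # ~27.6%, matching the quote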

However, alongside this growth and potential, AI can introduce significant risks: ethical dilemmas, data privacy concerns, and the potential for harm if left unchecked. This dual nature of AI has made governance a critical focus for businesses and regulators alike.

This blog explores the transformative potential of AI, the associated risks, and why governance is essential to ensure AI remains a force for good. It also sets the stage for understanding emerging regulatory frameworks, including the EU AI Act and standards like ISO 42001, designed to guide responsible AI adoption.

What is Artificial Intelligence?

Think about the human brain – a vast, complex, intricate network of billions of neurons working together. These neurons communicate to process information, store memories and, as a result, enable critical thinking. Through past experience and acquired knowledge, the human brain makes decisions and predictions by identifying patterns observed over the course of a lifetime.

Now, consider developing a machine that mimics the human brain’s ability to decide based on reasoning, facts, emotions, and intuition. This is where AI comes into play. Instead of neurons, AI relies on sophisticated algorithms and computational models to think, plan, and make decisions. The algorithms are designed to solve problems and make decisions, while the computational models are there to simulate a particular process based on the AI design purpose, such as mimicking how the brain works.
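
To make the analogy concrete, the sketch below implements the simplest computational model in this family: a single artificial neuron that weighs its inputs, sums them, and ‘fires’ if the result crosses a threshold. The weights and inputs here are arbitrary; real systems stack millions of such units and learn the weights from data.

# A single artificial neuron: a weighted sum of inputs passed through a
# step activation. All values are arbitrary, for illustration only.

def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # "fire" or stay silent

# Two input signals, e.g. features extracted from data
print(neuron([0.9, 0.2], weights=[0.7, -0.4], bias=-0.3))  # -> 1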

With the availability of powerful technologies, AI can enhance the brain’s functionalities by processing large sets of data and executing tasks at a faster rate, all with greater accuracy. It reduces errors and automates tasks, improving efficiency for both companies and people’s lives. While it falls short in emotional decision-making, abstract reasoning, and intuition, the emotional AI market is also witnessing significant growth and is expected to reach £7.10bn within the next 5 years, according to MarketsandMarkets, with giant companies like Microsoft exploring its potential.

The Rise of AI: Opportunities and Challenges

AI as a Transformative Force

AI is no longer the technology of tomorrow — it is here today, powering innovations across multiple sectors, fundamentally reshaping how businesses operate and how societies function. Recent examples of AI’s power in transforming different sectors include:

Healthcare
AI-driven diagnostics are enabling earlier detection of diseases, personalising treatment plans, and optimising resource allocation in hospitals. For example, AI systems are being used to predict patient outcomes, reducing strain on healthcare providers. Stanford Medicine’s study demonstrates that AI algorithms enhance the accuracy of skin cancer diagnoses.

Finance
Fraud detection systems powered by machine learning can identify suspicious transactions in real time, while automated trading platforms leverage AI algorithms to execute trades with precision and speed. Juniper Research forecasts significant growth in AI-enabled financial fraud detection, with cost savings reaching $10.4 billion globally by 2027. Meanwhile, MarketsandMarkets projects the overall global market for AI in finance to grow from USD 38.36 billion in 2024 to USD 190.33 billion by 2030, at a CAGR of 30.6%.

Retail
AI enhances customer experiences by using predictive analytics for inventory management, dynamic pricing, and personalised recommendations based on shopping behaviours. McKinsey highlights that embedding AI in operations can lead to reductions of 20 to 30 percent in inventory.

Manufacturing
Predictive maintenance powered by AI minimises equipment downtime by identifying potential failures before they occur. Deloitte’s infographic outlines the benefits of predictive maintenance, including substantial downtime reduction and cost savings. Automated quality control systems ensure consistent production standards. Elisa IndustrIQ explains how AI-driven quality control enhances product quality and consistency in manufacturing.

Transportation
Autonomous vehicles and AI-driven logistics solutions are optimising supply chains, reducing costs, and improving delivery efficiency. PwC’s 2024 Digital Trends in Operations Survey discusses how AI and other technologies are transforming operations and supply chains.

These applications demonstrate AI’s potential to revolutionise industries, boost productivity, and drive economic growth, while addressing complex challenges such as resource optimisation and scalability.

Risks of Unchecked AI

Despite the transformative potential of AI, there are ethical and practical concerns that can have widespread implications if not addressed effectively. Some of these risks already exist, while others remain hypothetical for now.

Data Privacy Concerns
High-profile breaches have highlighted vulnerabilities in systems that lack robust security measures. AI often requires a large collection of data, potentially including personal information, to function effectively. This raises concerns around consent, data storage, and potential misuse, with high risks of data spillover, repurposing, and long-term data persistence.

Bias and Discrimination
AI systems rely on data for analysis and decision-making. If the data is flawed or biased in any way, then the outcome will reflect those inaccuracies. Poorly trained AI systems can unintentionally reinforce or amplify existing biases, particularly in sensitive areas like hiring, lending, or law enforcement.

Lack of Transparency
Complex AI models, often referred to as “black boxes,” produce decisions that are difficult to interpret. This opacity can erode trust, especially in high-stakes applications such as healthcare diagnostics and criminal justice.

Security Vulnerabilities
AI systems, if not properly secured, can be exploited by cyber criminals to cause operational disruptions, gain unauthorised access to sensitive information, and even endanger human life. Adversarial attacks, where malicious actors manipulate AI inputs to alter outcomes, are a growing concern. At the Black Hat security conference in August 2024, researcher Michael Bargury demonstrated how Microsoft’s AI system, Copilot, could be manipulated for malicious activities. By crafting specific prompts, attackers could transform Copilot into an automated spear-phishing tool, mimicking a user’s writing style to send personalised phishing emails. This highlights the susceptibility of AI models to prompt injection attacks, where adversaries input malicious instructions to alter the system’s behaviour.

Ethical Dilemmas
The deployment of AI in areas such as surveillance or autonomous weaponry raises ethical questions about accountability, societal impact, and potential misuse. A 2024 study highlighted that the integration of AI into autonomous weapons systems poses significant risks to geopolitical stability and threatens the free exchange of ideas in AI research. The study emphasises the ethical challenges of delegating life-and-death decisions to machines, accountability issues, and the potential for unintended consequences in warfare.

Emerging Regulations: Setting the Stage for Responsible AI

AI governance is intended to enable innovation while safeguarding individuals and organisations from potential harm. With growing awareness of the risks and vulnerabilities in AI technology, governments and international bodies are recognising the need for robust AI governance frameworks. The introduction of regulations like the EU AI Act is a testament to the growing focus on balancing innovation with accountability.

This section provides a brief overview of the EU AI Act, which we will explore in greater detail in the next blog of this series, focusing on its goals, risk-based framework, and implications for businesses.

What Is the EU AI Act?

The EU AI Act aims to establish a harmonised regulatory framework for AI, addressing risks while advancing AI technology responsibly. It categorises AI systems into risk levels and sets stringent requirements for high-risk applications. This regulatory approach ensures AI systems operate in ways that respect human rights and societal values, while fostering safe innovation and sustainable growth.

Compliance Timelines for the EU AI Act

April 2021: The European Commission published the draft EU AI Act, marking the start of the legislative journey.
December 2023: The Act was formally adopted by the European Council and Parliament.
Early 2024: Finalised legal text expected to be published in the EU Official Journal.
Mid-2024: The entry into force of the Act, initiating the countdown to compliance deadlines.
2025–2026: A transitional period allowing organisations to prepare for full compliance. Most requirements will likely become enforceable by mid-2026.

These timelines are critical for businesses to understand and plan their AI compliance strategies accordingly.

UK Post-Brexit – Does the EU AI Act Apply?

The EU is not alone in prioritising AI governance. Countries like the UK, US, and Canada are also exploring regulatory initiatives. The UK’s recent signing of the world’s first international AI treaty highlights its commitment to managing AI risks on a global scale, reflecting a shared understanding of the importance of governance in AI development and expressing support for the EU as a leader in promoting trustworthy AI.

Despite Brexit, UK businesses need to be aware of this Act as it can impact their ability to engage with consumers, an area which we will explore further in blog 2.

The Role of Standards in AI Governance – Introducing ISO 42001 and NIST AI RMF

Standards like ISO 42001 and the NIST AI Risk Management Framework (AI RMF) are emerging as key tools for organisations to implement robust governance practices. ISO 42001 provides a structured approach to managing AI risks, focusing on accountability, transparency, and continuous improvement. The NIST AI RMF, on the other hand, is a flexible, iterative methodology for identifying, assessing, and mitigating risks throughout the AI lifecycle.

Both standards complement each other and could be used simultaneously for a more holistic approach to managing AI security. By adopting these standards, organisations can:

  • Proactively address risks and align with emerging regulations.
  • Embed ethical principles into AI systems from inception.
  • Demonstrate a commitment to responsible AI practices, enhancing stakeholder trust.
