Supply Chain cyber security attacks have been in the news throughout the last 12 months. Latest research suggests 47% of organisations suffered a disruptive outage over the last year from a breach related to a vendor. In this blog post Cyberfort cyber security professionals discuss where organisations need to focus in 2025 to improve their supply chain cyber security strategies and how they can make themselves more resilient to attack.

What are the main types of supply chain cyber security attacks?

From our experience at Cyberfort there are two main different types of supply chain cyber-attack. Both should be considered high risk, although for different reasons. While both meet the definition of supply chain attack (compromising or damaging an organisation by targeting less secure elements in the supply chain) each type of attack typically has different targets and threat actor capabilities and need to be considered when discussing supply chain cyber security. 

Software supply chain attack
Where a piece of technology purchased by the organisation is compromised, this is typically not a targeted attack at an individual end user (though in extreme cases it could be) but rather an opportunity to operate a one-to-many breach. This could include activities such as embedding an exploit into the vendors software, this can be used either by the creators of the breach, or by other malicious actors that have purchased the use of the exploit to gain access into organisations that utilise this technology or compromising a third-party data store to gain access to multiple company’s data stored there.

Direct supply chain attack
In the event that a malicious actor wants to gain access to an organisation that is known to have mature processes and cyber security tooling, they may instead seek to compromise a supplier (for example a marketing agency producing the annual report, a cleaning company providing facilities, or a manufacturer making a small part of an overall solution). These attacks are typically more targeted and have specific goals in mind, for example compromising a defence prime through a small manufacturer providing a specialist item – the prime will have stringent controls, monitoring and policies, the sub may well be less mature, or at least there may be some human or system trust as this is a normal way for data and interactions to flow.

Just how big an issue is the threat of cyber-attacks stemming from the supply chain, as a result of an attack on a supplier? Do businesses put enough emphasis on this?

Industry reports suggest software supply chain attacks cost around $46Bn in 2023 and are predicted to increase by 200% in the next decade. 

The one-to-many payback approach, and the delay between breach and activity make this an attractive area for malicious actors. Even when made aware of the risk, many businesses have only considered the risk for new procurements and haven’t adopted the same rigour with existing solutions. 

Direct supply chain attacks are harder to quantify in a value number, but anecdotally from our incident response activities at Cyberfort around 40% of incidents we’ve dealt with recently have had some element of supply chain compromise. Even if this was simply spear phishing from a company email that worked together with the victim, and hence both technical (e.g. domains were trusted and emails whitelisted) and human (e.g. “I know joe, so of course I’ll click on this link) controls were bypassed. 

On a more sophisticated level, we have seen facilities contractors asked to admit individuals, plug in chargers with usb malware in them, and other seemingly harmless activities that underpinned a breach.

What are the main risks here for organisations? How might a cyber-attack on a supplier cause issues for customers?  

The risks here are many and varied, any kind of software can have vulnerable exploitability, any service provider can have weaknesses that are exploited, and any subcontractor can be compromised. 

The risks range from ransomware and extortion, through data exfiltration and compromise of networks to sensitive data leaks and denial of service – meaning business disruption, reputational damage and regulatory fines are all a potential outcome.

What can organisations do to reduce the risk, both internally and through working with suppliers?  

The first stage is to understand the suppliers you have in both areas, their cyber maturity and the requirement for them to disclose incidents. Especially in the case of smaller companies, controls are often lacking and there is too much trust placed in employees, with security being an “add-on” job for IT.

Secondly assess, validate and evidence the controls that your supply chain has in place. A simple way to do this is to assess the access they have to your people and environments, and then insist on similar controls being evidenced. Make this a key component of every procurement, whether software or services.

Additionally, make the disclosure of any cyber security incident within the supplier a contractual obligation. Request evidence of penetration testing, vulnerability management and user awareness training (where you can’t get this data, consider the risk before you purchase). Key steps to reduce supply chain security risks should include:

Create ring fenced and surrounding controls for supply chain access, such as segregated landing zones, highlighting in email messages, and strict policies around supply chain “helping”.

Validate your emergency patching and crisis scenario testing scenarios to include both software supply chain and direct supply chain attacks.

Include suppliers email addresses in your phishing testing, as the senders, get your organisation used to the fact that breaches can (and do) occur this way.

Sign off any new procurements with an individual security assessment, conducted with evidence outside of the procurement team.

What steps should suppliers have in place as a minimum? Should this be part of a due diligence process when selecting and reviewing suppliers?  

From our experience at Cyberfort we advise all organisations to take action with the following 8 steps:

Validate your own supply chain, often suppliers and sub suppliers go down in size and hence in cyber maturity.

Ensure your security controls are appropriate for the level of business risk you’re dealing with.

Migrate to SaaS where possible, utilise the security packages for an efficient and effective minimal effort approach to security management.

Validate and evidence the controls that your suppliers have in place, it’s not your effort but hold the supplier to account.

Make sure you have cyber essentials plus.

Keep on top of pen testing and VM (see SaaS point above) and keep track of evidence.

Understand what your customer expects of you in security and compliance, and price this into your solution.

Ask your customer about their controls, likely targets and defences, find a trusted advisor/partner to help you extrapolate this to the threats you are likely to face.

How can organisations go about monitoring suppliers (and the wider supply chain) to reduce the risk that they will be impacted? Can AI help?  

The challenge with monitoring suppliers (and there are a number of solutions that purport to do this) is that they are typically focused on either: 

Forms completed by the supplier (and the smaller they are, the more likely they are to either deliberately or through a lack of knowledge not be completed correctly). 

Systems that look only at external posture. This is important as indicators of risk can be extensive externally but massively reduced through surrounding controls. For example, a supplier having credentials available publicly seems very bad, however if this is mitigated through MFA, security baselines, certificated logins and device management, the potential risk is reduced. Similarly, if a piece of custom software is in use that communicates in an unusual or legacy way, this may not be recognised as a risk.

AI or machine learning can help here but it is not the “silver bullet”. It can help through trend analysis of connections and anomalies for example, but this requires human investigation and analysis of the anomaly.

The best answer is a combination of validated and evidenced checking, standard accreditations (such as cyber essentials plus) automated software where available and in use, controls and mitigations in the customer, and contractual requirements to continue to comply and evidence alignment to the required risk levels. However, this can be an arduous task so this should be combined with appropriate risk governance for every contracted software or purchase, and segmentation, controls and training for the customers networks and resources to identify, report and mitigate the risk.

For more information about our Supply Chain Cyber Security Services, please contact us at [email protected]

It is no secret AI is at the forefront of technological innovation, reshaping industries, driving efficiencies, and unlocking unprecedented opportunities. From healthcare breakthroughs to personalised customer experiences, AI is transforming how we live and work.

According to Statista, the Artificial intelligence (AI) market is projected to grow from £4.8bn to £20.68bn within the next 5 years, reflecting a compound annual growth rate of 27.6%.

However, alongside  this growth and AI’s potential, AI can introduce significant risks—ethical dilemmas, data privacy concerns, and the potential for harm if left unchecked. This dual nature of AI has made governance a critical focus for businesses and regulators alike.

This blog explores the transformative potential of AI, the associated risks, and why governance is essential to ensure AI remains a force for good. It also sets the stage for understanding emerging regulatory frameworks, including the EU AI Act and standards like ISO 42001, designed to guide responsible AI adoption.

What is Artificial Intelligence?

Think about the human brain – a vast, complex,  intricate network of billions and trillions of neurons working together. These neurons communicate to process information, store memories, and as a result, enable critical thinking. Through past experiences and knowledge it acquires, the human brain is able to make decisions and come up with predictions by identifying patterns observed over the course of a lifetime.

Now, consider developing a machine that mimics the human brain’s ability to decide based on reasoning, facts, emotions, and intuition. This is where AI comes into play. Instead of neurons, AI relies on sophisticated algorithms and computational models to think, plan, and make decisions. The algorithms are designed to solve problems and make decisions, while the computational models are there to simulate a particular process based on the AI design purpose, such as mimicking how the brain works.

With the availability of powerful technologies, AI is capable of enhancing the brain’s functionalities by processing large sets of data, executing tasks at a faster rate, all with greater accuracy. It reduces errors and automates tasks, improving efficiency for both companies and people’s lives. While it falls short in emotional decision-making, abstract reasoning, and intuition, the emotional AI market is also witnessing a significant growth that is expected to reach £7.10bn within the next 5 years – According to Markets and Markets – with giant companies like Microsoft exploring its potential.

The Rise of AI: Opportunities and Challenges

AI as a Transformative Force

AI is no longer the technology of tomorrow — it is here today, powering innovations across multiple sectors, fundamentally reshaping how businesses operate and how societies function. Recent examples of AI’s power in transforming different sectors include:

Healthcare
AI-driven diagnostics are enabling earlier detection of diseases, personalising treatment plans, and optimising resource allocation in hospitals. For example, AI systems are being used to predict patient outcomes, reducing strain on healthcare providers. Stanford Medicine’s study demonstrates that AI algorithms enhance the accuracy of skin cancer diagnoses.

Finance
Fraud detection systems powered by machine learning can identify suspicious transactions in real-time, while automated trading platforms leverage AI algorithms to execute trades with precision and speed. Juniper Research forecasts significant growth in AI-enabled financial fraud detection, with cost savings reaching $10.4 billion globally by 2027. Whilst, MarketsandMarkets projects the overall global AI usage in finance to grow from USD 38.36 billion in 2024 to USD 190.33 billion by 2030, at a CAGR of 30.6%.

Retail
AI enhances customer experiences by using predictive analytics for inventory management, dynamic pricing, and personalised recommendations based on shopping behaviours. McKinsey highlights that embedding AI in operations can lead to reductions of 20 to 30 percent in inventory.

Manufacturing
Predictive maintenance powered by AI minimises equipment downtime by identifying potential failures before they occur. Deloitte’s infographic outlines the benefits of predictive maintenance, including substantial downtime reduction and cost savings. Automated quality control systems ensure consistent production standards. Elisa IndustrIQ explains how AI-driven quality control enhances product quality and consistency in manufacturing.

Transportation
Autonomous vehicles and AI-driven logistics solutions are optimising supply chains, reducing costs, and improving delivery efficiency. PwC’s 2024 Digital Trends in Operations Survey discusses how AI and other technologies are transforming operations and supply chains.

These applications demonstrate AI’s potential to revolutionise industries, boost productivity, and drive economic growth, while addressing complex challenges such as resource optimisation and scalability.

Risks of Unchecked AI

Despite the transformative potential of AI, there are ethical and pragmatic concerns related to AI that can have widespread implications, if not addressed effectively. Some of these risks currently exist, while others remain to be a hypothesis of the future.

Data Privacy Concerns
High-profile breaches have highlighted vulnerabilities in systems that lack robust security measures. AI often requires a large collection of data, potentially including personal information, to function effectively. This raises concerns around consent, data storage, and potential misuse, with high risks of data spillover, repurposing, and long-term data persistence.

Bias and Discrimination
AI systems rely on data for analysis and decision-making. If the data is flawed or biased in any way, then the outcome will reflect those inaccuracies. Poorly trained AI systems can unintentionally reinforce or amplify existing biases, particularly in sensitive areas like hiring, lending, or law enforcement.

Lack of Transparency
Complex AI models, often referred to as “black boxes,” produce decisions that are difficult to interpret. This opacity can erode trust, especially in high-stakes applications such as healthcare diagnostics and criminal justice.

Security Vulnerabilities
AI systems, if not properly secured, can be exploited by cyber criminals to cause operational disruptions, gain unauthorised access to sensitive information, and could affect human life. Adversarial attacks, where malicious actors manipulate AI inputs to alter outcomes, are a growing concern. At the Black Hat security conference in August 2024, researcher Michael Bargury demonstrated how Microsoft’s AI system, Copilot, could be manipulated for malicious activities. By crafting specific prompts, attackers could transform Copilot into an automated spear-phishing tool, mimicking a user’s writing style to send personalized phishing emails. This highlights the susceptibility of AI models to prompt injection attacks, where adversaries input malicious instructions to alter the system’s behaviour.

Ethical Dilemmas
The deployment of AI in areas such as surveillance or autonomous weaponry raises ethical questions about accountability, societal impact, and potential misuse. A 2024 study highlighted that the integration of AI into autonomous weapons systems poses significant risks to geopolitical stability and threatens the free exchange of ideas in AI research. The study emphasises the ethical challenges of delegating life-and-death decisions to machines, accountability issues, and the potential for unintended consequences in warfare.

Emerging Regulations: Setting the Stage for Responsible AI

AI is intended to drive innovation while safeguarding individuals and organisations from potential harm. With the growing awareness of risks and vulnerabilities in AI technology, governments and international bodies are recognising the need for robust AI governance frameworks. The introduction of regulations like the EU AI Act is a testament to the growing focus on balancing innovation with accountability.

This section provides a brief overview of the EU AI Act, which we will explore in greater detail in the next blog of this series, focusing on its goals, risk-based framework, and implications for businesses.

What Is the EU AI Act?

The EU AI Act aims to establish a harmonised regulatory framework for AI, addressing risks while advancing AI technology responsibly. It categorises AI systems into risk levels and sets stringent requirements for high-risk applications. This regulatory approach ensures AI systems operate in ways that respect human rights, societal values, while fostering safe innovation and sustainable growth.

Compliance Timelines for the EU AI Act

April 2021: The European Commission published the draft EU AI Act, marking the start of the legislative journey.
December 2023: The Act was formally adopted by the European Council and Parliament.
Early 2024: Finalised legal text expected to be published in the EU Official Journal.
Mid-2024: The entry into force of the Act, initiating the countdown to compliance deadlines.
2025–2026: A transitional period allowing organisations to prepare for full compliance. Most requirements will likely become enforceable by mid-2026.

These timelines are critical for businesses to understand and plan their AI compliance strategies accordingly.

UK Post-Brexit – Does the EU AI Act Apply?

The EU is not alone in prioritising AI governance. Countries like the UK, US, and Canada are also exploring regulatory initiatives. The UK’s recent signing of the world’s first international AI treaty highlights its commitment to managing AI risks on a global scale, reflecting a shared understanding of the importance of governance in AI development and expressing support for the EU as a leader in promoting trustworthy AI.

Despite Brexit, UK businesses need to be aware of this Act as it can impact their ability to engage with consumers, an area which we will explore further in blog 2.

The Role of Standards in AI Governance – Introducing ISO 42001 and NIST AI RMF

Standards like ISO 42001 and the NIST AI Risk Management Framework (AI RMF) are emerging as key tools for organisations to implement robust governance practices. The ISO 42001 is a structured approach to managing AI risks, focusing on accountability, transparency, and continuous improvement. The NIST AI RMF on the other hand, is a flexible, iterative methodology for identifying, assessing, and mitigating risks throughout the AI lifecycle.

Both standards complement each other and could be used simultaneously for a more holistic approach to managing AI security. By adopting these standards, organisations can:

  • Proactively address risks and align with emerging regulations.
  • Embed ethical principles into AI systems from inception.
  • Demonstrate a commitment to responsible AI practices, enhancing stakeholder trust.

Cyberfort
Privacy Overview

This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.