By Rob Vann, Chief Solutions Officer at Cyberfort

26th May 2025

AI and cybersecurity are now intrinsically linked in the transportation sector. AI systems must be protected like any other critical asset and used defensively to enhance cyber resilience. Whether safeguarding autonomous vehicles, securing logistics chains, implementing zero-trust frameworks, or preparing for new regulations, the message is clear: cybersecurity is not optional in the age of AI-powered transport. Organisations that embrace this reality will not only avoid the devastating impact of cyberattacks but will also gain a competitive edge by building systems that are secure, intelligent, and resilient by design.

Vulnerability Assessment of Autonomous Vehicles 

Autonomous vehicles (AVs) are among the most high-profile applications of AI. These systems rely on complex sensor arrays, deep learning models, and edge computing to make split-second driving decisions. Ironically, the very autonomy that mitigates life-threatening human error also introduces novel vulnerabilities, requiring new and iterative defensive techniques to be intertwined with the traditional “do the basics well” approach.

Before we even consider sophisticated attacks, AVs interacting with our physical world face a number of environmental and malicious risks. For example, an attacker physically interfering with road signs on a new road could subtly manipulate sensors in locations where GPS data is not available to confirm an upcoming hazard through other systems. Autonomous parcel delivery systems regularly “fall” off kerbs and, ironically, must be “rescued” by kind-hearted human passersby. This is a particular challenge in the UK, where street features such as kerb heights and pavement widths are often many decades old and lack the consistency of more modern, planned town layouts in the US.

Changing laws and environments mean that AVs must be trained to consider, understand, and respond to the actual geographies in which they are currently operating.

Furthermore, in a simpler system, “falling back” to a baseline operating level is often an option. When you lose GPS, you follow the signs; when an assisted vehicle is operating in heavily adverse weather, the system notifies the driver that it is no longer steering, braking, or accelerating, and the driver resumes control. In a truly autonomous transportation system, these failsafes must be carefully considered to encompass all possible outcomes safely and effectively.

The transition to fully autonomous vehicles is another area for consideration. As drivers, we are taught to avoid confrontation; if people are tailgating us, we let them pass and drop back. However, as the percentage of autonomous vehicles increases, inconsiderate or dangerous drivers may “force” AVs to take evasive action, which then creates a chain reaction across the surrounding AVs.

Beyond these “simple” potential challenges (which must be understood, predicted, tested, and mitigated), there’s also the risk of remote code execution to manipulate AVs’ driving behaviours. If a malicious actor gains access to over-the-air update systems, navigation modules, or vehicle-to-infrastructure (V2I) communication channels, they could cause significant disruptions.

To safeguard autonomous vehicles, a layered security approach is essential. This includes AI robustness testing, where models are trained and tested against adversarial inputs across all geographies, systems, and environments they may interact with. Code signing and secure bootloaders ensure that only authenticated software is installed or run. Real-time anomaly detection uses AI to monitor vehicle behaviour for anomalies such as unexpected lane changes, deviations from logistics routes, loss of centralised connectivity, or communication spikes. Additionally, isolating safety-critical systems by separating AI decision-making from infotainment, customer announcements, and other third-party applications limits exposure and enhances security.
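As a purely illustrative sketch (not a description of any production vehicle system), a rule-based behaviour monitor of the kind described above might look like the following; the telemetry fields and thresholds are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class TelemetryFrame:
    speed_mph: float
    lane_offset_m: float      # lateral deviation from lane centre
    route_deviation_m: float  # distance from the planned logistics route
    link_up: bool             # centralised connectivity status

def assess_frame(frame: TelemetryFrame, speed_limit_mph: float) -> list[str]:
    """Return anomaly flags for a single telemetry frame."""
    flags = []
    if frame.speed_mph > speed_limit_mph * 1.1:
        flags.append("overspeed")
    if abs(frame.lane_offset_m) > 1.5:
        flags.append("unexpected_lane_change")
    if frame.route_deviation_m > 500:
        flags.append("route_deviation")
    if not frame.link_up:
        flags.append("connectivity_loss")
    return flags
```

In practice, simple rules like these would sit alongside learned models and feed a fleet-wide monitoring pipeline rather than act alone.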

Ultimately, cybersecurity for AVs must be treated as a primary secure-by-design requirement, not an afterthought.

Securing AI-Powered Supply Chains 

Supply chain attacks span all areas of technology, from compromising core code in autonomous vehicles to transport-specific workflows. Modern logistics networks rely heavily on AI for forecasting, routing, inventory management, and robotic automation, creating extended attack surfaces and inconsistent security postures. AI models trained on sensitive data can be targeted for model inversion attacks, leading to compromised decision-making that reroutes shipments, delays deliveries, or triggers fraudulent inventory movements. Attackers may exploit IoT endpoints, such as connected sensors in logistics warehouses, to deploy ransomware or other malicious agents.

Defensive strategies for AI-powered supply chains include understanding the vast and sprawling nature of supply chains, where small businesses often manufacture critical components but lack enterprise-level defences. End-to-end encryption ensures all data in transit is secure. Federated learning distributes AI training across multiple devices, decentralising data to reduce exposure and improve privacy.
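To make the federated idea concrete in the simplest possible terms: the classic federated-averaging step combines locally trained model parameters, weighted by each site’s data volume, so raw data never leaves the site. This is a minimal sketch of that one step, not a production federated-learning framework:

```python
def federated_average(local_models: list[list[float]],
                      weights: list[int]) -> list[float]:
    """Combine locally trained parameter vectors without pooling raw data.

    local_models: one parameter vector per site (e.g. per warehouse)
    weights: number of training samples each site contributed
    """
    total = sum(weights)
    n_params = len(local_models[0])
    return [
        sum(model[i] * w for model, w in zip(local_models, weights)) / total
        for i in range(n_params)
    ]
```

Each site trains locally, ships only its parameters, and receives the weighted average back; an attacker compromising one endpoint never sees another site’s data.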

Continuous model monitoring by both human and AI systems helps identify data drift, unforeseen use cases, and malicious inference. Threat intelligence sharing among logistics partners enhances detection of supply chain-wide threats. Cybersecurity in supply chains is now a boardroom priority that impacts physical operations.

Zero-Trust Architecture for Connected Transport 

In zero-trust, no device, user, or system is inherently trusted—even if it’s inside the network. Instead, access is granted based on continuous verification of identity, device posture, and contextual risk. With vehicles, roadside infrastructure, and control centres all becoming interconnected, the need for a zero-trust approach in transportation is critical. Traditional perimeter-based security models are no longer sufficient, as the perimeter is constantly shifting, difficult to secure, and hard to monitor and respond to.

Zero Trust is a concept that is often discussed but rarely fully implemented. Critical components for transportation systems include micro-segmentation, which involves dividing networks and systems into separate zones to reduce lateral movement in case of a breach and mitigate the risk of single incidents spreading to impact the entire ecosystem. Identity-centric controls, such as multi-factor authentication (MFA), least-privilege access, and identity governance, are essential. Behavioural analytics play a key role in Zero Trust by using AI to identify deviations in access patterns or operational behaviour.

Policy automation, driven by AI, dynamically adapts access controls based on real-time risk assessments, ensuring a robust and responsive security posture. In transportation, we should remember that a Zero Trust Approach doesn’t just improve a system’s defensive posture, it boosts operational resilience and improves performance by containing incidents as early as possible, before they spread across complex transport ecosystems.
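A hypothetical, heavily simplified policy engine can make this concrete. The thresholds, signal names, and decision values below are illustrative assumptions, not any standard:

```python
def access_decision(identity_verified: bool, device_compliant: bool,
                    behaviour_risk: float) -> str:
    """Continuously evaluated, identity-centric access decision.

    behaviour_risk: 0.0 (normal) to 1.0 (highly anomalous), e.g. produced
    by behavioural analytics over recent access patterns.
    """
    if not identity_verified:
        return "deny"
    if not device_compliant or behaviour_risk >= 0.8:
        return "deny"
    if behaviour_risk >= 0.4:
        return "step_up_mfa"  # challenge rather than trust the session
    return "allow"
```

The key property is that the decision is re-evaluated per request from live signals, rather than granted once at the network perimeter.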

Regulatory Horizons

As the transportation industry evolves, so too does its regulatory landscape. Governments and international bodies are responding with updated mandates around cybersecurity, AI ethics, and data privacy. Within a global supply chain supporting the development and rollout of AVs, the platforms and systems are often common, but regulatory frameworks, legal requirements, areas of responsibility, and regulatory fines are more often territory or country-driven. These include privacy, security, design, and safety, and care should be taken to consider the systems’ likely and possible use areas rather than defaulting to primary sold markets.

Examples of regulation include UN Regulation No. 155 (Cybersecurity Management System), which now applies to all new vehicle types sold in many countries and mandates robust cybersecurity risk management for automakers. The EU’s NIS2 Directive expands security obligations for operators of essential services, including transportation. In the UK, the Automated Vehicles Act sets a legal framework for self-driving vehicles and their use within the UK, projected to be in use on public roads from 2026. In the US, the National Highway Traffic Safety Administration (NHTSA) has proposed a voluntary program and is expected to introduce AI-specific safety standards for autonomous driving systems.

Organisations can stay ahead of compliance by conducting comprehensive cybersecurity maturity assessments, implementing AI compliance standards and ethics frameworks to ensure fairness and explainability, and participating in public-private collaboration initiatives to stay informed on evolving threats and best practices. Regulatory alignment isn’t just about avoiding penalties; it builds trust with customers and stakeholders to enable maximum leverage and minimum exposure in a high-risk, high-reward industry.

Real-World Examples

Recent incidents underscore the importance of robust cybersecurity measures in transportation. These include AI-driven ransomware attacks on logistics companies, the disruption of AVs, and the targeting of customer information and safety systems.

Autonomous Vehicle Confusion
Autonomous vehicle confusion has been demonstrated through two intriguing proofs of concept. Firstly, researchers have shown that Tesla’s AI vision system can be deceived using adversarial examples on the road, such as fake lane markings or speed limit signs. This underscores the necessity for robust sensor fusion and constant validation of model outputs to ensure safety and reliability.

Secondly, an experiment conducted by one of Cyberfort’s own researchers involved adding a 60mph speed sign to the rear window of a parked car. On private land, a semi-autonomous vehicle was driven past the parked car. The experiment was repeated with three different vehicles: two quickly accelerated in response to the perceived speed limit change from 5mph to 60mph, while the third was unaffected. These results were consistently replicable, highlighting genuine vulnerabilities in AV perception and the critical need for continuous improvement in AI robustness and sensor accuracy to keep AI-driven systems reliable on the road.

Key Lessons
To ensure robust cyber resilience, it’s crucial to adopt a holistic security approach that encompasses everything from physical sensors to cloud-based analytics platforms. Defensive AI technologies, such as behavioural analytics, anomaly detection, and automated response systems, play a vital role in identifying and containing threats in real time. However, the growing threat of offensive AI, which attackers use for reconnaissance, phishing personalisation, and identifying zero-day vulnerabilities, underscores the need for continuous vigilance and advanced security measures.

Glen Williams, Cyberfort CEO, speaks about how we embrace neurodiversity in the Cyber Security industry

11th April 2025

As the cybersecurity industry faces unprecedented challenges – with approximately 1.5 million attacks occurring globally each day and increasingly sophisticated AI-driven threats – we simultaneously confront a persistent skills shortage. This paradox presents a critical question: how can we defend against escalating threats with insufficient talent? 

The answer may lie in a resource many organisations consistently overlook: neurodiverse talent. 

At Cyberfort, we’ve discovered that embracing neurodiversity isn’t just a social responsibility initiative – it’s a competitive advantage that directly addresses our industry’s most pressing challenges. The unique cognitive approaches and exceptional pattern recognition abilities often associated with neurodiversity align perfectly with the skills required for effective cybersecurity work. 

The Perfect Match: Neurodiversity & Cybersecurity 

Neurodiversity encompasses conditions including autism spectrum disorder, ADHD, dyslexia, and others that represent variations in how the human brain processes information. These differences – far from being limitations – often manifest as heightened abilities in critical cybersecurity functions. 

In penetration testing and SOC analysis particularly, neurodiverse team members frequently demonstrate exceptional attention to detail, pattern recognition capabilities, and persistence that their neurotypical colleagues may not possess in equal measure.

These individuals can identify vulnerabilities and detect anomalies that others might miss – a crucial advantage against adversaries using increasingly sophisticated techniques. 

This is why neurodiversity initiatives shouldn’t be classified merely as diversity programmes. They represent access to specialist skills that directly improve security outcomes. In an industry where overlooking a single vulnerability can lead to catastrophic breaches, these cognitive differences translate into tangible business value. 

From Concept To Implementation 

Transforming neurodiversity from concept to operational reality requires practical adjustments that remove barriers without lowering standards. At Cyberfort, our approach includes: 

Rethinking recruitment: We send interview questions in advance, allow candidates to turn cameras off during video interviews, and focus on skills demonstration rather than social performance.

Workplace accommodations: Creating flexible environments where colleagues can step out of meetings when needed without stigma, offering noise-cancelling headphones or quiet spaces, and providing clear, direct communication.

Career development: Establishing specialised development paths that capitalise on unique strengths while providing support for areas of difficulty. 

These changes haven’t required massive investment or organisational overhaul – just thoughtful consideration of how traditional workplace practices might inadvertently exclude exceptional talent. 

Learning From Global Approaches 

The UK has significant room for improvement in how we identify and develop neurodiverse talent. Other cultures often do better at recognising these differences early and directing individuals toward fields where their unique abilities can flourish rather than attempting to make everyone conform to a single neurotypical standard. 

The Buckland Report, published approximately a year ago, offers valuable recommendations for employers seeking to better employ neurodiverse people. Its evidence-based approach provides a roadmap for organisations looking to implement effective neurodiversity programmes. 

Beyond Social Responsibility 

While the social benefits of neurodiversity inclusion are significant, the business case is equally compelling. In an industry facing critical talent shortages, organisations that effectively tap into neurodiverse talent pools gain access to capabilities their competitors lack. 

Our experience at Cyberfort demonstrates that meritocracy and inclusion aren’t competing values – they’re complementary. In many cases, the best people for cybersecurity roles are neurodiverse. 

The Path Forward 

As cyber threats continue evolving in complexity and scale, particularly with AI driving exponential growth in attack volumes, the need for diverse thinking in our defensive capabilities becomes increasingly critical. Organisations that successfully implement neurodiversity programmes will find themselves better equipped to meet these challenges. 

For the cybersecurity industry and UK businesses more broadly, embracing neurodiversity represents both an ethical imperative and a strategic opportunity. By removing unnecessary barriers to neurodiverse talent, we expand our collective defence capabilities while creating more inclusive workplaces. 

In the race to secure increasingly complex systems against increasingly sophisticated adversaries, neurodiversity may prove to be the advantage that makes the difference. 

Written by Glen Williams, Cyberfort CEO

10th April 2025

It’s not just about doing the right thing – it’s about building stronger technical capabilities.

In an industry facing a persistent skills shortage, cybersecurity companies cannot afford to overlook any potential talent pool.

While many organisations implement diversity, equity, and inclusion (DEI) initiatives as broad compliance exercises, at Cyberfort, we’ve taken a more strategic approach by specifically championing neurodiversity – not just as a social good but as a competitive advantage that strengthens our technical capabilities.

Neurodiversity and Merit: Perfect Alignment

I fundamentally believe in meritocracy. I don’t care about someone’s background, gender, or physical attributes; I care about who’s best for the job. That’s precisely why neurodiversity is so important to us: by creating specific accommodations for neurodiverse talent, we’re accessing an exceptional talent pool that others might overlook while simultaneously addressing the industry’s persistent skills gap.

This approach isn’t at odds with merit-based hiring – it enhances it. Without neurodiversity initiatives, many exceptional candidates might never make it through conventional recruitment processes despite possessing the exact skills we need. Traditional interviews often filter out candidates who think differently, even when those differences represent valuable cognitive advantages in cybersecurity roles.

Consider penetration testing or Security Operations Centre (SOC) analysis, where unique cognitive approaches and exceptional attention to detail can make the difference between detecting or missing a sophisticated threat. Many neurodiverse individuals excel at pattern recognition and logical thinking and can focus intensely on complex problems – precisely the skills needed to identify vulnerabilities and anomalies that neurotypical analysts might miss.

Business Impact in Technical Cybersecurity Roles

The business case for neurodiversity in cybersecurity is compelling. Unlike generic DEI initiatives that many companies adopt, we’ve deliberately specialised in becoming leaders in neurodiversity employment. This isn’t just about inclusion – it’s about accessing unique skills that drive better business outcomes.

There’s a reason why many successful entrepreneurs and innovators have ADHD or some form of neurodiversity. The unique thinking styles and problem-solving approaches that come with neurodiversity are particularly valuable in cybersecurity, where unconventional thinking can identify vulnerabilities that others miss.

As cyber threats become increasingly sophisticated, especially AI-driven threats like deepfakes, this cognitive diversity becomes a crucial defence mechanism.

At Cyberfort, we’ve seen tangible benefits from our neurodiversity initiatives including:

Enhanced threat detection capabilities through diverse cognitive approaches

Improved pattern recognition in identifying anomalous activities

Greater innovation in developing security solutions

Reduced skills gaps in critical technical areas 

Increased retention in roles that benefit from deep focus and specialisation

By implementing specific accommodations – such as sending interview questions in advance, allowing candidates to turn cameras off during interviews, and creating flexibility for neurodiverse colleagues to step out of meetings when needed – we’re not lowering standards; we’re removing arbitrary barriers that have nothing to do with job performance.

Neurodiversity Within the DEI Framework

As some organisations reassess their DEI strategies, there’s a risk of abandoning valuable principles while addressing legitimate concerns. While certain DEI initiatives might be perceived as ideologically driven, neurodiversity programmes deliver clear performance benefits that align perfectly with merit-based principles.

The key difference is in how we frame and implement these initiatives. Where many companies implement DEI initiatives as compliance exercises, we’ve taken a more targeted approach that directly enhances our technical capabilities. By focusing specifically on neurodiversity, we’ve created both a more inclusive workplace and stronger security solutions for our clients. It’s a win-win that delivers a measurable business impact.

This doesn’t mean abandoning the broader principles of inclusion, but rather focusing on aspects that directly benefit performance. The Buckland Report provides excellent recommendations for employers looking to better employ neurodiverse people. We’re implementing as many of these as possible because we recognise that the UK needs to do better at getting the best out of neurodiverse talent.

It’s not just about doing the right thing – it’s about building stronger technical capabilities.

Many cultures around the world embrace neurodiversity better than we do in the UK. While our education system often tries to make everyone ‘neurotypical,’ we’re missing opportunities to develop specialised talents. In cybersecurity, these unique cognitive approaches are exactly what we need to stay ahead of increasingly sophisticated threats.

The Future of Technical Talent

As the cybersecurity landscape evolves, the organisations that thrive will be those that can harness diverse thinking to combat diverse threats. Neurodiversity initiatives represent a strategic approach to talent that goes beyond traditional DEI frameworks, focusing specifically on cognitive diversity that drives technical excellence.

By prioritising neurodiversity within our talent strategy, we’re not just being inclusive – we’re building a more capable, innovative, and effective cybersecurity organisation. In an industry where thinking differently isn’t just valuable but essential, neurodiversity isn’t optional – it’s a competitive necessity.

Acquisition strengthens Cyberfort’s ‘buy-and-build’ strategy to significantly grow revenue and become the largest independent UK-based Cyber Security service provider within the next three years.

8th April 2025

Cyberfort, a leading Cyber Security services and solutions provider, today announces the acquisition of ZDL Group. The acquisition is part of Cyberfort’s ongoing ‘buy and build’ strategy as it looks to accelerate growth over the next three years to become one of the largest independent Cyber Security providers in the UK.

ZDL provides a comprehensive range of Cyber Security services to the UK market, including managed security services, penetration testing, ethical hacking, and bespoke Cyber Security training.

With an international, multi-sector customer base exceeding 120 clients and a team of over 40 highly skilled Cyber Security professionals, ZDL’s acquisition significantly expands Cyberfort’s reach, enhancing its ability to serve commercial customers both within and beyond the UK.

Kevin Roberts, Managing Director at ZDL Group added:

“Joining Cyberfort means ZDL customers will have access to a wider range of Cyber Security services to keep their businesses secure, resilient and compliant in an ever changing and complex Cyber Security landscape. When we learned of the growth plans for Cyberfort, as part of its strategy of becoming one of the largest UK based Cyber Security service providers, we decided that the perfect partner for our business would be Cyberfort. We’re very pleased that Cyberfort will continue to provide the highest levels of innovation and service to all of our valued customers and stakeholders.”

Rob Vann, Chief Solutions Officer at Cyberfort, explains how AI is fundamentally changing the threat landscape for cloud environments.

31st March 2025

How is AI fundamentally changing the threat landscape for cloud environments?

This is an interesting question as, of course, AI is a tool that is useful to both good and bad actors. For now, let’s assume we’re focussing on the bad.

Targeted threats have always been more successful (and more expensive) than mass attacks. AI combines the scale and cost of a mass attack with success rates closer to the targeted approach. Specifically in the cloud world, there are multiple techniques where AI can add value, complexity, and ultimately a more successful outcome to an attack.

These range from simple techniques (such as AI used to populate brute-force attacks, or generative AI used to craft convincing targeted access requests), through adaptive malware, where AI is asked to rewrite code to bypass detections, to the more direct use of AI to detect and leverage vulnerable systems, or to identify and exploit organisation-level misconfigurations by scanning, probing, and researching at speed. Perhaps more concerningly, it can apply the same speed and techniques to shared cloud or multi-use APIs, compromising large-scale, one-to-many systems.

AI can also be used to support more targeted approaches, its speed and ability to process data compressing attacks and their outcomes: for example, automating lateral movement, persistence, and privilege-escalation techniques; enabling attackers to quickly identify and acquire high-value data in large cloud storage environments; or editing log files and manipulating other data to hide the breach and hinder its investigation.

To what extent do you think traditional cloud security approaches are becoming obsolete in the face of AI-powered attacks?

The previous answer goes some way to support this. Cyber Security has always been a playing field biased in the attacker’s favour: the attacker only needs to succeed once, while the defender needs to succeed every time.

Many traditional cloud security approaches are not aligned to the scale, speed of execution, and complexity of AI-driven or AI-supported attacks. Perhaps more importantly, much of the benefit that people gain from cloud environments rests on “good enough” security measures, with point-in-time security coming after deployments and a high dependence still placed on human factors.

Traditional approaches often rely heavily on static defences, such as perimeter-based edge protection, fixed rule sets, and predefined access controls. These are designed to guard against known attack vectors and assume a relatively predictable threat landscape. Coupled with reactive specialist resources that need the timeframe of a human interaction to respond to threats, our AI adversaries’ eyes are starting to light up at the possibilities for causing mayhem.

Attacks that previously took days of careful structuring and planning are now executed in seconds. Legacy defences “could”, in theory, address this: if everything were patched and configured correctly all the time, if all resources acted perfectly all the time, and if nothing ever depended on a third party or supply chain, then there might be a chance. The real world of security is very different from this nirvana.

To update a legacy piece of advice, “you don’t have to be the fastest to get away from the bear, you just have to not be the slowest”: in an AI-fuelled attack landscape, there are potentially 1,000 faster, stronger, more aggressive cockroach-sized bears chasing every customer at the same time. You probably won’t even see them before they take you down.

What practical strategies do companies need to adopt to stay ahead of emerging threats in the cloud?

Just like the bad guys, you can augment your defences with AI power as well.

But let’s start by doing the basics well. Move what you can to automation: for example, use infrastructure as code and pipelines with automated testing to remove human configuration errors, automate the execution, validation, and segregation of backups, and continuously test core systems for exploitability. Then focus on the surrounding factors (such as identity) that are often required to breach your systems, and become more aggressive in containing and isolating suspect engagements. Work to the principle of “assume breach”: segregate and aggressively monitor core systems, remove suspect access to buy time to investigate, and restore it if the activity proves benign. Plan how you will keep critical systems operating during these periods, so your services continue even if a key person’s or system’s access is temporarily revoked.
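The “assume breach” contain-then-investigate pattern can be sketched in a few lines; the session structure and verdict values here are illustrative assumptions, not a specific product’s API:

```python
def contain_and_review(session: dict, quarantine: set, investigate) -> str:
    """Assume breach: suspend suspect access first, investigate second,
    and restore access only if the activity proves benign."""
    quarantine.add(session["id"])           # revoke access immediately
    verdict = investigate(session)          # human or automated review
    if verdict == "benign":
        quarantine.discard(session["id"])   # restore legitimate access
        return "restored"
    return "contained"                      # access stays revoked
```

The ordering is the point: containment happens before the verdict, so a real attacker loses access while the investigation runs, and a false positive costs only a short interruption.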

With all this AI talk it’s important to not totally discard the human factor here. A key emphasis should be establishing comprehensive, continuous learning programs to equip your security teams with the knowledge and expertise needed to understand and combat AI-powered threats.  By fostering a culture of ongoing education, organisations can ensure their teams stay ahead of the evolving threat landscape and are prepared to counter sophisticated attacks that exploit AI and machine learning technologies.

Then let’s start to add in some of those AI-level defences.

Firstly, use AI to build proactive defences. Build a private generative AI (please don’t use public systems; you’d be training them on how to attack you), or find an evidenced, secure partner who can train and align a private generative AI to support you, and simply ask it how it would attack you, then plan your defences accordingly. Remember to evidence the removal of your data and learning from the partner’s system, and validate their security before sharing data. This will deliver value in aligning your defences and validating your controls in a digital-twin environment.

Secondly, implement continuous cloud security posture management to flag errors or misconfigurations in near real time, and take advantage of AI to drive your detections. Machine learning that generates anomaly information provides a rich source of “things that could be bad but are definitely different”, sorting through the noise of millions of events to find the ten that are useful.
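A toy version of that noise-reduction step, using a simple statistical deviation score rather than a full machine-learning pipeline, shows the shape of the approach; the input format is an illustrative assumption:

```python
from statistics import mean, stdev

def top_anomalies(event_counts: dict[str, int], k: int = 10) -> list[str]:
    """Rank entities (users, hosts, APIs) by how far their event volume
    deviates from the population baseline, surfacing the few worth review."""
    values = list(event_counts.values())
    mu, sigma = mean(values), stdev(values)
    scored = {
        entity: abs(count - mu) / sigma if sigma else 0.0
        for entity, count in event_counts.items()
    }
    return sorted(scored, key=scored.get, reverse=True)[:k]
```

Real deployments replace the single deviation score with learned baselines per entity and per behaviour, but the principle is the same: rank by “different”, then let analysts (or automation) decide “bad”.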

Thirdly, use AI to drive response actions. This is the final state and should be planned and approached with care, as active automated response can impact business continuity. Under an assume-breach posture, however, it can remove misconfigurations, contain (and release) assets to provide time to investigate, and validate and release benign activities.

As always, security is a double-edged sword: the way to make things most secure is to switch them off and decommission them, but that obviously means you can’t realise any business value from the asset. These types of attack require a different approach: implementing zero trust and continuous CSPM with automated responses. Done properly, this gives you the best of both worlds, responding to AI-driven attacks at AI scale and speed; done without thought, planning, and expert, experienced support, it will potentially create significant business issues.

Are there any real-world examples you could share of how organisations are successfully adapting?

Recently I worked with a customer who had undergone an incident. After the DFIR engagement, they asked us to look at maturing their defences. We helped them to safely take the following actions:

Migrate identity controls for cloud platforms to their corporate IAM system through the use of a PAM solution. This meant that policies, monitoring, and (after planning and testing) automated responses were consistent across all environments.

Integrate testing and remediation into their build pipelines (mitigating the risk of deploying exploitable code).

The integration of their production environment, with the exception of some critical systems that served customers, into the SOAR (security orchestration automation and response) and the building of appropriate playbooks to contain (and release) suspect assets and resources.

Deploy continuous CSPM (cloud security posture management), which was later automated to remediate more than 90% of issues automatically in real time.

Extend their EDR tooling into the production environment.

Provide further training for their people, including sessions specifically focussed on developers and architects, and real-life deepfake video examples for the entire business.
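The automated CSPM remediation mentioned above can be sketched as a small rule engine that detects misconfigurations and applies a fix for each. The rules and configuration fields here are illustrative only, not tied to any real cloud provider API:

```python
# Sketch of rule-driven CSPM remediation over configuration snapshots
# represented as dicts. Rule ids and field names are assumptions.
RULES = [
    # (rule id, predicate that detects the misconfiguration, remediation)
    ("public-bucket",
     lambda c: c.get("public_access"),
     lambda c: c.update(public_access=False)),
    ("open-ssh",
     lambda c: "0.0.0.0/0" in c.get("ssh_ingress", []),
     lambda c: c.update(ssh_ingress=[ip for ip in c["ssh_ingress"]
                                     if ip != "0.0.0.0/0"])),
]

def remediate(config):
    """Apply every matching rule in place; return the ids of issues fixed."""
    fixed = []
    for rule_id, detect, fix in RULES:
        if detect(config):
            fix(config)
            fixed.append(rule_id)
    return fixed
```

Running this continuously against fresh configuration snapshots is what closes the gap between detecting a misconfiguration and removing it in near real time.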

Navigating the Ever-Evolving Threat Landscape

By Glen Williams, CEO, Cyberfort

Cyber Defense e-Magazine (https://www.cyberdefensemagazine.com/) – January 2025 Edition

As we look ahead to 2025, the world of cyber security is set to undergo significant changes. Attackers are becoming increasingly sophisticated in their use of AI, making phishing emails even more convincing and enabling the daunting creation of cloned personal identities.

This shift from traditional identity theft to much more complex techniques poses new challenges for both individuals and businesses. Additionally, the landscape of identity and permissions management is evolving, underscoring the importance of a proactive and comprehensive approach to cyber security. This includes leveraging advanced technology, maintaining continuous monitoring, and fostering a strong culture of security awareness within organisations.

By understanding these emerging threats and preparing accordingly, we can better protect our organisations and ensure a safer digital future. But what will the key trends be as we enter 2025, and how can we all stay ahead of the threat in this ever-changing digital world?

Human Error to Increase as Attacks Get “Less Dumb” 

In the past six months, we’ve seen an alarming increase in the use of generative AI by attackers, mirroring techniques that achieve 80% success rates in real-world testing. This technology is being leveraged to craft highly targeted phishing emails, integrating social media and work personas to deceive recipients more effectively. Additionally, the use of deepfake technologies to clone senior individuals and demand that tasks be completed has become more prevalent. This, combined with machine learning, will provide attackers with ‘more likely to succeed’ target lists in 2025, which we will then start to see offered at a premium through marketplaces and affiliate programs. As attacks become more sophisticated, the risk of human error will increase, making it crucial for organizations to enhance their security measures and training programs.

Identity Theft to Be Replaced by Cloning 

2024 saw a significant rise in the use of Open-Source Intelligence (OSINT) and advanced data tools to create clone identities. This trend is expected to continue into 2025, posing a major challenge for identity verification processes.  As these cloned identities grow increasingly comprehensive, verifying legitimacy and ownership will become more challenging. Even traditional challenge-response methods may fail, as both the original and the clone are likely to provide accurate answers. Continuous and rigorous monitoring of identities will be essential to detect and mitigate these threats before they cause harm. 

Evolution of Identity and Permissions 

The concept of ‘zero trust’ has been a hot topic in cybersecurity discussions. However, most organizations are still in the strategy development stage and have not fully implemented zero trust across their IT environments. Even those that have adopted a zero-trust strategy often have not extended it to their cloud and SaaS environments. As we move into next year, hidden permissions assigned manually or explicitly at the account level will become an even bigger opportunity for attackers. Attackers will focus on these exceptions, leaving organizations vulnerable despite a 98% success rate in other areas.

Moreover, the complexity of modern IT environments, with a blend of on-premises, cloud, and hybrid infrastructures, adds to the challenge. Organizations must ensure that their zero-trust policies are comprehensive and cover all aspects of their IT landscape. This includes continuous monitoring and validation of user identities and access privileges. Additionally, the integration of zero trust with other security frameworks and tools will be crucial in creating a robust defence mechanism. As cyber threats evolve, so must the strategies to counter them, making zero trust an ongoing journey rather than a one-time implementation.
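The core zero-trust principle of explicit, per-request permission checks, with no hidden account-level exceptions, can be illustrated with a default-deny check. The policy tuples and names below are hypothetical:

```python
# Minimal zero-trust sketch: every request is evaluated against explicit
# policy on each access; there are no standing account-level grants.
# Identities, resources, and actions are illustrative assumptions.
def is_allowed(policies, identity, resource, action):
    """Default-deny: allow only if an explicit policy grants this exact access."""
    return (identity, resource, action) in policies

policies = {
    ("alice", "billing-db", "read"),
    ("ci-runner", "artifact-store", "write"),
}

print(is_allowed(policies, "alice", "billing-db", "read"))   # True
print(is_allowed(policies, "alice", "billing-db", "write"))  # False
```

The point of the sketch is the default: anything not explicitly granted is denied, which is exactly what manually assigned, account-level exceptions undermine.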

Preparing for the Future 

To prepare for these evolving threats, organizations must adopt a proactive approach to cyber security. This includes investing in advanced threat detection technologies, enhancing employee training programs, and continuously monitoring and updating security protocols.  The key to staying secure in 2025 will be a combination of advanced technology, continuous monitoring, and a culture of security awareness within organizations. By understanding these predictions and taking proactive steps, organizations can better protect themselves against the sophisticated threats that lie ahead. 
