Featuring Glen Williams, Cyberfort CEO
The world’s cyber battlefield is evolving — and the defenders are still adjusting their footing
In the first quarter of 2025, Kenya’s national cyber-intelligence centre detected an unprecedented 2.5 billion threat events — a figure that dwarfs even the region’s previous highs and reflects a new era of cyber risk. This explosive surge, confirmed by the Communications Authority of Kenya, represents more than a 200 percent increase on the prior quarter, with system vulnerability scans and automated attack traffic leading the rise. Far from being an isolated case, Kenya’s experience is a lens through which to view a rapidly shifting global threat environment.
What distinguishes this moment is the role of artificial intelligence — not as a future risk, but as a present and multiplying force on both sides of the cyber-arms race. Automation, generative-AI tooling, and adaptive attack strategies have compressed the traditional gap between initial compromise and major incident from weeks to mere days, sometimes hours. In practice, this means that what once took criminal groups months to plan and execute can now be launched and scaled almost instantly, making national borders nearly irrelevant.
As Camden Woollven, Group Head of AI Product Marketing at GRC International Group, observes: “What’s happening in Kenya is happening everywhere. Attack volume has exploded, not because there are more hackers, but because AI has made it easy to scale. You don’t need a team anymore. You just need a decent prompt.”
These patterns are not unique to Kenya. From Singapore’s financial sector to critical infrastructure in São Paulo, security teams are reporting similar surges, with AI-driven attacks accelerating the pace, scale, and sophistication of threat activity worldwide. The stakes are rising not only for those on the digital front lines in Nairobi, but for every organisation operating in a globally connected, AI-enabled economy.
The new offence
The dramatic acceleration in attack speed has become one of the defining features of the AI era in cybersecurity. Globally, the “dwell time” — the window between an attacker’s initial access and the deployment of a major payload like ransomware — has fallen from an average of sixty days just a few years ago to less than four days in 2024, according to leading incident response studies. In some documented cases, attackers are able to move from entry to lateral movement across an organisation’s network in under an hour, compressing the window for detection and response to near real time.
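To make the metric concrete, here is a minimal sketch of how an incident-response team might calculate dwell time from case records. The field names, timestamps, and figures are hypothetical, chosen purely for illustration, and are not drawn from the studies cited above.

```python
# Illustrative only: computing "dwell time" per incident from hypothetical
# incident-response records. Real figures come from IR case data, not this sample.
from datetime import datetime
from statistics import median

# Hypothetical incidents: when the attacker got in vs. when the payload landed.
incidents = [
    {"initial_access": "2024-03-01T09:15:00", "payload_deployed": "2024-03-04T02:40:00"},
    {"initial_access": "2024-05-12T22:05:00", "payload_deployed": "2024-05-13T07:30:00"},
    {"initial_access": "2024-08-20T11:00:00", "payload_deployed": "2024-08-22T18:10:00"},
]

def dwell_time_days(incident: dict) -> float:
    """Days between initial access and payload deployment."""
    start = datetime.fromisoformat(incident["initial_access"])
    end = datetime.fromisoformat(incident["payload_deployed"])
    return (end - start).total_seconds() / 86400

durations = [dwell_time_days(i) for i in incidents]
print(f"median dwell time: {median(durations):.1f} days")
```

On this toy data the median comes out near two days, in line with the sub-four-day figure reported for 2024; the point is simply that the metric is a straightforward timestamp delta once intrusions are properly logged.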
“Generative AI has put cybercrime on steroids,” says Glen Williams, CEO of Cyberfort. “What used to take hours now takes minutes. Phishing emails are no longer riddled with spelling errors; they’re polished, persuasive, chillingly accurate, and tailored to the recipient. Deepfakes aren’t science fiction anymore; they’re being used today to bypass voice verifications and deceive finance teams. We’re seeing AI-written malware that rewrites itself in real time to stay ahead of traditional defences. The barriers to entry for cybercrime have collapsed. The result? An arms race where attackers are sprinting – and too many defenders are still tying their laces.”
Central to this shift is the proliferation of generative AI tools and automated “playbooks” that can generate phishing campaigns, malware variants, and social engineering scripts on demand. Malicious actors are increasingly leveraging AI-powered platforms to craft deepfake lures — voice, video, and even interactive chatbots — which make traditional employee awareness and technical filters far less effective.
As Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, explains: “The rise of generative AI has opened new vectors for cyberattacks, fraud and social engineering. Given the pace of AI development, attack methods have also evolved – making it a lot harder for traditional security measures to detect and mitigate threats. Polymorphic malware, for example, can now rewrite its own code to evade detection, slipping past conventional scanners unnoticed. In addition, AI’s ability to produce convincing text, code and even synthetic identities is streamlining phishing campaigns, automating malware creation and helping attackers scan networks for vulnerabilities.”
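To see why signature-based scanning struggles against code that rewrites itself, consider a deliberately harmless sketch: two scripts behave identically, but a trivial byte-level change gives them different hashes, so a blocklist built on file hashes catches one variant and misses the other. These are toy snippets, not malware; real polymorphic code mutates far more aggressively, but the detection gap is the same in kind.

```python
# Illustrative only: why fixed signatures fail against self-rewriting code.
# Two harmless scripts do exactly the same thing, but renaming one variable
# produces a completely different hash, so a hash-based blocklist that
# catches the first variant misses the second.
import hashlib

variant_a = b"total = 1 + 2\nprint(total)\n"
variant_b = b"result = 1 + 2\nprint(result)\n"  # same behaviour, different bytes

known_bad_hashes = {hashlib.sha256(variant_a).hexdigest()}

for name, code in [("variant_a", variant_a), ("variant_b", variant_b)]:
    digest = hashlib.sha256(code).hexdigest()
    verdict = "blocked" if digest in known_bad_hashes else "missed"
    print(f"{name}: {digest[:16]}... -> {verdict}")
```

This is precisely why defenders have shifted towards behavioural and anomaly-based detection rather than static signatures, the cat-and-mouse dynamic Curran describes.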
These synthetic identities are emblematic of the growing sophistication of fraud tactics being employed by malicious actors. “We’re now seeing synthetic identities that are entirely AI-generated – right down to fake biometric data – being used to pass onboarding and Know Your Customer (KYC) checks,” says Doriel Abrahams, principal technologist at Forter. “It’s not just about a stolen ID anymore; attackers are creating convincing digital personas from scratch. These aren’t one-off attempts either. They’re often part of coordinated fraud rings using generative AI to spin up large volumes of believable, seemingly legitimate users.”
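Catching such rings is a layered problem, but a crude sketch illustrates one of the weaker signals fraud teams commonly stack on top of document and biometric checks: velocity, meaning bursts of signups that share an attribute such as a device fingerprint. Everything below (field names, thresholds, data) is hypothetical; this is a generic heuristic, not a description of Forter's or any vendor's actual controls.

```python
# Illustrative only: a crude velocity check to surface coordinated signups.
# Flags bursts of accounts created within a short window that share a device
# fingerprint. One weak signal among many, not a real KYC control.
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical signup events.
signups = [
    {"user": "u1", "device": "fp-9a3", "created": "2025-04-01T10:00:00"},
    {"user": "u2", "device": "fp-9a3", "created": "2025-04-01T10:04:00"},
    {"user": "u3", "device": "fp-9a3", "created": "2025-04-01T10:07:00"},
    {"user": "u4", "device": "fp-77c", "created": "2025-04-01T12:30:00"},
]

WINDOW = timedelta(minutes=15)
THRESHOLD = 3  # signups per device within the window before we flag

by_device = defaultdict(list)
for s in signups:
    by_device[s["device"]].append(datetime.fromisoformat(s["created"]))

for device, times in by_device.items():
    times.sort()
    for i in range(len(times)):
        burst = [t for t in times if times[i] <= t <= times[i] + WINDOW]
        if len(burst) >= THRESHOLD:
            print(f"flag device {device}: {len(burst)} signups within {WINDOW}")
            break
```

The limitation is obvious: AI-generated personas can vary fingerprints, pacing, and documents, which is why single heuristics like this are now only one layer in a broader behavioural defence.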
Business email compromise (BEC) and targeted scams have also moved into a new league, blending deepfakes and automation at scale. Sergei Serdyuk, VP of Product Management at NAKIVO, highlights how the rise of “dark” LLMs is reshaping attacker tactics: “We’re seeing AI models like FraudGPT and WormGPT being actively used on the dark web to generate highly personalised, believable phishing emails, code for new malware, and instructions for exploiting vulnerabilities. These tools let attackers fine-tune their messaging and adapt in real time, making each scam more convincing than the last.”
And, as Jeff Sims, Senior Data Scientist at Infoblox, points out, these capabilities are not just theoretical: “One of the most striking examples they’ve tracked involves a threat actor known as Reckless Rabbit. This group has been targeting Japanese-speaking users with fake investment schemes that incorporate AI-generated deepfake videos of public figures like Elon Musk and Masayoshi Son. These videos are embedded directly into fraudulent websites designed to mimic legitimate news outlets such as Yomiuri Shimbun. This campaign marks a shift from traditional text-based scams to immersive, multimedia deception. It’s a clear example of how generative AI is being weaponised to enhance the credibility and emotional impact of social engineering attacks.”
For businesses and institutions worldwide, the practical result is a daily environment where both the volume and effectiveness of digital attacks are rising — and traditional defences are no longer enough.
Read the article on Business Quarter here: https://businessquarter.substack.com/p/ais-global-cyber-arms-race?r=5lu7lt&triedRedirect=true