Cyber threats are evolving at an unprecedented pace, growing more sophisticated and harder to detect. In response, organisations are investing heavily in cutting-edge technologies, from firewalls and encryption to AI-powered threat detection systems. While these tools are essential, there is a growing tendency to rely on technology alone, overlooking a crucial element in the cyber security equation – people.

It is often said that humans are the weakest link in security, but this narrative is outdated and misleading. In reality, people can be the strongest line of defence, when they are properly trained, supported, and empowered. Cyber security is not just a technical challenge; it is a human one. The ability to recognise phishing attempts, follow secure practices, and respond swiftly to incidents often determines whether an attack succeeds or fails.

People are not the weakest link; they are the critical differentiator. At Cyberfort we believe it is time to shift the focus and invest in human resilience as much as technological strength.

Human Factor

According to the 2025 Verizon Data Breach Investigations Report (DBIR), approximately 60% of all confirmed breaches involved a human action, whether it was clicking on a malicious link, falling victim to social engineering, or making an error like misdelivering sensitive data. This statistic underscores a critical truth: while technology plays a vital role in cyber security, human behaviour remains a central factor in both risk and resilience. Rather than viewing people as the problem, organisations must recognise them as a powerful part of the solution. With the right training, awareness, and support, employees can become proactive defenders, identifying threats, reporting anomalies, and making informed decisions that technology alone cannot.

Culture and Behaviour

At the heart of a cyber resilient organisation is a culture that values open communication, psychological safety, and shared responsibility. These cultural traits shape the everyday behaviours that determine how effectively an organisation can prevent, detect, and respond to cyber threats.

Employees are encouraged to report risks, mistakes, or suspicious activity, not punished for doing so. This openness ensures that potential threats are surfaced early and addressed quickly. Silence, often driven by fear of blame, is replaced with transparency and trust.

Mistakes are treated as learning opportunities. By shifting from a blame culture to a learning culture, organisations empower employees to speak up, share insights, and continuously improve. This mindset fosters resilience and agility in the face of evolving threats.

Cyber security is seen as everyone’s job, not just IT’s. When employees understand how their actions impact the organisation’s security, they are more likely to adopt secure behaviours and support one another in doing the same.

Human Judgement vs Technology

Even the most advanced AI systems cannot replicate human intuition. While automated tools are essential for detecting known threats at scale, they often lack the contextual awareness and critical thinking that trained employees bring to the table. A vigilant team member who questions a suspicious email or flags unusual behaviour can catch what algorithms might overlook. Their ability to escalate concerns quickly can mean the difference between a contained incident and a full-scale breach.

Humans provide reasoning, context, and prioritisation, qualities that machines cannot fully emulate. Cyber resilience is not just about identifying threats; it is about balancing risk, cost, and operational impact. These are nuanced decisions that require human understanding and judgment.

Technology is powerful, but it is people who make it effective. Empowered employees are not just part of the defence; they are the heart of it.

Cross-Functional Collaboration

Cyber resilience is not the sole responsibility of the IT or security team; it is a shared effort that spans the entire organisation. Building a truly resilient posture requires cross-functional collaboration, bringing together departments like HR, Legal, Communications, Risk, and Operations. Each team plays a unique and vital role in preparing for, responding to, and recovering from cyber incidents.

• HR ensures that security awareness is embedded into onboarding, training, and culture.
• Legal helps navigate regulatory obligations, breach notification requirements, and liability concerns.
• Communications manages internal and external messaging during a crisis to maintain trust and transparency.
• Operations and Risk assess business impact and coordinate continuity plans.

One of the most effective ways to strengthen this collaboration is through crisis simulations and tabletop exercises. These simulations test not just technical responses, but decision-making, communication, and coordination across teams, turning theory into practice and exposing gaps before real threats strike.

Leadership

Leadership and management play a pivotal role in shaping an organisation’s cyber resilience culture. When leaders actively model good security behaviour, such as using strong passwords, reporting phishing attempts, and following data protection protocols, they send a powerful message – cyber security is everyone’s responsibility. Their actions set the tone from the top, influencing how employees perceive and prioritise security in their daily work.

This leadership commitment must extend to the board level, where cyber security is treated as a strategic business risk, not just a technical issue. Board-level accountability ensures that resilience is embedded into governance, risk management, and long-term planning. When directors ask the right questions and demand regular updates on cyber posture, it reinforces the importance of security across the organisation.

Buy-in from management is not just symbolic; it is strategic. Leaders must champion resilience initiatives, allocate resources for training, and integrate cyber security into broader business goals. They also play a key role in setting behavioural norms, reinforcing secure practices through communication, recognition, and consistent example.

When leadership leads by example, from the boardroom to the front line, cyber resilience becomes part of the culture, not just a compliance checkbox.

From Theory to Practice

Organisational Resilience
A well-trained workforce is not just a support function; it is a frontline defence and a cornerstone of cyber resilience. True resilience is achieved when cyber security is embedded into the values, behaviours, and everyday actions of everyone in the organisation, not just the IT or security teams. This means cultivating a culture where security is second nature, from how emails are handled to how data is shared and stored.

Embedding this mindset requires more than annual training modules. It involves ongoing education, leadership buy-in, and visible reinforcement of secure behaviours. For example, Microsoft has implemented a company-wide security culture program that includes regular phishing simulations, gamified learning experiences, and executive-led security briefings. These initiatives are tailored to different roles and risk levels, ensuring relevance and engagement across the board.

The result? Employees become active participants in defence, spotting threats early, responding appropriately, and reinforcing a culture of vigilance and accountability.

Engaging Training
Cyber security training must go beyond the traditional “check-the-box” approach. To be effective, it needs to be engaging, relevant, and continuous. This means using storytelling, real-world scenarios, interactive simulations, and up-to-date threat examples that resonate with employees’ daily experiences. When training is relatable and dynamic, it not only captures attention but also builds lasting awareness and practical skills.

Effective training empowers staff to detect and respond to threats quickly, reducing the risk of breaches and enabling them to contribute to the development and safe use of new technologies. It also fosters a culture where security is seen as a shared responsibility, not just an IT concern.

A standout example is Google’s Security and Privacy Training Program, which uses gamified learning, phishing simulations, and scenario-based exercises tailored to different roles. Employees are regularly tested with real-time challenges, and the program evolves with emerging threats, keeping security top of mind and skills sharp.

Recognition and Reward
Recognising and rewarding good cyber security behaviour is a powerful way to reinforce a culture of resilience. When employees feel that their efforts to stay secure are noticed and appreciated, they are more likely to remain vigilant and engaged. Celebrating individuals or teams who demonstrate strong cyber hygiene, such as reporting phishing attempts, following secure data handling practices, or contributing to awareness initiatives, helps normalise and encourage these behaviours across the organisation.

Recognition does not have to be complex. It can range from shout-outs in team meetings and internal newsletters to formal awards or incentives. The key is consistency and visibility.

A best-practice example comes from Salesforce, which runs a “Security Champions” program. Employees across departments are nominated for their proactive security efforts and receive public recognition, exclusive training opportunities, and branded rewards. This not only boosts morale but also builds a network of internal advocates who help spread security awareness organically.

By celebrating the right behaviours, organisations reduce human error and strengthen their first line of defence, their people.

Review and Response
Cyber security is most effective when it is treated as a shared responsibility, not just an IT function. One of the most impactful ways to reinforce this is by regularly collecting feedback from employees on what is working, what’s unclear, and where improvements are needed. This two-way dialogue encourages ownership, reinforces learning, and helps build a culture of vigilance and continuous improvement.

Feedback mechanisms can include anonymous surveys, post-training evaluations, suggestion boxes, or open forums during team meetings. The key is to act on the feedback, showing employees that their insights lead to real changes.

A best-practice example comes from PwC, which integrates cyber security feedback loops into its broader risk culture program. After simulations or incidents, employees are invited to share their experiences and suggestions. This feedback is then used to refine training, update policies, and improve response plans. The result is a more engaged workforce and a security strategy that evolves with real-world input.

By listening to employees and responding meaningfully, organisations not only improve their defences but also foster a sense of collective responsibility and trust.

The Rise of Deepfakes

Deepfakes, a portmanteau of “deep learning” and “fake,” refer to synthetic media, primarily videos or audio recordings, generated or altered using artificial intelligence (AI) to depict people saying or doing things they never actually did. While deepfakes began as an entertainment or novelty tool, their growing sophistication has positioned them as a credible threat in the world of cybersecurity.

As organisations strengthen their digital defences against traditional attack vectors such as phishing, malware, and ransomware, deepfakes represent a newer and less-understood frontier: one that leverages AI to manipulate perception, erode trust, and bypass existing safeguards. This section explores the role of deepfakes in cybersecurity, how they are used maliciously, the implications for trust and identity, and the emerging defences and detection strategies within the cyber community.

Deepfakes as a Cyber Threat

The most immediate cybersecurity risk of deepfakes is their use in social engineering attacks. Traditionally, attackers might rely on spoofed emails or fake websites to trick individuals into revealing credentials or transferring funds. Deepfakes take this to a new level by adding highly convincing audio or video to impersonate individuals with significant authority, such as CEOs, CFOs, or even political leaders.

For example, there have already been high-profile cases where attackers used AI-generated voice deepfakes to impersonate executives and instruct employees to transfer money or share sensitive information. In 2019, criminals reportedly used AI voice-cloning software to mimic a chief executive’s voice and trick a senior employee into transferring €220,000 to a fraudulent supplier. The deepfake reproduced not only the voice but also the tone and urgency typical of the real executive, making the attack highly believable.

This kind of deception can bypass traditional email filtering and spam detection technologies, as the attack may take place via phone call or embedded media within a trusted communication channel like Teams, Zoom, or Slack. The threat landscape now includes synthetic impersonation, where deepfake audio or video is used to facilitate business email compromise (BEC), account hijacking, and financial fraud.

Impact on Trust, Identity, and Verification

The emergence of deepfakes challenges one of the foundational assumptions of cybersecurity: trust in verified identity. In both the corporate and public domains, trust in identity is paramount, whether that’s a voice on a call, a face in a video meeting, or a recorded message from a government official.

As deepfake technology becomes more accessible and cheaper to produce, attackers can exploit the “assumed authenticity” of media formats that were once considered difficult to fake. This leads to increased scepticism around the legitimacy of communications, which can paralyse decision-making and slow down operations.

For instance, in crisis scenarios such as ransomware attacks or geopolitical events, misinformation campaigns powered by deepfakes could manipulate public sentiment, incite panic, or create confusion around who is saying what. The implications for information integrity are profound, especially for media organisations, government agencies, and election bodies.

Emerging Defence Mechanisms

Cybersecurity professionals are actively developing and deploying deepfake detection technologies. These typically rely on machine learning models trained to identify artefacts introduced during the synthesis process, such as unnatural blinking, visual inconsistencies, or odd audio intonations. However, this is an arms race. As detection methods evolve, so do the techniques used by attackers to create more seamless fakes.

To counter deepfake threats, organisations are also adopting more robust verification methods, such as:

• Multifactor authentication (MFA) that does not rely on voice or image recognition alone
• Watermarking of legitimate media, which can verify authenticity
• Behavioural biometrics, which consider unique patterns in typing, movement, and interaction
• Zero-trust models where no entity is assumed trustworthy based on one factor alone
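The watermarking idea in the list above can be illustrated with a toy example: attach a keyed signature (here an HMAC) to outbound media so that recipients holding the key can confirm a file came from the organisation and was not altered. Real provenance schemes such as C2PA use public-key signatures and embedded metadata; the Python sketch below shows only the verification principle, and the key handling is purely illustrative.

```python
# Toy sketch of media authenticity via a keyed signature (HMAC-SHA256).
# Illustrative only: real schemes (e.g. C2PA) embed signed provenance
# metadata and use public-key cryptography with proper key management.
import hmac
import hashlib

def sign_media(media: bytes, key: bytes) -> str:
    """Return a hex authenticity tag for the given media bytes."""
    return hmac.new(key, media, hashlib.sha256).hexdigest()

def verify_media(media: bytes, tag: str, key: bytes) -> bool:
    """Check a received file against its tag; False means altered or forged."""
    return hmac.compare_digest(sign_media(media, key), tag)

key = b"org-shared-secret"          # hypothetical key, for illustration only
original = b"...video bytes..."
tag = sign_media(original, key)

assert verify_media(original, tag, key)          # authentic copy passes
assert not verify_media(b"tampered", tag, key)   # altered media fails
```

Even this simplified scheme captures the core benefit: a recipient no longer has to judge authenticity by how a video looks or sounds, only by whether the signature verifies.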

Moreover, security awareness training is evolving to include recognition of deepfakes, helping employees spot red flags, such as unusual requests, voice delays, or background inconsistencies in video.

In the legal and regulatory domain, countries are beginning to address the misuse of synthetic media. Some governments have passed laws targeting the malicious creation and distribution of deepfakes, particularly where these cause reputational or financial harm.

Deepfakes as a Defensive Tool

Interestingly, deepfake technology isn’t solely a threat; it can also be used constructively in cybersecurity. For example, security training platforms have begun using synthetic media to simulate spear-phishing or vishing (voice phishing) attacks in a controlled environment. This allows employees to experience realistic threats without exposing organisations to real-world harm.

Additionally, researchers and red teams can use synthetic media to test the resilience of security controls or authentication mechanisms, uncovering vulnerabilities before attackers do.

Recognising Deepfakes

Deepfakes present a rapidly evolving threat within cybersecurity, one that leverages artificial intelligence to attack not systems, but the very notion of trust and identity. Their use in fraud, misinformation, and impersonation can have significant financial, operational, and reputational impacts on organisations.

The cybersecurity community must respond by combining technological countermeasures, regulatory oversight, and human vigilance. While detection tools are improving, the best defence is a layered one, pairing deepfake awareness with secure communication protocols, behavioural analytics, and identity verification that goes beyond the visual or auditory.
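One concrete form of identity verification beyond the visual or auditory is a time-based one-time password (TOTP, RFC 6238), where both parties derive short-lived codes from a shared secret, something a voice clone cannot reproduce. The minimal Python sketch below uses only the standard library and the common default parameters (SHA-1, six digits, 30-second steps); in practice you would use a vetted library such as pyotp rather than rolling your own.

```python
# Minimal RFC 6238 TOTP sketch: a short-lived code derived from a shared
# secret and the current time. Parameters shown are the common defaults.
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at T=59 with the ASCII secret below,
# the 8-digit SHA-1 code is 94287082.
secret = b"12345678901234567890"
assert totp(secret, for_time=59, digits=8) == "94287082"
```

A caller who can read back the current code for a shared secret has proven something a synthetic voice or face cannot fake.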

In an era where seeing (or hearing) is no longer believing, resilience depends on recognising that authenticity is not a given – it must be proven.

So how do you prove it? How should you and your employees validate that you are talking to a real person? First, give yourself time to think and question. Very little is genuinely urgent to the second, and pausing allows people to apply their analytical brains. All too often this happens only after an incident (“I thought something was off…” “Yes, I can see that now…”). The trick is to give yourself that time to think before the impact.

Five simple steps to identify deepfakes

1. Think about whether the action the person is asking you to take is within the realm of what you would expect from this individual, and whether it complies with your organisation’s policies, regulatory requirements, legal requirements, and ethics.

2. Think about the person’s style. Are there nuances that aren’t present? Do they always say “Hi” or “Good morning”, or always sign off a call with a particular phrase? Do they shorten your name or others’ names?

3. Look carefully for facial anomalies, lip-syncing issues, or odd phrasing or words.

4. Ask an unexpected question, or make a deliberately false statement. If you randomly say “why is your t-shirt green?” when it’s clearly black, a real person will correct you; a deepfake will just continue.

5. Above all, remember that technology is advancing at pace. Even if steps 1–4 all check out, if you are even 1% unsure, verify by calling the person on a known contact method and finding out if it was actually them.
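Purely as an illustration, the five checks above can be captured in a small triage helper. The field names and logic below are hypothetical, not a Cyberfort tool; the point they encode is that any single failed check, or any residual doubt, should route the interaction to out-of-band verification on a known contact method.

```python
# Hypothetical triage helper mirroring the five deepfake checks above.
# Any failed check (or residual doubt) means: verify out of band.
from dataclasses import dataclass

@dataclass
class CallAssessment:
    request_is_expected: bool         # check 1: within policy and normal for this person
    usual_style_present: bool         # check 2: greetings, sign-offs, name shortening
    no_visual_audio_anomalies: bool   # check 3: lip sync, facial artefacts, odd phrasing
    passed_unexpected_question: bool  # check 4: corrected a deliberately wrong statement
    fully_confident: bool             # check 5: zero residual doubt

def must_verify_out_of_band(a: CallAssessment) -> bool:
    """True if the caller should be re-verified via a known contact method."""
    return not all((a.request_is_expected,
                    a.usual_style_present,
                    a.no_visual_audio_anomalies,
                    a.passed_unexpected_question,
                    a.fully_confident))

suspicious = CallAssessment(True, True, True, False, True)
assert must_verify_out_of_band(suspicious)  # failed the unexpected-question test
```

Note the asymmetry in the design: passing every check earns only provisional trust, while failing any one of them is decisive.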

The human brain is a powerful anomaly-detection tool. In most of these incidents, people chose not to use it and suspended their disbelief. Don’t make that choice.

For more information about Cyberfort Detect and Respond services please contact us at [email protected].  
