Secure by Design sets out a framework of Principles for delivering digital capability with cyber security and risk management at its core. This blog article explores how two continual assurance measures, Vulnerability Management and Security Controls Testing, implement Principle 9: Embed Continuous Assurance, ensuring that delivery Principles such as Principle 5: Build in Detect and Respond Security and Principle 7: Minimise the Attack Surface remain effective through-life.

Vulnerability Management is a critical component of ongoing security assurance, providing risk owners with continuous evidence that the system’s security controls and capabilities are functioning as intended. This assurance spans the full lifecycle of a system from development to deployment and into ongoing operation.

Security Controls Testing verifies that security controls and capabilities continue to function as intended, especially after deployment and during system operation. Combined, they support the application of Secure by Design, building a resilient security posture.

Key Benefits of Vulnerability Management and Controls Testing

Embedding Secure by Design principles into the development process ensures that activities and controls such as threat modelling, secure coding, continuous testing, access controls, encryption, and monitoring have validation mechanisms in place. In the next section of this article, we explore the key principles for vulnerability management and controls testing, highlighting the benefits organisations can realise by adopting a Secure by Design approach.

Risk Mitigation and Management
Principle 5 emphasises proactively embedding detection and response mechanisms into systems and services during design and development, not as an afterthought. This foundation allows vulnerability management to be proactive, focused on preventing vulnerabilities rather than merely reacting to them. These Secure by Design controls serve as baselines, enabling automated detection of deviations or misconfigurations.

Ongoing vulnerability management, supported by controls testing, ensures that risk mitigation continues to be effective. Vulnerability identification, assessment, and remediation, validated by continuous monitoring, provide risk owners with evidence that controls remain effective against evolving threats.

By documenting vulnerability trends, patch cycles, and remediation effectiveness, organisations can demonstrate compliance with internal security standards and regulatory requirements.

Security Controls Testing confirms that identified security controls remain effective in mitigating risks over time. This provides evidence that risk management remains effective, giving confidence that the security posture is maintained across the system's lifecycle.

Sustaining an excellent security posture after deployment is crucial, as systems can become vulnerable through configuration drift, outdated software, or new threat vectors. Continuous validation through testing identifies where changes may have occurred and provides the opportunity to resolve them, realising several benefits:

• Security measures continue to deliver protection as intended.
• Controls are not bypassed or degraded over time.
• The service continues to mitigate known and emerging risks.

Verifying the operational effectiveness of controls post-deployment ensures that updates, patches, or changes have not compromised system security and that security policies are applied and enforced. This helps to identify deviations from approved baselines or misconfigurations and prevents drift from security standards that could introduce new vulnerabilities.
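The baseline-drift check described above can be sketched in a few lines. This is an illustrative example only: the configuration keys, values, and the `detect_drift` helper are assumptions for the sketch, not any particular product's schema.

```python
# Illustrative sketch: detecting configuration drift against an approved baseline.
# The keys, values, and APPROVED_BASELINE itself are assumptions for the example.

APPROVED_BASELINE = {
    "ssh_root_login": "disabled",
    "tls_min_version": "1.2",
    "audit_logging": "enabled",
}

def detect_drift(observed):
    """Return settings that deviate from the approved baseline."""
    drift = {}
    for key, expected in APPROVED_BASELINE.items():
        actual = observed.get(key, "<missing>")
        if actual != expected:
            drift[key] = {"expected": expected, "actual": actual}
    return drift

# A host where root SSH login was re-enabled and audit logging removed:
print(detect_drift({"ssh_root_login": "enabled", "tls_min_version": "1.2"}))
```

In practice the "observed" side would come from a scanner or configuration management tool; the point is that a recorded baseline makes deviations mechanically detectable.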

Tracking Progress and Maturity
Ongoing vulnerability management and through-life controls testing help track how effectively Secure by Design principles are being implemented across the organisation:

• Trends, gaps, and analysis of recurring issues help refine the secure development lifecycle and ensure continuous improvement.
• Metrics from vulnerability management, such as time to patch, frequency of critical vulnerabilities, or compliance with baseline configurations, support strategic objectives.
• Maturity in Secure by Design adoption can be tracked.
• Gaps in implementation or effectiveness can be identified.
• Processes can be adapted and improved to close those gaps, aligning with continuous improvement.
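As a hedged illustration of the metrics mentioned above (time to patch, frequency of critical vulnerabilities), the following sketch derives them from simple remediation records. The record fields and severity labels are assumptions for the example, not a specific tool's format.

```python
# Illustrative sketch: deriving vulnerability management metrics from simple
# remediation records. Field names and the severity scale are assumptions.
from datetime import date
from statistics import mean

records = [
    {"severity": "critical", "found": date(2024, 1, 3), "patched": date(2024, 1, 5)},
    {"severity": "high",     "found": date(2024, 1, 4), "patched": date(2024, 1, 18)},
    {"severity": "critical", "found": date(2024, 2, 1), "patched": date(2024, 2, 4)},
]

def mean_time_to_patch(recs):
    """Average days between discovery and remediation."""
    return mean((r["patched"] - r["found"]).days for r in recs)

def critical_count(recs):
    """How many critical findings occurred in the reporting period."""
    return sum(1 for r in recs if r["severity"] == "critical")

print(mean_time_to_patch(records))  # (2 + 14 + 3) / 3 days
print(critical_count(records))
```

Tracked over successive reporting periods, figures like these give risk owners a trend line rather than a snapshot.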

Reinforcing Secure by Design Through-Life
Vulnerability management is central to the success of other Principles, supporting the measures adopted by validating that they remain effective or by identifying opportunities for improvement. It covers the 'Detect' part of 'Detect and Respond' security and involves continuously:

• Identifying known weaknesses (e.g., unpatched software, misconfigurations).
• Assessing the risk and severity of those vulnerabilities.
• Prioritising and remediating based on impact.
• Monitoring for signs of exploitation.
• Testing to confirm resolution of vulnerabilities and that they do not reappear.
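The continuous cycle above can be sketched as code. This is a minimal illustration, with hypothetical vulnerability records and a simple impact-based ordering, not a definitive implementation of any scanner's workflow.

```python
# Minimal sketch of the cycle above: triage open findings by impact, then
# verify remediation on rescan. Records and the 0-10 impact scale are
# illustrative assumptions, not a specific scanner's schema.

def triage(vulns):
    """Assess and prioritise: open findings only, highest impact first."""
    open_vulns = [v for v in vulns if not v["resolved"]]
    return sorted(open_vulns, key=lambda v: v["impact"], reverse=True)

def verify_resolution(vuln, rescan_findings):
    """Confirm a remediated vulnerability did not reappear on rescan."""
    return vuln["id"] not in rescan_findings

vulns = [
    {"id": "VULN-A", "impact": 9.8, "resolved": False},
    {"id": "VULN-B", "impact": 5.3, "resolved": False},
    {"id": "VULN-C", "impact": 7.5, "resolved": True},
]
print([v["id"] for v in triage(vulns)])         # remediate VULN-A before VULN-B
print(verify_resolution(vulns[2], {"VULN-B"}))  # True: VULN-C did not reappear
```

The rescan step matters: without it, a regression or a rolled-back patch silently reopens a closed finding.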

Integrating Vulnerability Management with other through-life assurance and operational measures ensures a more robust security management programme. These include:

Controls Testing: Regular testing validates that security controls (like patch management, access controls, logging) are effective in mitigating vulnerabilities and risks.
Logging, Monitoring & Alerting: Vulnerability scanners, SIEM tools, and endpoint detection systems provide real-time visibility into potential threats exploiting known weaknesses.
Incident Detection & Response: When a vulnerability is exploited, fast detection and coordinated response limit damage and prevent recurrence.
Continuous Iteration: Threat landscapes evolve, so vulnerability management must be a continuous process, not a one-time event.

Having minimised the attack surface (Principle 7) during the design and build of the capability, Vulnerability Management and Controls Testing help to identify new attack vectors and validate that the capability remains resistant.

Continuously scanning for and identifying known security weaknesses across systems, applications, and networks detects vulnerabilities early:
• Unnecessary or outdated services and components can be disabled or removed.
• Exposed ports, APIs, or services can be secured, reducing the number of potential entry points and shrinking the attack surface.
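A minimal sketch of the entry-point check described above, assuming a simple port allowlist. Real attack-surface reviews cover far more than open ports (APIs, services, accounts), so treat this as illustrative only.

```python
# Illustrative sketch: flagging listening ports that are not on an approved
# allowlist. The allowlist and scan results are assumptions for the example;
# a real attack-surface review covers far more than open ports.

ALLOWED_PORTS = {22, 443}  # e.g., SSH for administration, HTTPS for the service

def unexpected_exposure(observed_ports):
    """Ports found listening that are not approved entry points."""
    return observed_ports - ALLOWED_PORTS

# A scan finds a forgotten FTP service and a debug endpoint still listening:
print(sorted(unexpected_exposure({22, 443, 21, 8080})))  # [21, 8080]
```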

Vulnerabilities are prioritised based on severity, exploitability, and asset criticality:
• The most easily exploitable issues are addressed first, preventing adversaries from targeting low-effort attack paths.
• Unused or low-utility components that present elevated risk can be removed or replaced.

Vulnerability management often uncovers over-privileged accounts or services, or components running with unnecessary permissions.
• Controls Testing identifies whether gaps exist; by remediating these findings, organisations can enforce the principles of Least Privilege and Minimised Functionality.
• These improvements ensure that only essential capabilities are exposed.

Vulnerability data informs threat models.
• It helps teams understand real-world attack vectors and the likelihood of compromise.
• It supports asset and risk management in focusing mitigation efforts where they matter most.

Ongoing vulnerability assessments ensure newly introduced components do not expand the attack surface unnecessarily. Supported by Controls testing, this validates that updates, patches, and configuration changes have not inadvertently reintroduced risk.

Vulnerability management is not just a technical function; it is a continuous, evidence-based assurance process. When integrated within Secure by Design practices, it provides risk owners with confidence that security measures are both present and effective, supports the detection and resolution of implementation gaps, and helps ensure that systems remain resilient throughout their operational life.

Understanding the key challenges

Vulnerability management plays a crucial role in upholding Principle 5, which emphasises the need for integrated capabilities to detect, respond to, and recover from security incidents, and Principle 7, which advocates reducing the number of exploitable points in a system. In practice, achieving this while managing vulnerabilities is complex, and aligning vulnerability management practices with these principles brings several challenges:

Visibility Gaps & Poorly Defined Ownership and Responsibilities:

  • Challenge: Incomplete asset inventories and unmonitored or unscanned systems make it hard to detect vulnerabilities across the full attack surface. A lack of clarity over who owns which assets or components means users and developers may unknowingly increase the attack surface.
  • Impact: Undetected vulnerabilities in these “blind spots”, if exploited, hinder both detection and timely response. This leads to gaps in vulnerability remediation and attack surface monitoring, as well as misconfigurations, unsafe coding practices, and ignored security guidance.

Integrating Detect and Respond (DR) Tools with Complex and Dynamic IT Environments:

  • Challenge: Modern infrastructures (cloud, containers, microservices) change rapidly, and vulnerability scanners often lack integration with SIEM (Security Information and Event Management) and/or EDR (Endpoint Detection & Response) platforms.
  • Impact: The constant change makes it hard to maintain an up-to-date view of the attack surface and limits the ability to correlate vulnerabilities with active threats or incidents, reducing effectiveness in prioritising or automating responses.

Prioritisation of Risks & Patch Management Delays:

  • Challenge: Security teams may struggle to prioritise which vulnerabilities require immediate attention due to limited context (e.g., threat intelligence, exploitability, asset criticality). Once they have decided on a priority, patching can cause downtime or affect business operations, leading to delays.
  • Impact: Prolongs vulnerability exposure, especially in high-risk systems. Time and resources may be wasted on low-risk issues, while critical threats remain unaddressed.

Outdated Vulnerability Data and Integrating Legacy & Complex System Updates:

  • Challenge: Modifying, updating, or decommissioning older systems often incurs significant cost or disruption. Care must be taken when updating components (e.g., third-party libraries, firmware, operating systems), as updates can break existing functionality or introduce new vulnerabilities. Relying on outdated vulnerability databases or incomplete scanning (e.g., failing to detect zero-days or misconfigurations) compounds the problem. Legacy systems may not have been developed with SbD principles in mind and can have undocumented vulnerabilities.
  • Impact: These systems increase the attack surface and may contain unpatchable vulnerabilities. They can introduce weaknesses or incompatibilities into otherwise secure environments, weakening the ability to proactively detect or prepare for exploitation attempts. It also becomes difficult to ensure that security controls still function post-update.

Organisational Silos:

  • Challenge: Vulnerability management is often handled by separate teams from incident response or threat detection.
  • Impact: Creates communication gaps, slows coordinated response, and leads to disjointed security workflows.

How a specialist Cyber Security Provider can help organisations to address these challenges

Organisations that do not have the in-house skills, expertise, or knowledge to overcome these challenges should engage a specialist cyber security services provider. A reputable provider should have a track record of delivering holistic, managed cyber security services that keep people, data, systems, and technology infrastructure secure, resilient, and compliant. For example, at Cyberfort we provide National Cyber Security Centre assured Consultancy services that leverage our technology, hosting, and Security Operations capabilities to identify and protect against cyber attacks, and to detect and respond to security incidents.

Our Managed services provide vulnerability management that integrates with threat detection capabilities, connecting scanners with SIEM and/or EDR platforms for better context and automation.

  • We use Risk-Based Prioritisation, leveraging common risk and severity scoring methods such as CVSS, asset values, exploit availability, and threat intelligence to prioritise vulnerabilities.
  • We implement continuous monitoring as a shift from periodic scanning to continuous assessment and detection.
  • We break down silos and encourage cross-team collaboration between vulnerability management, SOC, and IT operations.
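To illustrate risk-based prioritisation, the sketch below combines a CVSS base score, an asset-value weighting, and an exploit-availability boost into one composite score. The weighting scheme is an assumption for the example, not a published scoring standard.

```python
# Illustrative risk-based prioritisation: CVSS base score, scaled by asset
# value, boosted when a public exploit exists. The weights are assumptions
# for the sketch, not a published scoring standard.

def risk_score(cvss, asset_value, exploit_available):
    """cvss: 0-10 base score; asset_value: 1-5; x1.5 boost if a public exploit exists."""
    score = cvss * asset_value
    if exploit_available:
        score *= 1.5
    return round(score, 1)

findings = [
    ("Known-exploited flaw on payroll server", risk_score(7.5, 5, True)),
    ("Critical CVE on isolated test VM",       risk_score(9.8, 1, False)),
]
# The payroll finding ranks first despite its lower raw CVSS score:
for name, score in sorted(findings, key=lambda f: f[1], reverse=True):
    print(score, name)
```

The point of the example: context (asset criticality, exploit availability, threat intelligence) can reorder a queue that raw severity scores alone would get wrong.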

Additionally, we reinforce continuous monitoring regimes through proactive and reactive controls testing. Reactive testing is performed in response to risk or incident resolution, providing assurance that controls are in place and effective. Proactive testing of control baselines can be crucial either for identifying control weaknesses that lead to risks, or for mitigating issues before they become risks by validating that controls are effective. While vulnerability management tends to focus on the technology landscape, controls testing can also validate people, process, and procedural controls.

Reactive testing arising from external audits has included reviewing Joiners, Movers, Leavers (JML) processes, identifying issues within the Leavers stage of the current process that were resulting in unrevoked accounts.

Proactive controls testing has been conducted as a gap analysis against expected policy implementations, to ensure conformance by the business and its supporting functions. One example validated that contractors with permission to create and modify code held the correct vetting status, as required by the vetting policy set by the CISO.

In this article Cyberfort security experts discuss why threat modelling is a crucial strategic capability that enables organisations to proactively identify and mitigate cyber risks before they materialise. When embedded within governance and aligned with the UK Government's Secure by Design principles, this capability becomes a repeatable, auditable, and measurable part of the security lifecycle, supporting resilience, trust, and long-term value.

Understanding Threat Modelling and its importance to a Cyber Security strategy

At its core, Threat Modelling is a structured process for identifying potential threats and vulnerabilities to a system, enabling teams to prioritise and implement mitigations before deployment. It is not a one-off audit but a repeatable, analytical exercise that integrates security into the design phase, ensuring that systems are 'Secure by Design'.

Cyberfort understands that in today’s rapidly evolving digital landscape, organisations can no longer afford a reactive security posture. With expanding attack surfaces and increasingly sophisticated threats, businesses face the growing challenge of building resilience and trust into the core of their operations. Threat modelling offers a comprehensive and practical framework to achieve this goal, by providing a systematic process for identifying and addressing design flaws early.

To be effective, threat modelling must begin with a clear understanding of the organisation's digital estate. Comprehensive asset discovery, covering applications, data, APIs, and infrastructure, is essential to minimising the attack surface, and sourcing secure, supported technology products must be at the centre of any digital project. These secure approaches ensure that threat modelling is grounded in reality and supports informed decision-making.

A well-informed threat modelling process relies on a current and accurate understanding of the threat landscape. This begins with sourcing a threat assessment to understand the current threats to the business and its industry, a core activity within the risk-driven approach. Its outputs, probability and impact, are used to generate a threat score, which should directly inform the Data Flow Diagram (DFD) and prioritisation during threat modelling workshops.
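The probability-and-impact scoring described above can be sketched very simply. The 1-to-5 scales and the example threats are illustrative assumptions; use whatever scales your threat assessment defines.

```python
# Sketch of the probability x impact scoring described above, used to rank
# threats ahead of a modelling workshop. The 1-5 scales and example threats
# are illustrative assumptions; use the scales your threat assessment defines.

def threat_score(probability, impact):
    """Both inputs on a 1-5 scale; a higher score is modelled first."""
    return probability * impact

threats = {
    "Credential stuffing against public login": threat_score(4, 4),
    "Insider tampering with audit logs": threat_score(2, 5),
}
for name, score in sorted(threats.items(), key=lambda t: t[1], reverse=True):
    print(score, name)  # 16 first, then 10
```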

Scoping the threat modelling effort should be deliberate and focused. Starting with a manageable, business-critical system allows teams to iterate and build confidence. This supports making changes securely, ensuring that changes are incremental and security is considered early and consistently throughout the lifecycle.

Threat modelling must also integrate with broader security controls frameworks, so that identified threats lead to actionable controls. This reflects the principle to design usable security controls and, where necessary, prompts system redesign to defend in depth and design flexible architectures that can adapt to evolving threats.

Threat modelling outputs inform risk management, enhance SOC capabilities to build in detect and response security, guide architectural decisions, and strengthen third-party risk assessments. These insights also feed into business continuity and disaster recovery planning, helping organisations anticipate threats that could impact critical business functions. This cross-functional integration supports the principle to embed continuous assurance and ensures that security is not a one-time effort but a sustained end to end practice.

Governance – Embedding Threat Modelling

For threat modelling to be sustainable and effective, strong governance must support it. This ensures the activity is not ad hoc but a formalised part of the organisation's security lifecycle, aligned with the principle to create responsibility for cyber security risk.

Integration with risk management and key service functions is another foundation of success. Since threat modelling is fundamentally a risk reduction exercise, it must be closely aligned with the business risk framework. This allows threats to be assessed, prioritised, and tracked effectively.

Organisations should update security policies to mandate threat modelling for all new systems, major changes, and high-risk projects. Minimum requirements should be defined for when and how threat modelling is conducted, with clear roles and responsibilities established. Integrating threat modelling into governance and project gates, such as design reviews and change control boards, ensures it becomes a required control, not an optional activity.

To build confidence and ensure quality, threat models should undergo peer review by experienced security professionals. Checklists and quality criteria help assess completeness and relevance, while periodic audits ensure models remain current. Aligning validation with internal audit and compliance reviews demonstrates due diligence and supports the principle to build in detect and respond security.

Finally, to support scalability and consistency, organisations should adopt structured and automated tools such as Microsoft’s Threat Modelling Tool or OWASP Threat Dragon. These platforms enable repeatable, auditable practices and align with Secure by Design’s call for robust, risk-driven security governance.

Threat Modelling and Shift Left Security

Modern cyber resilience demands that organisations move beyond reactive security and embrace a proactive, risk-based approach, one that identifies and mitigates vulnerabilities early in the development lifecycle. This is the essence of the Shift Left philosophy, and it aligns directly with several Secure by Design principles, including designing usable security controls, making changes securely, and embedding continuous assurance.

By shifting security left, organisations reduce the cost and complexity of remediation while improving the overall quality and resilience of their systems. This proactive posture supports the goal of creating responsibility for cyber security risk across teams, from developers and architects to business leaders and risk owners.

Threat modelling plays a central role in this strategy. By analysing systems during the design phase, organisations can identify potential threats and vulnerabilities before they are coded into production. This early intervention supports the principle to minimise the attack surface and ensures that security is built in from the start.

Integrating threat modelling with vulnerability management creates a powerful feedback loop. Threat models help prioritise which threats and vulnerabilities matter most, based on business impact and exploitability, allowing teams to focus on what truly needs fixing. This supports the principle to adopt a risk-driven approach, ensuring that resources are directed toward the most critical risks.

Moreover, when threat modelling is embedded into agile and DevOps workflows, it enables continuous validation of security assumptions. This aligns with the principle to build in detect and respond security, as teams can monitor for deviations and respond to emerging threats in real time. It also reinforces the importance of defending in depth, by ensuring that multiple layers of controls are considered and implemented from the outset.

Implementing Threat Modelling

Effective threat modelling begins with ensuring the right expertise is in place. Skilled threat modellers are essential to the success of any programme, so organisations should consider investing in certified threat modelling training or broader security architecture courses that include threat modelling components, building internal capability or bringing in experienced threat modelling professionals.

Selecting the right threat modelling methodology is equally important. The framework should align with the organisation’s risk appetite, technical environment, and business goals. Popular methodologies include:

STRIDE-LM, which categorises threats into seven types: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege, and Lateral Movement.

PASTA (Process for Attack Simulation and Threat Analysis) offers a risk-centric approach that simulates attacks and aligns with business impact.

MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) provides a comprehensive, real-world knowledge base of adversary tactics and techniques, helping teams map threats to known behaviours and improve detection and response.
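As a hedged sketch of how a methodology like STRIDE-LM might be applied in practice, the snippet below tags Data Flow Diagram element types with candidate threat categories to seed a workshop. The default mappings are illustrative assumptions, not part of the formal methodology.

```python
# Hedged sketch: tagging Data Flow Diagram (DFD) element types with candidate
# STRIDE-LM categories to seed a workshop. The default mappings here are
# illustrative assumptions, not part of the formal methodology.

STRIDE_LM = [
    "Spoofing", "Tampering", "Repudiation", "Information Disclosure",
    "Denial of Service", "Elevation of Privilege", "Lateral Movement",
]

# Assumed default applicability by element type:
DEFAULTS = {
    "external_entity": {"Spoofing", "Repudiation"},
    "data_flow": {"Tampering", "Information Disclosure", "Denial of Service"},
    "data_store": {"Tampering", "Information Disclosure", "Repudiation"},
    "process": set(STRIDE_LM),  # processes are candidates for every category
}

def candidate_threats(element_type):
    """Return the STRIDE-LM categories to consider for a DFD element type."""
    return DEFAULTS.get(element_type, set())

print(sorted(candidate_threats("data_flow")))
```

A default mapping like this keeps workshops consistent across teams: every element starts from the same candidate list, and analysts then argue categories in or out.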

To scale threat modelling across multiple projects and teams, organisations should leverage automated tools such as Microsoft Threat Modelling Tool or OWASP Threat Dragon. These tools streamline the modelling process, improve consistency, and reduce manual effort, making it easier to embed threat modelling into agile and DevOps workflows and to integrate with technologies like SIEM and continuous assurance platforms.

Senior leadership engagement is critical. Threat modelling must be embedded into governance structures and mandated as part of project lifecycle gates. Executive sponsorship ensures that threat modelling is prioritised, resourced, and aligned with strategic objectives.

Additionally, outputs from threat modelling should be actively consumed by operational teams such as the Security Operations Centre (SOC), which can use them to enhance threat detection and monitoring; Incident Management, which can develop response playbooks based on modelled scenarios; and Business Continuity and Resilience teams, which can ensure continuity plans address realistic threat vectors and create appropriate business continuity plans. This cross-functional integration ensures that threat modelling insights are actionable and drive improvements across detection, response, and recovery capabilities.

Why Use a Specialised Threat Modelling Consultancy?

As organisations weigh up the decision to implement threat modelling, one crucial consideration is whether to build the capability in-house or to engage a specialised consultancy like Cyberfort. While internal teams bring valuable domain knowledge, engaging a specialist consultancy offers several distinct advantages. 

Specialised consultants bring deep expertise in both the technical and procedural aspects of threat modelling. They will have typically worked across various industries and methodologies, enabling a tailored approach to each client’s unique risk appetite and technical environment. This accelerates implementation and reduces the risk of error or oversight.

A specialist cyber security provider can also offer an objective perspective, which is essential when analysing complex systems. Internal teams may inadvertently overlook critical threats due to familiarity bias. Trained and experienced consultants conduct rigorous, unbiased assessments, identifying gaps that might otherwise go unnoticed.

Additionally, an experienced consultancy partner will be adept at integrating threat modelling into governance structures and development workflows (Shift Left), ensuring it becomes a sustainable practice rather than a one-off project. They can provide the tools, templates, and training to build internal competency.

For many organisations, particularly those with limited security architecture expertise, this efficiency can mean the difference between a theoretical exercise and a practical, value-driven programme. At Cyberfort we can do more than guide implementation; we become a strategic partner in building a mature, proactive security posture.
