
Gary Hibberd


A Guide to Intel CPU Vulnerabilities

Intel, the dominant global CPU manufacturer, is constantly striving to eke out more and more performance. In this pursuit of speed, however, several vulnerabilities have crept in, typically involving data leakage. The risk from these vulnerabilities has grown in recent years because, in cloud computing and virtual environments, these CPUs run processes from different users on the same CPU cores, making leakage of one user’s data to another more likely. This article explains the measures Intel has put in place, and the attacks that have thwarted those measures.

CPU Features

Intel Core and Xeon CPUs have many execution cores and high-speed memory units, an architecture that supports parallel process execution. This makes these CPUs very useful in data centre environments where processing power is shared by users, and it has obliged Intel to introduce features that isolate and secure those processes. As noted below, some of the features introduced for speed led directly to unforeseen security vulnerabilities. The features for speed, isolation enforcement and security are detailed below:

  • CPUs use virtual address spaces. The CPU isolates the memory of each process by giving it a dedicated virtual address space. The OS maintains page tables that translate each virtual address to a page table entry (PTE), and the CPU uses the PTE, which holds the physical location of the data. The PTE also holds access control and status bits for the memory location, so the data in the PTE is used to enforce access control. A typical use case is for this access control enforcement to block low-privileged user processes from accessing the more privileged operating system (OS) kernel memory.
  • CPUs also use segregated mapping of virtual-to-physical address blocks. This prevents processes from accessing each other’s memory.
  • CPUs use context switching in pre-emptive multitasking. This is where the OS instructs the CPU to switch to another process after executing the current process for a limited time. The CPU will also switch context when:
    • it receives an interrupt (e.g., local timer interrupt), or,
    • a process switches to another execution domain (e.g., switching to/from the OS kernel).

Upon context switching, the new process will not have access to the memory and registers of previous processes (context).

  • Simultaneous multithreading (SMT) allows multiple threads to execute on the same core simultaneously, sharing the same hardware resources. Software perceives these CPU threads as separate virtual CPU cores. Each thread is isolated from the others and can only access its own allocated virtual address space and registers. Intel CPUs support two simultaneous threads (virtual CPU cores) per physical core.
  • Speculative execution allows the CPU to execute instructions from a single thread in parallel. When an instruction depends on prior unresolved operations, the CPU predicts the outcome and executes it anyway. If the prediction is incorrect, the CPU flushes the incorrect executions and re-executes them to get correct results. Speculative execution is invisible to the software, but its side effects can be observed – this observation led to the vulnerability called Spectre, disclosed in 2018.
  • Single instruction multiple data (SIMD) enables data-level parallelism. A SIMD instruction computes the same function many times with different data. The maximum number of parallel computations depends on the register size and the data type.
  • The CPU has a last-level cache (LLC) shared across execution cores, with an interconnect bus connecting the LLC, the cores and the DRAM. Caches speed up access to repeatedly used data that would otherwise be fetched from slower memory.
  • Each core also has core-private caches: Intel CPUs have an L1 and an L2 cache per core. Each cache level can store several cache lines, and each line can store 64 bytes. These caches sit even closer to the execution units and are therefore even quicker to access. When software accesses a memory location, the CPU uses its address to look it up in the closest cache. If the data is not present at a cache level (a cache miss), the corresponding cache line is fetched from the next level of cache or from the DRAM.
  • The CPU core uses various temporal buffers to optimise micro-operations. When the CPU accesses a memory location that misses the L1 cache, it can use a fill buffer to fetch the cache line data and forward it to dependent operations before bringing the entire cache line into the L1 cache. The store buffer holds the data for memory writes before committing them to the cache.
  • The CPU supports various memory types, configurable by the OS, which enforce caching policy: writeback (WB), write-through (WT), write-protect (WP), write combining (WC) and uncacheable (UC). The latter two memory types are not cached.
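To make the PTE-based access control described above concrete, here is a minimal sketch in C. The bit positions follow the documented x86-64 layout for 4 KiB pages, but the helper names (`user_can_read`, `pte_frame`) are hypothetical, for illustration only:

```c
#include <stdint.h>

/* x86-64 page table entry (PTE) fields for a 4 KiB page (sketch). */
#define PTE_PRESENT   (1ULL << 0)             /* mapping is valid */
#define PTE_WRITABLE  (1ULL << 1)             /* writes allowed */
#define PTE_USER      (1ULL << 2)             /* user-mode access allowed */
#define PTE_NX        (1ULL << 63)            /* no-execute */
#define PTE_ADDR_MASK 0x000FFFFFFFFFF000ULL  /* bits 12-51: physical frame */

/* Hypothetical helper: may a user-mode process read through this PTE?
   This is the check that stops a user process touching kernel pages,
   whose PTEs lack the USER bit. */
static int user_can_read(uint64_t pte) {
    return (pte & PTE_PRESENT) && (pte & PTE_USER);
}

/* Hypothetical helper: extract the physical frame address. */
static uint64_t pte_frame(uint64_t pte) {
    return pte & PTE_ADDR_MASK;
}
```

A kernel mapping (present and writable, but without the USER bit) fails the `user_can_read` check, which is exactly the access control that Meltdown, discussed below, bypasses transiently.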

The above CPU features greatly increase performance, enforce isolation and help prevent data leakage between program threads/processes.
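The 64-byte cache line granularity described above determines which address bits select a line and which select a byte within it – a detail that matters for the cache-based leaks discussed next. A minimal sketch (helper names are illustrative):

```c
#include <stdint.h>

/* With 64-byte cache lines, the low 6 address bits select a byte
   within the line; the remaining bits identify the line itself. */
#define LINE_SIZE   64u
#define OFFSET_BITS 6

static uint64_t cache_line_number(uint64_t addr) {
    return addr >> OFFSET_BITS;        /* which line the address falls in */
}

static uint64_t cache_line_offset(uint64_t addr) {
    return addr & (LINE_SIZE - 1);     /* byte position inside that line */
}
```

Two addresses that differ only in their low 6 bits share a cache line, so fetching either one brings the other into the cache too – the basic observable effect that cache side channels time.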

Let us now discuss the data leakage vulnerabilities that have affected these CPUs since Spectre’s disclosure in 2018:


Spectre exploits misprediction of instructions (control-flow or data-flow prediction). It may result in out-of-bounds data access within the victim’s address space, and sometimes the out-of-bounds access caches the data, which an attacker can then recover from the cache. Disabling speculative execution or partitioning the predictors works as a defence, but unfortunately this requires extensive software and hardware modification and is not perfect. Spectre attacks are not fully resolved and remain an active area of research.
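As an illustration of the kind of code Spectre abuses, below is a sketch of the classic “bounds check bypass” (Spectre v1) gadget. Architecturally the bounds check always holds; on a vulnerable CPU, a mistrained branch predictor can speculatively run the body with an out-of-bounds index, and the secret-dependent load into the probe array leaves a cache footprint the attacker can time. All names are illustrative, and this sketch only shows the vulnerable pattern – it does not itself perform an attack:

```c
#include <stddef.h>
#include <stdint.h>

#define ARRAY_LEN 16
static uint8_t victim_array[ARRAY_LEN];
static uint8_t probe[256 * 64];  /* one 64-byte cache line per byte value */

static uint8_t victim(size_t idx) {
    if (idx < ARRAY_LEN)                       /* the mistrained branch */
        return probe[victim_array[idx] * 64];  /* secret-dependent load:
                                                  which probe line gets
                                                  cached encodes the byte */
    return 0;                                  /* architectural result for
                                                  out-of-bounds idx */
}
```

After the speculative window, the attacker measures which `probe` cache line loads fastest to recover the byte the CPU read transiently, even though `victim` never architecturally returns out-of-bounds data.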


Meltdown bypasses the access control measures within the CPU, enabling a user process to leak data from the OS kernel. Unlike Spectre, Meltdown does not rely on instructions in another address space: from within its own address space, the attacker accesses kernel memory and encodes the data into the cache. The CPU enforces access control by blocking the attacker from reading that memory, but a vulnerable CPU will still forward the kernel data to succeeding instructions, exposing it to the attacker. Other access control mechanisms are also a great target for Meltdown, for example Intel SGX, virtual machine hypervisors and memory protection keys (MPK). Intel has deployed hardware fixes for Meltdown since their 9th generation CPUs.


Microarchitectural data sampling (MDS) showed that transient execution after invalid memory accesses can leak data, exposing the contents of internal temporal buffers. Succeeding instructions sometimes receive stale data from these buffers when the CPU faces an exception, for example a page fault on a memory read due to invalid permissions. An attacker in another process running on the same CPU core can then read these shared buffers. The software defence is to disable SMT and flush the buffers on context switching, and the 10th generation of Intel CPUs onwards has a good record of defending against these types of attack in hardware.


Load value injection (LVI) exploits the same root cause as MDS. The attacker induces a transient fault in the victim’s code, then follows up with a Spectre-style attack to leak data and affect control flow. Again, recent Intel CPUs mitigate LVI attacks.


MMIO/ÆPIC leak vulnerabilities also affect the temporal buffers inside the CPU. Reading from legacy APIC addresses can leak stale data from the super queue, a buffer between the L2 cache and the LLC. Again, flushing the buffers mitigates this attack.

Present Day (August 2023)

From the above list of CPU vulnerabilities, you can see that using the latest CPUs and practising good security hygiene will prevent most attacks.

Unfortunately, there is a new CPU data leakage vulnerability, and again it affects systems where users share CPU cores. The protections above do not help because they do not cover the single instruction multiple data (SIMD) register buffers. These are the buffers targeted by the new attack, called ‘Downfall’, which relies on the CPU ‘gather’ instruction, and they are the source of the leaked data. Worse, due to coding optimisations, these buffers tend to hold the keys when software performs encryption and decryption.

The CPU ‘gather’ instruction pulls non-contiguous memory into a single vector. The CPU uses temporal buffers and speculative reads to optimise this instruction, and unfortunately this allows the instruction to leak data to other processes running on the same core.
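Functionally, ‘gather’ behaves like the scalar loop below, executed as a single instruction. This models an 8-lane 32-bit gather such as AVX2’s vpgatherdd; the function name is hypothetical. It is the microarchitectural buffer that stages these non-contiguous loads that Downfall samples:

```c
#include <stdint.h>
#include <stddef.h>

enum { GATHER_LANES = 8 };  /* 8 x 32-bit lanes in a 256-bit register */

/* Scalar model of a vector gather: each lane independently loads
   base[index[i]], collecting scattered memory into one vector. */
static void gather32(const uint32_t *base, const int32_t *index,
                     uint32_t *dest) {
    for (size_t i = 0; i < GATHER_LANES; i++)
        dest[i] = base[index[i]];
}
```

The real instruction performs all eight loads as one operation, staging the partial results in a shared internal buffer – which is why stale contents of that buffer can cross process boundaries on an unpatched CPU.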

The gather instruction appears to use a temporal buffer shared across sibling CPU threads, and it transiently forwards data belonging to a different process to later dependent instructions. That other process and the process executing ‘gather’ must be running on the same core.

To counteract the threat posed by Downfall, Intel has published a security advisory and released firmware updates. These mitigations come as microcode updates designed to address CVE-2022-40982, which Intel has classified as having “medium severity.”

We recommend that everyone looking after Intel-based hardware applies the microcode patches from Intel.

For more information contact us today.

Other resources

Welcome to CyberFort, your trusted cybersecurity and compliance consultancy in the UK. We specialise in guiding businesses through the complex landscape of cyber risks and regulatory obligations. Our tailored services include risk assessment, security design, compliance audits, incident response, staff training, and regulatory guidance. Count on us to fortify your data protection and ensure legal compliance, safeguarding your business from potential threats.