A Key Burned into Silicon

By Dylan · 32 min read

Over the past year (2025–2026), I worked on an encrypted data protocol for the longevity biotech space. The business goal: let institutions holding high-value data — think genomic sequences, clinical trial records, aging-related molecular markers — monetize their data by selling computational access, without ever handing over the raw data itself. Data consumers (pharma companies, research teams, AI companies) can run analyses and train models on the data, but never see the plaintext. Once data leaks, you can't un-leak it. Biomedical data involves patient privacy and intellectual property, so "no leaks" isn't an optional security feature — it's the premise the entire business model rests on.

This kind of problem has been explored before. In 2024, TikTok open-sourced a project called PrivacyGo Data Clean Room, later donated to the Linux Foundation's Confidential Computing Consortium and renamed ManaTEE. It does something very similar: data providers store sensitive data encrypted, data consumers submit analysis code through Jupyter Notebooks, and the code runs inside a Confidential VM in the cloud — data is only decrypted within the CPU's hardware-isolated region (TEE, Trusted Execution Environment), invisible even to the operating system and the cloud provider's admins. After computation, only reviewed results are released.

Our protocol architecture is similar to ManaTEE — we also use a two-phase model (exploration phase with de-identified data, production computation inside a TEE) — but with targeted optimizations in storage architecture and on-chain governance. Regardless of architectural differences, though, all systems like this face the same fundamental problem:

The data is encrypted. Decryption keys are held by a key management service running inside a TEE — data providers just submit raw data, and this service handles encryption, key derivation, and key distribution. Now, a remote machine spins up a TEE instance and asks the key management service for the decryption key to run a computation. Why should the key management service hand it over?

This machine could be running in any cloud provider's data center. You don't trust the cloud provider's ops team, you don't trust the Hypervisor, you don't even trust the operating system — any of these could be compromised or abused by insiders. The only thing you can trust is the CPU hardware itself. More precisely, a key that was fused into the CPU at the factory.

This article isn't a comprehensive treatise on TEE theory. What I wanted to do is much narrower: walk through the complete attestation flow — from a fuse key burned into silicon to a Quote being verified — at a granularity where each step actually makes sense. Most of the documentation I encountered during development either stopped at concepts ("hardware proves code integrity") or jumped straight to API calls. The mechanical details in between kept tripping me up, so I decided to write them down.

The secret inside the chip

When a CPU is manufactured, the vendor (Intel or AMD) fuses root keys into the chip. "Fuses" is literal — tiny wires inside the chip are physically burned to permanently write the key. After that, it's physically unalterable and unreadable. No software, no firmware, not even the operating system can extract it. It can only be used indirectly through specific CPU instructions — something like "sign this data with that key." The CPU performs the signing and returns the result, but the key itself never leaves the chip.

Intel fuses two keys. One is the Root Provisioning Key (RPK). Intel records it during manufacturing — meaning Intel knows what this key should be for every CPU it ships. Two sub-keys derived from the RPK show up repeatedly in later steps, so keep them in mind:

  • PCK (Provisioning Certification Key): Used to issue certificates; a critical link in the certificate chain
  • Report HMAC Key: Used to sign the identity report generated by a TEE instance; only the same CPU can verify it

The other is the Root Sealing Key (RSK), used to encrypt data that needs to persist to disk (like state a TEE needs to recover after a restart). After manufacturing, Intel destroys its own copy — Intel itself doesn't know this key. The two keys have clear roles: RPK proves identity to the outside world, RSK protects data internally. As for why two separate fuse keys are needed — that's an interesting question, and I'll come back to it at the end.

AMD's design is simpler. They fuse a single CEK (Chip Endorsement Key) and derive the signing key for attestation reports from it. AMD knows the CEK, so it can directly issue certificates for the derived key.

These fuse keys are where the entire chain of trust begins. Every verification, signature, and certificate that follows traces back here. But a key that can't be exported is useless on its own — you need a mechanism to turn it into proof that a remote verifier can check. The first step of that mechanism happens the moment a TEE instance boots.

What happens when a TEE boots

When a TEE instance (using Intel TDX's Trust Domain as the example) is created, the hardware does one thing first: measurement.

When a physical machine powers on, the CPU starts executing from a fixed address in a ROM chip on the motherboard — this code is the UEFI firmware (or the BIOS on older machines). Before the operating system loads, UEFI is the only software running. It initializes the memory controller, detects hardware devices, and configures the PCIe bus, turning the machine from "a pile of freshly powered silicon" into an environment capable of running an OS. Then UEFI finds the kernel on disk, loads it into memory, and hands over control — only then does the operating system start booting.

A TDX TEE instance has an equivalent component called TDVF (TDX Virtual Firmware). Based on the open-source OVMF (Open Virtual Machine Firmware), it's essentially a virtual machine version of UEFI. When the Hypervisor creates a TD (Trust Domain), TDVF is loaded into the TD's memory as the first code to execute inside the TEE instance. It does the same job as UEFI on a physical machine: initializes the virtual hardware environment, finds the kernel and initrd, sets up the cmdline, then jumps to the kernel — except all of this happens inside the TEE's hardware-isolated region.

TDVF plays a dual role in Measured Boot: it gets measured, and it measures others. When the TD is created, the hardware automatically hashes TDVF's initial memory image and writes it into a register called MRTD, which is then immutable — this ensures the firmware itself hasn't been tampered with.

After TDVF starts, it measures each subsequent component it loads. "Measurement" means computing a SHA-384 hash of the component's binary content and writing the hash into a runtime measurement register (RTMR). The write isn't a simple overwrite — it's an operation called extend:

RTMR[n] = hash(RTMR[n] || new_measurement)

Each extend mixes the new measurement with the current value and re-hashes. This guarantees two properties: irreversibility (you can't reverse-engineer intermediate steps from the final value) and order-sensitivity (loading A then B produces a different result from B then A). Once recorded, nothing can be erased. If direct overwrites were allowed, an attacker could tamper with early measurements late in the boot process, disguising a compromised boot as a legitimate one.
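
The extend rule is small enough to run directly. Here is a minimal sketch in Python, with `hashlib` standing in for the CPU's internal SHA-384 engine, that demonstrates the order-sensitivity property:

```python
import hashlib

def extend(rtmr: bytes, measurement: bytes) -> bytes:
    """One extend step: mix the new measurement into the register and re-hash."""
    return hashlib.sha384(rtmr + measurement).digest()

ZERO = bytes(48)  # registers start as 48 zero bytes (the SHA-384 output size)

a = hashlib.sha384(b"component A").digest()
b = hashlib.sha384(b"component B").digest()

ab = extend(extend(ZERO, a), b)  # load A, then B
ba = extend(extend(ZERO, b), a)  # load B, then A

assert ab != ba  # order-sensitive: a different load order, a different fingerprint
```

Because each step hashes the previous register value together with the new measurement, the final value commits to the entire history, not just the last component.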

TDVF handles three things when loading the operating system: kernel, initrd, and cmdline.

The kernel is the Linux kernel itself — the core of the operating system, responsible for process scheduling, memory management, device drivers, and all other foundational functions.

initrd is a small temporary filesystem loaded into memory before the real root filesystem is mounted (modern Linux actually uses initramfs, but firmware and toolchains still use the name initrd). Why is it needed? The kernel needs to mount the root filesystem to continue booting, but the root filesystem might live on an encrypted disk or network storage that requires special drivers — those drivers are bundled in the initrd. The initrd also contains an /init script, the very first thing the kernel executes after starting, which loads drivers, mounts the real root filesystem, and then hands off control.

In confidential computing, initrd takes on an additional role. The TEE instance needs a system component to perform remote attestation, request decryption keys from the key management service, and mount encrypted storage. This component is called the guest agent — for example, dstack-guest-agent in the dstack framework. Why doesn't it run as a regular system service after systemd starts? Because in confidential computing setups like dstack-os, the root filesystem itself is encrypted — you need the decryption key and encrypted storage mounted before there's even a root filesystem to use. So the guest agent has to run during the initrd phase, launched by the /init script, finishing its work before systemd ever starts. It's boot infrastructure at the same level as udev and modprobe, just with a more critical job in this context.

cmdline (kernel command line) is the set of boot parameters passed to the kernel — for example, root=/dev/sda1 tells the kernel where the root filesystem is, init=/sbin/init specifies the first userspace process, console=ttyS0 controls console output. It's not a "command-line interface" — it's a configuration string that TDVF passes to the kernel at load time.

With those three pieces understood, the boot measurement sequence is straightforward:

  1. TD creation — Hardware automatically measures the TDVF firmware → MRTD = hash(firmware_binary), immutable from this point on
  2. TDVF firmware starts — Firmware measures its own configuration → RTMR[0] = extend(RTMR[0], firmware_config)
  3. TDVF loads kernel — Firmware measures the kernel binary → RTMR[1] = extend(RTMR[1], hash(kernel))
  4. TDVF loads initrd — Firmware measures the initrd → RTMR[1] = extend(RTMR[1], hash(initrd))
  5. TDVF sets cmdline — Firmware measures the kernel command line → RTMR[1] = extend(RTMR[1], hash(cmdline))
  6. Jump to kernel — The kernel starts, runs /init from initrd, and the guest agent quietly starts up during this phase. What happens after it starts — we're getting to that
  7. Application layer startup (optional) — Software continues measuring application-layer components (container images, application binaries, config files, etc.) → RTMR[2] = extend(RTMR[2], hash(app_component)). Unlike the earlier layers, RTMR[0] and RTMR[1] have their measurement contents and ordering standardized by the TDVF firmware spec — everyone knows the rules. RTMR[2] is up to the application layer to decide what to measure and in what order; the verifier needs to know this convention to reproduce the expected values. In practice, this is usually defined by the confidential computing framework — for example, Phala Network's dstack framework extends the hash of the docker-compose file into RTMR[2]
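
As a sketch of what a verifier does to pre-compute an expected value, here are steps 3–5 replayed in Python. This is deliberately simplified: real TDVF measures additional metadata in TCG event formats, so actual RTMR values will not match this toy calculation, but the shape of the computation is the same.

```python
import hashlib

def extend(rtmr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha384(rtmr + measurement).digest()

def expected_rtmr1(kernel: bytes, initrd: bytes, cmdline: bytes) -> bytes:
    """Replay steps 3-5: extend RTMR[1] with each component's hash, in load order."""
    r = bytes(48)  # RTMRs start at zero
    for blob in (kernel, initrd, cmdline):
        r = extend(r, hashlib.sha384(blob).digest())
    return r

# Deterministic: same blobs, same order, same value -- the basis of comparing
# a Quote's measurements against a reproducible build.
assert expected_rtmr1(b"vmlinuz", b"initrd.img", b"console=ttyS0") == \
       expected_rtmr1(b"vmlinuz", b"initrd.img", b"console=ttyS0")
```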

Why measure kernel, initrd, and cmdline separately? Because tampering with any one of them can compromise the entire system. A malicious kernel can log all memory accesses and exfiltrate data. A tampered initrd can plant a backdoor early in boot — when no security software is running yet, it has full kernel privileges and can modify anything loaded after it. And a seemingly harmless cmdline parameter — say init=/malicious or module.sig_enforce=0 — can replace the init process or disable kernel module signature verification, even if the kernel and initrd are both correct.

After boot completes, the MRTD and RTMR registers contain the full boot fingerprint of this TEE instance, from firmware to application. But these fingerprints are just local data — to let a remote verifier check them, you need a protocol to sign, package, and send them out.

Birth of an identity report

This protocol is fundamentally challenge-response: the verifier sends a challenge, the TEE instance responds with a hardware-signed answer. Every step of the protocol — which CPU instruction to call, what parameters to pass, which key signs what — is defined by the Intel TDX specification [Intel 2023]. The guest agent introduced earlier is the component that implements this spec, running the entire attestation flow automatically, fully transparent to application code.

Here's how it works: the verifier sends a nonce (a one-time random number) to the TEE instance as a challenge. The guest agent receives the nonce and does two things. First, it generates an ephemeral ECDHE key pair (eph_sk, eph_pk) — this pair is for establishing an encrypted communication channel later. Second, it hashes the nonce and eph_pk together and stuffs the result into a 64-byte field called REPORTDATA:

REPORTDATA = SHA512(nonce || eph_pk)[:64]

Why put the public key hash in REPORTDATA? Because REPORTDATA gets signed into the report by the hardware. After the verifier validates the report, it can extract the public key hash from REPORTDATA and confirm "this public key really came from this TEE instance," then use it to establish an ECDHE encrypted channel. Without this binding, an attacker could intercept a legitimate report and then impersonate that TEE instance using their own public key.

Next, the guest agent calls the CPU instruction TDCALL[TDG.MR.REPORT], asking the TDX Module to generate a TD Report. The TDX Module is trusted code running in a special CPU mode (SEAM mode). It reads all the accumulated measurement register values (MRTD, RTMR[0-3]) and the current TCB version number, adds the REPORTDATA passed in by the guest agent, and packages everything into a structured report.

Then the TDX Module signs the entire report using the Report HMAC Key mentioned earlier. Remember? This key is derived from RPK and never leaves the chip — without this CPU's fuse key, you can't obtain the same HMAC Key, and you can't verify the signature.

This TD Report is the TEE instance's identity report: the boot fingerprint from firmware through application at every layer, the hardware and microcode version numbers, plus the REPORTDATA the guest agent inserted. Every field is protected by a hardware signature — tamper with a single byte and the signature won't match.

There's a catch, though. HMAC is a symmetric-key algorithm. The signing key is locked inside the CPU, and only that same CPU can verify it. A remote verifier doesn't have this key and can't verify the report. Up to this point, the report generation process involves three parties, and it looks like this:

sequenceDiagram
    participant V as Verifier
    participant GA as Guest Agent (inside TD)
    participant TDX as TDX Module (CPU)

    V->>GA: Request attestation (with nonce)
    GA->>GA: Generate ECDHE keypair (eph_sk, eph_pk)
    GA->>GA: REPORTDATA = SHA512(nonce || eph_pk)[:64]
    GA->>TDX: TDCALL[TDG.MR.REPORT]
    TDX->>TDX: Read MRTD, RTMR[0-3], TCB_VERSION
    TDX->>TDX: Sign with Report HMAC Key (CPU-internal)
    TDX-->>GA: TD Report {measurements, REPORTDATA, HMAC}

Notice the last step: the TD Report returns to the Guest Agent, but it carries an HMAC signature — fine for local verification, useless remotely. That's the problem to solve next.

From local report to remotely verifiable Quote

This is where Intel and AMD's designs completely diverge.

AMD's approach is straightforward: AMD-SP (a physically separate security chip, isolated from the main CPU) signs the report directly with an ECDSA key derived from the fuse key. ECDSA is asymmetric — anyone with the public key can verify the signature. AMD knows the fuse key, so it can directly issue certificates for the derived key. The verifier receives the report, follows the certificate chain up, and traces back to AMD's root certificate. No intermediate conversion, no extra components.

Intel's situation is far more involved. The TD Report uses an HMAC signature that can't be verified remotely, so it needs to be "converted" into a Quote signed with an asymmetric key.

The component responsible for this conversion is the Quoting Enclave (QE). To understand the QE, you first need to know what an SGX enclave is. TDX, discussed earlier, protects an entire virtual machine — the OS, kernel, all processes live inside the TEE's isolated region. SGX (Software Guard Extensions) is an older TEE technology from Intel with finer granularity: it protects not an entire VM, but a specific memory region (enclave) within a single process. Code and data inside an enclave are completely invisible to the outside — even the operating system and Hypervisor on the same machine can't read it.

The architecture of the physical machine looks like this: the Host runs the Hypervisor, the Hypervisor uses TDX to create multiple TDs (TEE virtual machines), each TD isolated from the others and from the Host. At the same time, the Host runs a QE — an SGX enclave that sits outside the TEE VMs but is itself protected by SGX, with the Attestation Key (AK) private key stored inside the enclave where no other program on the Host can reach it. All TDs on this physical machine are served by this same QE.

When a TD needs attestation, it sends the TD Report to the QE via VSOCK. The QE first uses the CPU instruction EVERIFYREPORT2 to verify the report's HMAC signature — this step can only be done on the same CPU, because the Report HMAC Key never leaves the chip. Once verified, the QE re-signs the TD Report's content with its own Attestation Key (AK) private key using ECDSA, producing a Quote.

The Quote contains everything from the TD Report (measurements, REPORTDATA, TCB version, etc.), plus the QE's ECDSA signature, along with the AK certificate and PCK certificate. This is a complete, remotely verifiable identity proof. The conversion involves only two parties, but the trust relationship is subtle — the QE runs on the Host (outside the TEE), yet it can verify a report generated inside the TEE, because they share the same CPU's internal keys:

sequenceDiagram
    participant GA as Guest Agent (inside TD)
    participant QE as Quoting Enclave (Host SGX)

    GA->>QE: TD Report (via VSOCK)
    QE->>QE: EVERIFYREPORT2 verify HMAC (same CPU)
    QE->>QE: Re-sign with AK private key (ECDSA)
    QE-->>GA: Quote {TD Report content, ECDSA signature, cert chain}

The Quote is in hand. But — who vouches for the QE itself? The AK is an ECDSA key pair randomly generated by the QE. Intel doesn't know what the AK is, so it can't issue a certificate for it. How do you get the verifier to trust this AK?

The certificate chain: why you can trust this Quote

Intel's answer is to introduce another enclave — PCE (Provisioning Certification Enclave), running on the same physical machine. PCE holds the PCK (Provisioning Certification Key), derived from the Root Provisioning Key. Remember? The Root Provisioning Key is the fuse key Intel recorded during manufacturing — Intel knows what this key should be for every CPU, so Intel can issue a certificate for PCK.

When the QE first starts, it asks the local PCE to sign a certificate for the AK using PCK. This connects the trust chain:

Intel Root CA (pre-installed in the system, like CA root certs in browsers)
    │ Signed with Root CA private key
PCK Certificate = { PCK public key, Intel signature }
    │ Signed with PCK private key (executed by local PCE)
AK Certificate = { AK public key, PCK signature }
    │ Signed with AK private key (executed by QE during each attestation)
Quote = { TD Report contents, AK's ECDSA signature }

The verifier works bottom-up: verify the Quote signature with AK's public key, verify the AK certificate with PCK's public key, verify the PCK certificate with Intel Root CA's public key. Each layer is endorsed by the one above, ultimately tracing back to Intel's root certificate. If any link is tampered with, the signature won't match.
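
To make the chain-walking logic concrete without pulling in an X.509 library, here is a toy model in Python. HMAC with shared keys stands in for ECDSA signatures purely so the bottom-up walk runs on the standard library alone; in the real protocol each "key" below is an asymmetric key pair and each dict is a certificate.

```python
import hashlib, hmac, os

def sign(key: bytes, msg: bytes) -> bytes:
    # Toy stand-in for an ECDSA signature.
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, msg), sig)

root_key, pck_key, ak_key = os.urandom(32), os.urandom(32), os.urandom(32)

pck_cert = {"key": pck_key, "sig": sign(root_key, pck_key)}  # Intel signs the PCK
ak_cert  = {"key": ak_key,  "sig": sign(pck_key, ak_key)}    # local PCE signs the AK
quote    = {"body": b"td report contents",
            "sig": sign(ak_key, b"td report contents")}      # QE signs the Quote

def chain_ok(root_key, pck_cert, ak_cert, quote) -> bool:
    # Bottom-up: Quote by AK, AK cert by PCK, PCK cert by the root.
    return (verify(ak_cert["key"], quote["body"], quote["sig"])
            and verify(pck_cert["key"], ak_cert["key"], ak_cert["sig"])
            and verify(root_key, pck_cert["key"], pck_cert["sig"]))

assert chain_ok(root_key, pck_cert, ak_cert, quote)
quote["sig"] = os.urandom(32)  # tamper with any link...
assert not chain_ok(root_key, pck_cert, ak_cert, quote)  # ...and the chain breaks
```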

There's a historical context to this design. Intel's earlier attestation model, EPID (Enhanced Privacy ID), required contacting Intel's service online for every attestation — creating privacy concerns (Intel sees all requests), availability concerns (if Intel goes down, everything goes down), and latency concerns. The new model, DCAP (Data Center Attestation Primitives), moves most of the work local: Intel signs the PCK certificate once during the Provisioning phase, and after that, every attestation completes entirely on the local machine with no need to contact Intel.

Why does Intel go through all this trouble? Because Intel chose to have the QE randomly generate the AK at runtime rather than derive it directly from the fuse key. This gives better flexibility (AK can be rotated, different TEE instances can use different AKs), but the trade-off is needing PCE as an intermediate issuer. AMD chose to derive the signing key directly from the fuse key — simpler, but the key is permanently tied to the hardware.

| | Intel TDX | AMD SEV-SNP |
| --- | --- | --- |
| Report signing algorithm | HMAC (requires QE conversion to ECDSA) | Direct ECDSA |
| Signing key origin | Randomly generated by QE at runtime | Derived directly from fuse key |
| Does the vendor know the signing key? | No | Yes |
| Requires local Provisioning | Yes (PCE signs AK certificate) | No |
| Extra components | QE + PCE (two enclaves) | AMD-SP (standalone security chip) |

Putting all three stages together — nonce challenge, TD Report generation, and Quote conversion — the complete Intel TDX attestation flow traces a round trip from the verifier back to the verifier:

sequenceDiagram
    participant V as Verifier
    participant GA as Guest Agent
    participant TDX as TDX Module
    participant QE as Quoting Enclave

    V->>GA: nonce
    GA->>TDX: Request TD Report (with REPORTDATA)
    TDX-->>GA: TD Report (HMAC signature)
    GA->>QE: TD Report
    QE-->>GA: Quote (ECDSA signature + cert chain)
    GA-->>V: Quote + eph_pk
    V->>V: Verify cert chain + measurements + channel binding

Follow the arrows: the verifier's nonce enters the TEE, gets bound into a hardware-signed report, converted into a remotely verifiable Quote, and returns with a certificate chain that traces all the way back to Intel's root CA. Every hop adds a layer of cryptographic binding — strip any one away and the chain breaks.

AMD SEV-SNP's flow is much simpler — no Quoting Enclave, no HMAC-to-ECDSA conversion. AMD-SP directly signs a remotely verifiable Report:

sequenceDiagram
    participant V as Verifier
    participant GA as Guest Agent
    participant SP as AMD-SP

    V->>GA: nonce
    GA->>SP: Request Report (with REPORT_DATA)
    SP-->>GA: Report (ECDSA signature)
    GA-->>V: Report + eph_pk
    V->>V: Verify cert chain + measurements + channel binding

Three hops instead of five. The trade-off is flexibility — Intel's design lets the QE rotate attestation keys independently of the hardware, while AMD's signing key is permanently derived from the fuse key.

After the verifier receives the Quote

The verifier (the key management service) receives the Quote and the TEE instance's ephemeral public key eph_pk, then performs three layers of verification:

Layer one: hardware authenticity. Follow the certificate chain from bottom to top — Quote signature → AK certificate → PCK certificate → Intel Root CA. If the signature chain is intact and valid, this Quote genuinely came from a TEE running on a real Intel CPU, not a forgery.

Layer two: code integrity. Extract the MRTD and RTMR values from the Quote and compare them against expected values. Where do expected values come from? From reproducible builds — same source code, same Dockerfile, same build environment, producing deterministic binaries and deterministic measurement values. If the values in the Quote don't match expectations, this TEE instance isn't running the code you trust.

Layer three: channel binding. Extract the public key hash from the Quote's REPORTDATA and verify it matches the eph_pk sent by the TEE instance. This step confirms "I'm communicating with the TEE instance that generated this Quote," rather than a man-in-the-middle who intercepted a legitimate Quote and is impersonating it. After verification passes, use eph_pk to establish an ECDHE encrypted channel — subsequent key distribution happens over this encrypted channel.

All three layers must pass before the key management service hands over the decryption key. Hardware authenticity confirms the other side is a real TEE. Code integrity confirms the TEE is running the right program. Channel binding confirms you're talking to the right TEE instance. Remove any layer and the trust chain is broken.
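
Put together, the key management service's decision reduces to three predicates ANDed. A structural sketch; the `Quote` type and its field names here are illustrative, not a real API:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Quote:
    measurements: dict   # MRTD / RTMR values extracted from the Quote
    reportdata: bytes    # 64-byte field bound into the hardware signature
    chain_valid: bool    # result of the certificate-chain walk (layer one)

def release_key(quote: Quote, eph_pk: bytes, nonce: bytes, expected: dict) -> bool:
    hardware_ok = quote.chain_valid                     # layer one: real TEE
    code_ok = quote.measurements == expected            # layer two: right program
    binding_ok = (hashlib.sha512(nonce + eph_pk).digest()
                  == quote.reportdata)                  # layer three: right instance
    return hardware_ok and code_ok and binding_ok       # all three, or no key

# Usage: a matching Quote passes; break any one layer and it fails.
nonce, eph_pk = b"n" * 16, b"p" * 32
expected = {"MRTD": "abc", "RTMR1": "def"}
q = Quote(expected.copy(), hashlib.sha512(nonce + eph_pk).digest(), True)
assert release_key(q, eph_pk, nonce, expected)
assert not release_key(q, b"x" * 32, nonce, expected)  # wrong channel key, no key
```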

Embedding attestation in TLS

The flow described above has an engineering problem: attestation and the communication channel are established separately, and developers have to write their own code to bind them together. If the binding logic has a bug — say, forgetting to put the TLS session's public key into REPORTDATA — the Quote and the TLS connection become decoupled, and man-in-the-middle attacks become possible. Sardar et al. used formal verification tools to analyze the protocol specifications of multiple RA-TLS implementations and did find exploitable gaps in the binding logic [Sardar 2024] — which directly drove later tightening of the protocol spec.

RA-TLS (Remote Attestation TLS) solves this by embedding attestation into the TLS handshake itself. When the TEE instance generates a self-signed TLS certificate, it stuffs the Quote into an X.509 certificate extension field, with the Quote's REPORTDATA containing a hash of the certificate's public key. The verifier receives the certificate, extracts the Quote from the extension field, verifies the hardware signature and measurements, then verifies the public key hash in REPORTDATA matches the certificate's public key. The entire attestation completes in a single round during the TLS handshake — binding is part of the protocol, not an optional step left to the developer.

Why do TEE applications need self-signed certificates? Because every TEE instance boot creates a brand-new identity with no persistent domain name or CA-issued certificate. Its identity should be proven by hardware, not by a CA. A self-signed certificate plus a Quote is equivalent to an identity proof endorsed by the hardware vendor (Intel/AMD).

In real confidential computing systems, RA-TLS is typically used only for internal component communication: the TEE instance uses RA-TLS when requesting keys from the key management service, and key management nodes use RA-TLS when syncing with each other. External users see standard HTTPS — they don't need to understand TEE. But only TEE instances that pass RA-TLS verification can get keys. The trust chain is invisible to users, but complete.

Engineering reality is messier than papers suggest

In theory, the attestation verification flow is clean: build the image → compute expected measurement values → compare against the actual values in the Quote at runtime. In real cloud environments, the path is far less smooth.

The RTMR[1] value depends on how the TDVF firmware measures kernel, initrd, and cmdline — the specific hash algorithm, the extend order, which metadata is included — all determined by the firmware implementation. The problem is that cloud providers use custom firmware — GCP's OVMF differs from the upstream open-source EDK2, and firmware versions get updated. The same image booted at different times might produce different RTMR values because of a firmware update. Google's engineers admitted in a GitHub issue that the current approach is basically "boot an instance, grab the measurements, and hope those are the values you should expect."

Third-party pre-computation tools (like Fraunhofer's measured-boot-tools) can theoretically compute expected measurements from firmware binary + kernel + initrd + cmdline. But GCP's firmware source isn't public — they use an internal clang toolchain and blaze build system, maintaining their own EDK2 patch set. The values the tool computes might not match reality.

The practical approach: don't pre-compute RTMR. Analyze the Event Log instead. During boot, every time TDVF performs an extend operation, it records the details — which component, what hash value — in an ACPI table called CCEL (Confidential Computing Event Log). The verifier can replay this log, re-execute all extend operations, verify the final values match the RTMR registers (confirming the log hasn't been tampered with), then check whether each component hash is in a whitelist. This is far more flexible than a fixed whitelist — a firmware upgrade only affects firmware-related Event Log entries, leaving kernel and initrd entries untouched — and it doesn't depend on precise knowledge of the firmware implementation.
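
A sketch of that replay logic, assuming a simplified log entry of (RTMR index, digest, description). Real CCEL entries are TCG-style event structures with more fields, but the verification loop has the same shape:

```python
import hashlib

def extend(rtmr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha384(rtmr + measurement).digest()

def replay_ok(event_log, rtmr_actual, allowed_digests) -> bool:
    """Re-execute every extend, then check consistency and the whitelist."""
    rtmrs = [bytes(48)] * 4
    for index, digest, desc in event_log:
        if digest not in allowed_digests:  # component not on the whitelist
            return False
        rtmrs[index] = extend(rtmrs[index], digest)
    return rtmrs == rtmr_actual            # log must reproduce the registers

# A toy log: firmware config into RTMR[0]; kernel, initrd, cmdline into RTMR[1].
digests = {name: hashlib.sha384(name.encode()).digest()
           for name in ("fw_config", "kernel", "initrd", "cmdline")}
log = [(0, digests["fw_config"], "fw_config"),
       (1, digests["kernel"], "kernel"),
       (1, digests["initrd"], "initrd"),
       (1, digests["cmdline"], "cmdline")]

# The "actual" registers, as they would be read out of the Quote.
actual = [bytes(48)] * 4
for index, digest, _ in log:
    actual[index] = extend(actual[index], digest)

assert replay_ok(log, actual, set(digests.values()))
```

The flexibility claim falls out of the structure: swapping one whitelisted firmware digest for another changes one log entry, not the verifier's knowledge of every other component.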

There's a deeper issue: attestation proves code identity, not user identity. A Quote can tell you "this TEE instance is running a legitimate image," but it can't tell you "this TEE instance was submitted by a specific user." If you rely solely on attestation to decide whether to distribute keys, anyone who boots a TEE instance that passes verification gets the key — they're all running the same legitimate image.

The solution to this problem isn't at the attestation layer but in the upper protocol — for example, issuing one-time credentials when users submit tasks, binding user identity to a specific TEE instance, making key distribution depend on both hardware attestation and identity credentials. But that's a different story.

One last thing that deserves honest acknowledgment: the entire premise of this article is "you can trust CPU hardware," and that premise has boundaries. In October 2025, researchers from Georgia Tech and Purdue published the TEE.fail attack: using off-the-shelf equipment costing under $1,000, they performed physical snooping on the DDR5 memory bus and successfully extracted keys from Intel TDX and AMD SEV-SNP — including attestation keys. The core of the attack is the determinism of memory encryption: identical plaintext produces identical ciphertext, letting attackers do pattern matching on the bus. Worse, they were able to use the extracted attestation key to forge Quotes, pretending code was running inside a TEE when it actually wasn't — directly breaking the trust chain this entire article is built on. Intel and AMD both responded with "physical attacks are outside our threat model."

That's not the only attack surface. Side-channel attacks (Spectre, Meltdown, and their variants) have repeatedly shown that CPU microarchitectural behavior can leak data from inside TEEs. Intel, as both the manufacturer of fuse keys and the issuer of the certificate chain, is itself a trust assumption. The risk extends further upstream into the supply chain: if a chip has a backdoor implanted during manufacturing, the "unreadable" promise of fuse keys no longer holds. Attestation isn't a silver bullet. It's the most pragmatic trust anchor available given current engineering reality — but its boundaries deserve serious thought from every system designer who depends on it.

A chain from silicon to trust

Back to the original question: why should the key management service hand over the decryption key?

Because of a chain of trust that stretches from the chip fabrication facility to your code. A fuse key that can't be read is burned into the CPU at the factory. When the TEE instance boots, TDVF firmware measures kernel, initrd, and cmdline layer by layer, writing fingerprints into tamper-proof registers. When the application requests attestation, the CPU packages all measurements together with the application's custom REPORTDATA into a hardware-signed report. The Quoting Enclave converts this local report into a remotely verifiable Quote. The verifier follows the certificate chain from the Quote all the way up to the chip vendor's root certificate, confirms the hardware is real, the code is correct, and the channel is trustworthy — only after all three checks pass does it hand over the key.

That's why it's called a hardware root of trust. Not because you choose to trust the hardware, but because under this threat model, hardware is the only place you can anchor trust. The OS can be compromised, the hypervisor can be broken, cloud provider employees can be bribed — but fuse keys are burned into silicon, and even the CPU itself can't read them out.

A single key burned into a chip holds up the entire trust model of confidential computing.

But who is the verifier?

Throughout this article, I've been using a phrase without examining it: "the verifier checks the Quote." But who, exactly, is the verifier? And what are they trying to learn?

When I first started working with TEEs, I ran into a confusion that took me an embarrassingly long time to resolve. Every paper, every tutorial, every conference talk described attestation the same way: a cloud provider proves to the code deployer that the code is running unmodified. The verifier is the party who wrote the code. The thing being protected is the code itself — proprietary algorithms, trade secrets, intellectual property. The cloud is untrusted infrastructure; attestation lets the code owner use it anyway.

That framing made perfect sense in the abstract. But it didn't match what we were actually building.

In our protocol, we weren't trying to protect our code from the cloud provider. We were trying to prove something to our users — the data providers — that our code had no backdoor, no hidden exfiltration path, no way for us to peek at their plaintext data. We weren't the verifier. We were the one being verified. The data providers were the verifiers, and they were checking whether our code was trustworthy.

I kept re-reading the documentation, wondering if I'd misunderstood the whole concept. If attestation is about protecting the code deployer, and we are the code deployer, then who are we attesting to? It felt like using a lock backwards.

It took a while before I realized: the mechanism is identical. The same Quote, the same certificate chain, the same measurement comparison. What changes is the direction of trust — and that direction changes everything about how the system is designed.

When attestation protects the code owner — call it the shielding use case — the code stays proprietary. The deployer computes expected measurements from their own private build pipeline and checks the Quote themselves. Nobody else needs to see the source. The whole point is that the code runs faithfully on someone else's machine without being exposed. This is how most of the industry uses TEE today. Netflix, Disney+, and every major streaming platform rely on Google's Widevine DRM, where content decryption and video rendering happen entirely inside ARM TrustZone on your device — the content owner's decryption logic is shielded from the device owner. NVIDIA's H100 was the first GPU with a hardware root of trust for confidential computing; AI companies deploy proprietary model weights into confidential GPU memory so that even the cloud provider running the machine cannot extract them. Fortanix's Confidential AI platform takes this further, letting model owners deploy frontier models into customer environments where the model weights remain encrypted and attestation-verified — the customer can run inference but never sees the weights.

When attestation proves the service provider's innocence — call it the transparency use case — the logic inverts. Your users are the verifiers, and they need to answer one question: does the code running inside that TEE match the code I've reviewed? For them to answer that, they need the source code. They need to build it themselves, compute the expected measurement values from the resulting binary, and compare those against the Quote. If the code is closed-source, the measurements in the Quote are just opaque numbers — the user has no way to interpret them, and attestation becomes meaningless theater.

In other words, the transparency use case makes open source a prerequisite, not a nice-to-have. Attestation without reproducible builds is a locked box with no keyhole — it proves something, but no one can check what.
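The transparency-direction check reduces to a single comparison. This sketch compresses "reproducible build → expected measurement → compare against the Quote" into a few lines; the plain hash stands in for the real enclave/VM measurement algorithm, and all inputs are hypothetical:

```python
import hashlib

def measure_build(artifact: bytes) -> bytes:
    # Stand-in for computing MRENCLAVE / MRTD from a reproducible build.
    # In reality this is the platform's measurement algorithm, not sha384
    # of the raw binary -- but it is equally deterministic.
    return hashlib.sha384(artifact).digest()

def user_verifies(quote_measurement: bytes, rebuilt_binary: bytes) -> bool:
    # Transparency direction: the *user* builds the published source
    # themselves and checks the service's Quote carries that exact value.
    return quote_measurement == measure_build(rebuilt_binary)

honest_build = b"<binary from reproducible build of the open-source tree>"
quote_measurement = measure_build(honest_build)  # what an honest service reports

assert user_verifies(quote_measurement, honest_build)
# A backdoored binary measures differently and fails -- but only if the
# user can rebuild the source. Closed source makes this check impossible.
assert not user_verifies(measure_build(b"<binary with exfil path>"), honest_build)
```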

Real systems already work this way. Signal's SVR2 (Secure Value Recovery) runs inside SGX enclaves to handle PIN-based key recovery. The enclave source is fully open — anyone can check out the code, enter the Docker build environment, build the enclave, and compare the resulting MRENCLAVE hash against the value hardcoded in the Signal client app. If they match, you know Signal's servers genuinely cannot access your PIN. Apple's Private Cloud Compute publishes every production PCC software image and provides a Virtual Research Environment where researchers can boot PCC node software, inspect binaries, and verify measurements against an append-only transparency log. WhatsApp's Private Processing runs AI features on end-to-end encrypted messages inside confidential VMs — Meta publishes CVM binary images and digests to third-party transparency logs, and NCC Group conducted an external audit. In crypto, Flashbots runs its block builder rbuilder inside Intel TDX, with reproducible VM images whose measurements are published at measurements.builder.flashbots.net — the claim is "we're not front-running you," and that claim is only credible if you can verify the code yourself.

Our system sits at the intersection. Internally, the key management service verifies compute TEE instances before handing over decryption keys — that's the shielding direction, protecting data from rogue infrastructure. But externally, data providers need to trust us, the service operator — that's the transparency direction, and it's why the compute logic needs to be open source and reproducibly buildable.
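The shielding-direction half of that — the key management service's verify-then-release gate — can be sketched as a policy check. Everything here is a simplified, hypothetical model: it assumes the Quote's signature has already been validated against the vendor certificate chain (not shown), and the dict fields, allowlist, and key-wrapping are illustrative, not our actual implementation:

```python
import hashlib

# Measurements of reviewed, approved compute-TEE builds (hypothetical value).
ALLOWED_MEASUREMENTS = {bytes.fromhex("ab" * 48)}

def release_key(quote: dict, requester_pubkey: bytes, data_key: bytes) -> bytes:
    # Gate 1: the attested code must be a build we have approved.
    if quote["measurement"] not in ALLOWED_MEASUREMENTS:
        raise PermissionError("unrecognized code measurement")
    # Gate 2: REPORTDATA must bind the quote to this requester's ephemeral
    # public key, so a genuine quote can't be replayed over another channel.
    if quote["report_data"] != hashlib.sha384(requester_pubkey).digest():
        raise PermissionError("quote not bound to requester's channel")
    # In practice the key would be wrapped to requester_pubkey, never
    # returned in plaintext; returning it directly keeps the sketch short.
    return data_key

pubkey = b"<ephemeral public key generated inside the compute TEE>"
quote = {"measurement": bytes.fromhex("ab" * 48),
         "report_data": hashlib.sha384(pubkey).digest()}
key = release_key(quote, pubkey, data_key=b"\x11" * 32)
assert key == b"\x11" * 32
```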

The attestation mechanism doesn't care about this distinction. The CPU signs the same measurements either way. But if you're designing a confidential computing system and you haven't asked "who is my verifier?" — you might be building the lock facing the wrong way.

The mystery of the second fuse key

At the beginning of this article, I mentioned that Intel fuses two keys: RPK handles attestation (proving identity externally), RSK handles sealing (encrypting data internally). Intel knows RPK but destroyed its records of RSK.

First question: why not just derive a sealing key from RPK to encrypt persisted data? One fewer fuse key, one fewer derivation scheme — wouldn't the design be simpler? Think about what Intel knows and doesn't know about each of these two keys, and what that means for the data owner.

Second question: AMD fuses only one key, CEK, with no second fuse key. How does AMD handle sealing? Does AMD not need to protect persisted data, or did it solve the same problem in a completely different way?


References

  • [Intel 2023] Intel Corporation. Intel Trust Domain Extensions (Intel TDX) Module Architecture Specification. https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html
  • [Knauth 2018] Thomas Knauth et al. "Integrating Remote Attestation with Transport Layer Security." arXiv:1801.05863, 2018.
  • [Sardar 2024] Muhammad Usama Sardar et al. "Formal Verification of Intel's Remote Attestation with Transport Layer Security." IEEE Access, 2024.
  • [Freund 2024] Axel Freund et al. "Confidential VMs Explained: An Empirical Analysis of AMD SEV-SNP and Intel TDX." ACM Computing Surveys, 2024.