linux-coco.lists.linux.dev archive mirror
* Secure vTPMs for confidential VMs
@ 2022-09-20 20:28 Dov Murik
  2022-09-21  8:49 ` Jörg Rödel
  2022-09-21  9:36 ` Daniel P. Berrangé
  0 siblings, 2 replies; 7+ messages in thread
From: Dov Murik @ 2022-09-20 20:28 UTC (permalink / raw)
  To: linux-coco
  Cc: Dov Murik, Tobin Feldman-Fitzthum, James Bottomley, amd-sev-snp,
	Dr. David Alan Gilbert, Daniel P. Berrangé

Hello,

Tobin and I gave a talk at KVM Forum 2022 last week about Unifying
Confidential Attestation [1].  During KVM Forum and Linux Plumbers 2022
we had several conversations with various attendees about using secure
virtual TPMs to attest confidential VMs.

We want to use the linux-coco mailing list to start a community
discussion about the various options and tradeoffs for implementing
secure vTPMs.

Emulating hardware TPMs has the advantage that guest software already
uses TPM devices to measure boot sequence components (firmware,
bootloader, kernel, initrd) and runtime events (IMA in Linux).  We know
that this currently works with vTPMs backed by the VMM implementation,
such as QEMU's TPM device connected to swtpm running on the host.
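
For reference, the kind of setup we mean looks roughly like this (paths
and IDs here are just examples):

  # On the host: software TPM emulator backing the guest's vTPM
  swtpm socket --tpm2 \
      --tpmstate dir=/var/lib/swtpm/vm1 \
      --ctrl type=unixio,path=/var/lib/swtpm/vm1/swtpm.sock

  # QEMU wired to it through the emulator backend and a TIS device
  qemu-system-x86_64 ... \
      -chardev socket,id=chrtpm,path=/var/lib/swtpm/vm1/swtpm.sock \
      -tpmdev emulator,id=tpm0,chardev=chrtpm \
      -device tpm-tis,tpmdev=tpm0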

As far as we know, current vTPM solutions are insecure in the
confidential computing setting: the TPM code runs on the untrusted host,
and the communication between the guest and the TPM component is visible
to and modifiable by that host.  This allows the host to fake
measurements or steal sensitive keys from the TPM's memory/NVDATA.

So far we have identified three issues that need further research in
order to implement secure vTPMs for confidential VMs: TPM provisioning,
implementation in TEEs, and guest enlightenment.

* TPM provisioning: The TPM contains sensitive information, such as the
EK private key, which should not be accessible to the host or to the
guest.  How should such information be delivered to the vTPM when
starting a new VM?  If we provision encrypted NVDATA, who has the key to
decrypt it?  If we provision it with "classic" TEE secret injection, we
need to do it quite early in the VM launch sequence (even before the
firmware starts?).

One suggestion is to use an ephemeral EK, generated at launch by the
vTPM.  The system may start to boot using such a TPM, but once we want
to unseal secrets (for example, to unlock a LUKS partition), we need
something persistent inside the TPM (or re-seal the key for each TPM).
Ephemeral TPMs might be a useful first step.

* Implementation in TEEs: SNP introduced VMPLs, and AMD's linux-SVSM
running in VMPL0 can also run vTPM code to handle TPM requests from the
guest running in VMPL1.  Such a solution is not applicable as-is to
other TEEs (SEV, TDX).  People have suggested running vTPMs in a
separate confidential VM and somehow connecting the tenant's guest to
the TPM VM, but we'll need a way to secure this communication channel.

* Guest enlightenment: Guest software currently interacts with the TPM
by writing commands to a memory-mapped IO page (GPA 0xfed40000) and
reading responses from that page.  We want such writes to trigger the
code of our vTPM (for whatever implementation we choose).  Our current
early experience with a TPM running in linux-SVSM required adding
"exit-guest" calls after writing commands to the IO page, in order to
allow the SVSM to run and recognize the incoming command.  Ideally, we'd
like a solution that doesn't require modifying all the TPM drivers out
there (in Linux, Windows, OVMF, grub, ...).
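
To make the problem concrete, here is a minimal sketch (not taken from
any real driver) of the guest side of such an MMIO interaction, using
the standard TIS register layout; svsm_exit_guest() is a made-up name
for the extra "exit-guest" step our linux-SVSM prototype currently
needs:

#include <stdint.h>
#include <stddef.h>

#define TPM_TIS_BASE   0xfed40000UL   /* GPA of the TPM MMIO page */
#define TPM_STS        0x18           /* status register, locality 0 */
#define TPM_DATA_FIFO  0x24           /* command/response FIFO, locality 0 */
#define TPM_STS_GO     0x20           /* "execute command" bit */

static volatile uint8_t *tis = (volatile uint8_t *)TPM_TIS_BASE;

/* Hypothetical hook: yield to the SVSM so it can notice and service the
 * command -- this is the extra "exit-guest" step described above. */
void svsm_exit_guest(void);

static void tpm_send_command(const uint8_t *cmd, size_t len)
{
        size_t i;

        /* Real drivers also handle command-ready and burst counts; this
         * only shows the data path. */
        for (i = 0; i < len; i++)
                tis[TPM_DATA_FIFO] = cmd[i];   /* byte-wise FIFO write */

        tis[TPM_STS] = TPM_STS_GO;             /* tell the TPM to execute */

        /* With an SVSM-hosted vTPM nothing observes the MMIO write, so
         * the guest has to hand control to VMPL0 explicitly. */
        svsm_exit_guest();
}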

We're sure there are other issues as well, but these are the main ones
we encountered so far.  We'd like to hear the community's feedback and
ideas as early as possible.


Thanks,
Tobin and Dov.

[1] https://static.sched.com/hosted_files/kvmforum2022/02/Unifying-Confidential-Attestation-KVM-Forum-2022.pdf


* Re: Secure vTPMs for confidential VMs
  2022-09-20 20:28 Secure vTPMs for confidential VMs Dov Murik
@ 2022-09-21  8:49 ` Jörg Rödel
  2022-09-21 17:07   ` Tom Lendacky
  2022-09-21  9:36 ` Daniel P. Berrangé
  1 sibling, 1 reply; 7+ messages in thread
From: Jörg Rödel @ 2022-09-21  8:49 UTC (permalink / raw)
  To: Dov Murik
  Cc: linux-coco, Daniel P. Berrangé, Tobin Feldman-Fitzthum, amd-sev-snp

Hi Dov,

On Tue, Sep 20, 2022 at 11:28:15PM +0300, Dov Murik wrote:
> * Implementation in TEEs: SNP introduced VMPLs, and AMD's linux-SVSM
> running in VMPL0 can also run vTPM code to handle TPM requests from the
> guest running in VMPL1.  Such a solution is not applicable as-is to
> other TEEs (SEV, TDX).  People have suggested running vTPMs in a
> separate confidential VM and somehow connecting the tenant's guest to
> the TPM VM, but we'll need a way to secure this communication channel.

Yes, so for SEV-SNP the way to implement a vTPM is via a Secure VM
Service Module (SVSM) running at VMPL0.

I'm not sure how much we should care about the variant of running a vTPM
in a separate trusted VM. In the long run SEV and SEV-ES will be
replaced by SEV-SNP, and for TDX it would be best if Intel just added a
software TPM to their SEAM module. IIRC TDX already has some TPM-like
features, e.g. PCRs, implemented there. A full vTPM seems to be doable.

> * Guest enlightenment: Guest software currently interacts with the TPM
> by writing commands to a memory-mapped IO page (GPA 0xfed40000) and
> reading responses from that page.  We want such writes to trigger the
> code of our vTPM (for whatever implementation we choose).  Our current
> early experience with a TPM running in linux-SVSM required adding
> "exit-guest" calls after writing commands to the IO page, in order to
> allow the SVSM to run and recognize the incoming command.  Ideally, we'd
> like a solution that doesn't require modifying all the TPM drivers out
> there (in Linux, Windows, OVMF, grub, ...).

It will not be that easy to emulate a vTPM at VMPL0 that has the same
interface as memory-mapped TPMs. That would mean marking the page as
MMIO, but that will trigger a #VC exception in the OS (or OVMF, Grub,
...), which would then need to forward the MMIO access to the SVSM. So
either way, OVMF and Grub need modification to work with a vTPM running
at a lower VMPL.

An alternative is using the ReflectVC feature to get the #VC directed to
the lower VMPL, but that has much wider implications and is not
justified for only emulating a vTPM.

The current plan is to have VMPL1 talk to the VMPL0 vTPM via
standardised SVSM commands. This requires new TPM drivers for all VMPL1
components. At least unless someone comes up with a better idea :)
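
For illustration only, such a driver could plug into the existing Linux
TPM core roughly as sketched below; svsm_vtpm_cmd() stands in for
whatever guest-to-VMPL0 call the protocol ends up defining, nothing here
reflects an actual spec, and driver registration and error handling are
omitted:

/* Sketch only: a guest TPM driver that forwards each command to an
 * SVSM-hosted vTPM. */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/string.h>
#include <linux/tpm.h>

#define SVSM_VTPM_BUF_SIZE 4096

static u8 svsm_resp[SVSM_VTPM_BUF_SIZE];
static size_t svsm_resp_len;

/* Placeholder: send one TPM command to the SVSM, receive the response. */
int svsm_vtpm_cmd(const u8 *cmd, size_t cmd_len, u8 *resp, size_t *resp_len);

static int svsm_tpm_send(struct tpm_chip *chip, u8 *buf, size_t len)
{
        svsm_resp_len = sizeof(svsm_resp);
        return svsm_vtpm_cmd(buf, len, svsm_resp, &svsm_resp_len);
}

static int svsm_tpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
{
        size_t n = min(count, svsm_resp_len);

        memcpy(buf, svsm_resp, n);
        return n;
}

/* The TPM core polls status; the SVSM call is synchronous, so a command
 * is always "complete" by the time recv runs. */
static u8 svsm_tpm_status(struct tpm_chip *chip) { return 0; }
static void svsm_tpm_cancel(struct tpm_chip *chip) { }
static bool svsm_tpm_req_canceled(struct tpm_chip *chip, u8 status)
{
        return false;
}

static const struct tpm_class_ops svsm_tpm_ops = {
        .flags        = TPM_OPS_AUTO_STARTUP,
        .send         = svsm_tpm_send,
        .recv         = svsm_tpm_recv,
        .status       = svsm_tpm_status,
        .cancel       = svsm_tpm_cancel,
        .req_canceled = svsm_tpm_req_canceled,
};

static int svsm_tpm_probe(struct platform_device *pdev)
{
        struct tpm_chip *chip = tpmm_chip_alloc(&pdev->dev, &svsm_tpm_ops);

        if (IS_ERR(chip))
                return PTR_ERR(chip);

        return tpm_chip_register(chip);
}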

Regards,

-- 
Jörg Rödel
jroedel@suse.de

SUSE Software Solutions Germany GmbH
Frankenstraße 146
90461 Nürnberg
Germany

(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman



* Re: Secure vTPMs for confidential VMs
  2022-09-20 20:28 Secure vTPMs for confidential VMs Dov Murik
  2022-09-21  8:49 ` Jörg Rödel
@ 2022-09-21  9:36 ` Daniel P. Berrangé
  2022-10-03  7:42   ` Dov Murik
  1 sibling, 1 reply; 7+ messages in thread
From: Daniel P. Berrangé @ 2022-09-21  9:36 UTC (permalink / raw)
  To: Dov Murik
  Cc: linux-coco, Tobin Feldman-Fitzthum, James Bottomley, amd-sev-snp,
	Dr. David Alan Gilbert

On Tue, Sep 20, 2022 at 11:28:15PM +0300, Dov Murik wrote:
> Emulating hardware TPMs has the advantage that guest software already
> uses TPM devices to measure boot sequence components (firmware,
> bootloader, kernel, initrd) and runtime events (IMA in Linux).  We know
> that this currently works with vTPMs backed by the VMM implementation,
> such as QEMU's TPM device connected to swtpm running on the host.

Leveraging pre-existing support in the guest OS feels pretty compelling.
It is apparent that there is a lot of maintainer activity across pieces
of the Linux software/distro stack related to improving support for
SecureBoot and (v)TPMs in general. Being able to take advantage of this
would be good for confidential computing, by reducing the burden on
software/distro maintainers, and giving users technology that they are
(in theory) at least somewhat familiar with already.

If we can drive the confidential compute specific bits, including
the attestation of the confidential hardware, from the guest firmware,
then it ought to make it easier for guest OS images to be agnostic as
to whether they're running a non-confidential or confidential VM.

It becomes more of a deployment decision for the user whether to use a
confidential VM or not at any launch attempt. For example, they could
have one image and run it in a non-confidential VM on their internal
cloud, while using a confidential VM on a public cloud when needing
to scale their resources.


This would not be so straightforward with some of the alternative
proposals for confidential VM disk images. For example another
proposal has been to have a bootloader like grub embedded in the
firmware, such that even /boot is encrypted in the disk image and
gets keys provided for unlock prior to the OS being launched.

This would make that disk image inherently incompatible with use
in a non-confidential VM, as well as requiring OS vendors to ship
even more cloud disk image variants, and to support different
boot processes in their software stack.


So overall I'm heavily attracted to re-using existing technology
to the greatest extent that is practical. It makes confidential
computing "normal" and will facilitate its uptake.

> So far we have identified three issues that need further research in
> order to implement secure vTPMs for confidential VMs: TPM provisioning,
> implementation in TEEs, and guest enlightenment.
> 
> * TPM provisioning: The TPM contains sensitive information, such as the
> EK private key, which should not be accessible to the host or to the
> guest.  How should such information be delivered to the vTPM when
> starting a new VM?  If we provision encrypted NVDATA, who has the key to
> decrypt it?  If we provision it with "classic" TEE secret injection, we
> need to do it quite early in the VM launch sequence (even before the
> firmware starts?).

For it to be transparent to the guest OS, then the vTPM state would
need to be unlocked prior to the guest OS being launched. This points
towards the confidential VM firmware triggering an initial call to the
attestation service, and receiving a key to unlock the vTPM state
as a response.
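
Sketched in rough C, with every name and type hypothetical (no such
firmware interfaces exist today), the boot-time ordering would be
something like:

/* Illustrative ordering only; all names below are made up. */
struct evidence  { unsigned char report[4096]; unsigned int len; };
struct vtpm_key  { unsigned char bytes[32]; };

int tee_get_attestation_report(struct evidence *ev);          /* SNP/TDX report */
int attestation_service_exchange(const struct evidence *ev,
                                 struct vtpm_key *key);       /* talk to the KBS */
int vtpm_unlock_state(const struct vtpm_key *key);            /* decrypt NVDATA */

int unlock_vtpm_state_early(void)
{
        struct evidence ev;
        struct vtpm_key key;
        int ret;

        /* 1. Collect evidence covering the measured firmware image. */
        ret = tee_get_attestation_report(&ev);
        if (ret)
                return ret;

        /* 2. Send it to the guest owner's attestation service and get
         *    the vTPM state key back in the response. */
        ret = attestation_service_exchange(&ev, &key);
        if (ret)
                return ret;

        /* 3. Unlock the provisioned vTPM state before any guest OS
         *    component talks to the TPM. */
        return vtpm_unlock_state(&key);
}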

It is likely that the guest OS owner would want the option to perform
another attestation later in boot, to validate the broader OS userspace
boot status.  IOW, the firmware initiated attestation handles aspects
specific to bootstrapping the confidential VM environment, while an OS
initiated attestation would handle the generic (pre-existing) use cases
for OS state validation, familiar to anyone already using (v)TPMs.

> One suggestion is to use an ephemeral EK, generated at launch by the
> vTPM.  The system may start to boot using such a TPM, but once we want
> to unseal secrets (for example, to unlock a LUKS partition), we need
> something persistent inside the TPM (or re-seal the key for each TPM).
> Ephemeral TPMs might be a useful first step.

If the motivation for using vTPMs is to take advantage of pre-existing
TPM support in the guest OS, then IMHO we should be aiming for the vTPM
to be on a par with a vTPM from a non-confidential VM / bare metal. An
ephemeral-only vTPM would lose some (but not all) of the benefit of
targeting pre-existing TPM support in guests.


> * Implementation in TEEs: SNP introduced VMPLs, and AMD's linux-SVSM
> running in VMPL0 can also run vTPM code to handle TPM requests from the
> guest running in VMPL1.  Such a solution is not applicable as-is to
> other TEEs (SEV, TDX).  People have suggested running vTPMs in a
> separate confidential VM and somehow connecting the tenant's guest to
> the TPM VM, but we'll need a way to secure this communication channel.

TDX is obviously an important target, but I'm not sure it's worth
worrying too much about SEV/SEV-ES as that generation is inherently
limited & flawed compared to current SEV-SNP. The only thing in favour
of SEV/SEV-ES is broader hardware availability today, but that is a
time-limited advantage that will erode as SEV-SNP deployment expands.

> * Guest enlightenment: Guest software currently interacts with the TPM
> by writing commands to a memory-mapped IO page (GPA 0xfed40000) and
> reading responses from that page.  We want such writes to trigger the
> code of our vTPM (for whatever implementation we choose).  Our current
> early experience with a TPM running in linux-SVSM required adding
> "exit-guest" calls after writing commands to the IO page, in order to
> allow the SVSM to run and recognize the incoming command.  Ideally, we'd
> like a solution that doesn't require modifying all the TPM drivers out
> there (in Linux, Windows, OVMF, grub, ...).

As best I could tell looking at the public Ubuntu confidential VM
image published in Azure, there were no modifications to TPM related
pieces of the stack. So theoretically it appears possible to achieve,
but I have no idea how they do so at a technical level.


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



* Re: Secure vTPMs for confidential VMs
  2022-09-21  8:49 ` Jörg Rödel
@ 2022-09-21 17:07   ` Tom Lendacky
  2022-09-22 21:14     ` Tobin Feldman-Fitzthum
  0 siblings, 1 reply; 7+ messages in thread
From: Tom Lendacky @ 2022-09-21 17:07 UTC (permalink / raw)
  To: Jörg Rödel, Dov Murik
  Cc: linux-coco, Daniel P. Berrangé, Tobin Feldman-Fitzthum, amd-sev-snp

On 9/21/22 03:49, Jörg Rödel wrote:
> Hi Dov,
> 
> On Tue, Sep 20, 2022 at 11:28:15PM +0300, Dov Murik wrote:
>> * Implementation in TEEs: SNP introduced VMPLs, and AMD's linux-SVSM
>> running in VMPL0 can also run vTPM code to handle TPM requests from the
>> guest running in VMPL1.  Such a solution is not applicable as-is to
>> other TEEs (SEV, TDX).  People have suggested running vTPMs in a
>> separate confidential VM and somehow connecting the tenant's guest to
>> the TPM VM, but we'll need a way to secure this communication channel.
> 
> Yes, so for SEV-SNP the way to implement a vTPM is via a Secure VM
> Service Module (SVSM) running at VMPL0.
> 
> I'm not sure how much we should care about the variant of running a vTPM
> in a separate trusted VM. In the long run SEV and SEV-ES will be
> replaced by SEV-SNP, and for TDX it would be best if Intel just added a
> software TPM to their SEAM module. IIRC TDX already has some TPM-like
> features, e.g. PCRs, implemented there. A full vTPM seems to be doable.
> 
>> * Guest enlightenment: Guest software currently interacts with the TPM
>> by writing commands to a memory-mapped IO page (GPA 0xfed40000) and
>> reading responses from that page.  We want such writes to trigger the
>> code of our vTPM (for whatever implementation we choose).  Our current
>> early experience with a TPM running in linux-SVSM required adding
>> "exit-guest" calls after writing commands to the IO page, in order to
>> allow the SVSM to run and recognize the incoming command.  Ideally, we'd
>> like a solution that doesn't require modifying all the TPM drivers out
>> there (in Linux, Windows, OVMF, grub, ...).
> 
> It will not be that easy to emulate a vTPM at VMPL0 that has the same
> interface as memory-mapped TPMs. That would mean marking the page as
> MMIO, but that will trigger a #VC exception in the OS (or OVMF, Grub,
> ...), which would then need to forward the MMIO access to the SVSM. So
> either way, OVMF and Grub need modification to work with a vTPM running
> at a lower VMPL.

Agreed.

> 
> An alternative is using the ReflectVC feature to get the #VC directed to
> the lower VMPL, but that has much wider implications and is not
> justified for only emulating a vTPM.

Using ReflectVC is geared more towards supporting unenlightened guests.
We don't want the SVSM to have to handle all #VCs that are triggered in
the guest.

> 
> The current plan is to have VMPL1 talk to the VMPL0 vTPM via
> standardised SVSM commands. This requires new TPM drivers for all VMPL1
> components. At least unless someone comes up with a better idea :)

This is probably the best approach. We will have to modify the kernel no
matter what: either recognize the MMIO range being accessed from within
the #VC handler (and then parse the instruction, etc.), or modify/create
a TPM driver that talks to the SVSM (and thus eliminate the exception
path). Either way, an update to the kernel is required.
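
For the first option, the shape of the change would roughly be a range
check like the sketch below somewhere in the #VC MMIO path (all names
here are made up; the real handling is far more involved, with
instruction decode, GHCB setup, and so on):

#define TPM_MMIO_BASE  0xfed40000UL
#define TPM_MMIO_SIZE  0x5000UL          /* TIS localities 0-4 */

/* Hypothetical helpers standing in for the real forwarding paths. */
int svsm_forward_tpm_access(unsigned long gpa, void *insn_ctxt);
int vc_forward_mmio_to_host(unsigned long gpa, void *insn_ctxt);

static int vc_handle_mmio_gpa(unsigned long gpa, void *insn_ctxt)
{
        /* Route TPM accesses to the SVSM instead of the untrusted host. */
        if (gpa >= TPM_MMIO_BASE && gpa < TPM_MMIO_BASE + TPM_MMIO_SIZE)
                return svsm_forward_tpm_access(gpa, insn_ctxt);

        return vc_forward_mmio_to_host(gpa, insn_ctxt);
}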

I'm not an expert in TPMs, but when using an SVSM-enlightened TPM driver,
maybe it even becomes possible to batch up multiple operations, which
would improve overall performance.

We need to start looking at what the interface to the SVSM would look
like: what is required from the SVSM (e.g. an attestation report) and how
to provide that to the VMPL1 guest, what the VMPL1 guest needs to supply
to perform the operation, etc.

To that end we can probably start talking about how we want to advertise 
support for a vTPM in the SVSM. I imagine it will be a new protocol with 
new functions (btw, look for an announcement shortly as the SVSM draft 
specification is now available from our website).
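
For the sake of discussion only, a strawman request block might look
something like this; every number and field here is invented and none
of it comes from the draft spec:

/* Strawman only: a possible shared-memory request block for an SVSM
 * vTPM protocol. */
#include <stdint.h>

#define SVSM_VTPM_PROTOCOL     3    /* example protocol number */
#define SVSM_VTPM_CMD_QUERY    0    /* report vTPM presence/features */
#define SVSM_VTPM_CMD_EXECUTE  1    /* run one TPM command */

struct svsm_vtpm_request {
        uint32_t command;      /* SVSM_VTPM_CMD_* */
        uint32_t flags;
        uint64_t cmd_gpa;      /* guest-physical address of the TPM command */
        uint32_t cmd_len;
        uint64_t resp_gpa;     /* where the SVSM writes the response */
        uint32_t resp_size;    /* size of the response buffer */
} __attribute__((packed));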

Thanks,
Tom

> 
> Regards,
> 


* Re: Secure vTPMs for confidential VMs
  2022-09-21 17:07   ` Tom Lendacky
@ 2022-09-22 21:14     ` Tobin Feldman-Fitzthum
  2022-09-22 22:01       ` [EXTERNAL] " Jon Lange
  0 siblings, 1 reply; 7+ messages in thread
From: Tobin Feldman-Fitzthum @ 2022-09-22 21:14 UTC (permalink / raw)
  To: Tom Lendacky, Jörg Rödel, Dov Murik
  Cc: linux-coco, Daniel P. Berrangé, amd-sev-snp

On 9/21/22 1:07 PM, Tom Lendacky wrote:
> On 9/21/22 03:49, Jörg Rödel wrote:
>>
>> The current plan is to have VMPL1 talk to the VMPL0 vTPM via
>> standardised SVSM commands. This requires new TPM drivers for all VMPL1
>> components. At least unless someone comes up with a better idea :)
> 
> This is probably the best approach. We will have to modify the kernel no
> matter what: either recognize the MMIO range being accessed from within
> the #VC handler (and then parse the instruction, etc.), or modify/create
> a TPM driver that talks to the SVSM (and thus eliminate the exception
> path). Either way, an update to the kernel is required.
> 
Yeah, supporting unenlightened guests is tricky. We were initially
thinking we could use the RMP table to cause an exit on the MMIO writes
and have the HV transfer control to VMPL0, but it's fairly complex (and
slow) to figure out what VMPL1 was about to write.

Still, it would be a big win if we could do this without diverging from
the existing TPM interfaces. So here's a totally orthogonal idea that
you might not like.

What about adding an additional vCPU to the guest? This extra vCPU would
always run at VMPL0 and would simply watch the TPM MMIO region. This
vCPU would also handle the TPM emulation.

I realize this isn't really how AMD have envisioned using VMPL0
(although it could work alongside the current approach), but I think it
has some advantages. First, it seems like it would allow us to use the
existing interfaces without adding much extra complexity.

Second, I think it might be more in line with what we could support on
other platforms. Only SEV-SNP has VMPLs. It seems likely that any
complex reflection-based approach won't work on other platforms. Having
a vTPM VMPL0 vCPU, on the other hand, would be fairly similar to having
a second VM with a shared address space. We might even be able to
implement something similar on SEV(-ES) using the so-called mirror VM.
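
As a very rough sketch of what that dedicated vCPU would do (every name
here is hypothetical, and the command-detection mechanism is exactly the
open question):

#include <stdint.h>

extern volatile uint8_t tpm_page[4096];   /* the GPA 0xfed40000 page */

int  tpm_command_pending(volatile uint8_t *page);   /* hypothetical */
void tpm_emulate_command(volatile uint8_t *page);   /* runs the vTPM logic */
void cpu_relax_or_wait(void);                       /* poll/park policy */

/* Hypothetical main loop of the dedicated VMPL0 vCPU: it owns the TPM
 * MMIO page and services commands that VMPL1 writes into it. */
void vmpl0_vtpm_vcpu(void)
{
        for (;;) {
                if (tpm_command_pending(tpm_page))
                        tpm_emulate_command(tpm_page);  /* write response back */
                else
                        cpu_relax_or_wait();
        }
}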

Now there are also some drawbacks, but I'll let you guys point those out.

-Tobin
> I'm not an expert in TPMs, but when using an SVSM-enlightened TPM
> driver, maybe it even becomes possible to batch up multiple operations,
> which would improve overall performance.
> 
> We need to start looking at what the interface to the SVSM would look
> like: what is required from the SVSM (e.g. an attestation report) and
> how to provide that to the VMPL1 guest, what the VMPL1 guest needs to
> supply to perform the operation, etc.
> 
> To that end we can probably start talking about how we want to advertise
> support for a vTPM in the SVSM. I imagine it will be a new protocol with
> new functions (btw, look for an announcement shortly as the SVSM draft
> specification is now available from our website).
> 
> Thanks,
> Tom
> 
>>
>> Regards,
>>


* RE: [EXTERNAL] Re: Secure vTPMs for confidential VMs
  2022-09-22 21:14     ` Tobin Feldman-Fitzthum
@ 2022-09-22 22:01       ` Jon Lange
  0 siblings, 0 replies; 7+ messages in thread
From: Jon Lange @ 2022-09-22 22:01 UTC (permalink / raw)
  To: Tobin Feldman-Fitzthum, Tom Lendacky, Jörg Rödel, Dov Murik
  Cc: linux-coco, Daniel P. Berrangé, amd-sev-snp

I have to believe that any guest-side implementation can only be successful
if it does not impose requirements on the hosting environment.  I cannot
imagine that hosts in general would be willing to define an additional vCPU
context solely for the sake of executing vTPM logic.  The whole point of the
SVSM interface definition is to permit a variety of SVSM implementations
that can be compatible with a single guest OS image that works everywhere.
Imposing hosting requirements would appear to deviate from that goal.

Purely unenlightened guests surely will run into SNP complications in many
places other than vTPM handling.  Isn't it reasonable to expect that any
guest that knows enough to make SVSM calls is also going to implement a #VC
handler?  I agree that instruction emulation in a #VC handler is cumbersome,
but it is much simpler to do it from within the same VMPL and addressing
context than having to do it across VMPLs.  If the TPM MMIO range were
marked as not-PVALIDATEd then #VC delivery on accesses would be guaranteed.
Contrast that with VMPL-permission-restricted, where the behavior is
dependent on what the host wants to do (which is outside of the SVSM
contract anyway).

Emulation in a #VC handler may be undesirable for other reasons, so an
entirely new, enlightened vTPM contract may be unavoidable (such as the
intercept handling cost concern that was raised earlier), but if a new
contract is the only path forward, then it seems like designing a
specialized contract would be far preferable to hopping across vCPU
contexts.

-Jon


* Re: Secure vTPMs for confidential VMs
  2022-09-21  9:36 ` Daniel P. Berrangé
@ 2022-10-03  7:42   ` Dov Murik
  0 siblings, 0 replies; 7+ messages in thread
From: Dov Murik @ 2022-10-03  7:42 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: linux-coco, Tobin Feldman-Fitzthum, James Bottomley, amd-sev-snp,
	Dr. David Alan Gilbert, Dov Murik



On 21/09/2022 12:36, Daniel P. Berrangé wrote:
> On Tue, Sep 20, 2022 at 11:28:15PM +0300, Dov Murik wrote:
>> Emulating hardware TPMs has the advantage that guest software already
>> uses TPM devices to measure boot sequence components (firmware,
>> bootloader, kernel, initrd) and runtime events (IMA in Linux).  We know
>> that this currently works with vTPMs backed by the VMM implementation,
>> such as QEMU's TPM device connected to swtpm running on the host.
> 
> Leveraging pre-existing support in the guest OS feels pretty compelling.
> It is apparent that there is a lot of maintainer activity across pieces
> of the Linux software/distro stack related to improving support for
> SecureBoot and (v)TPMs in general. Being able to take advantage of this
> would be good for confidential computing, by reducing the burden on
> software/distro maintainers, and giving users technology that they are
> (in theory) at least somewhat familiar with already.
> 

We see from the discussions here that there are several layers at which
we can leverage pre-existing support in the guest OS.  One would be to
emulate the TPM behaviour at the MMIO level, so that writes from the OS
to the MMIO page are captured and handled by the SVSM vTPM.  Another
approach would be to add a new low-level TPM driver to the guest OS,
which would use a proper guest-to-SVSM communication channel.  Maybe
there are other ways.

In both cases, most of the OS code is unchanged: for example, the use of
the TPM in IMA and in trusted keys.



> If we can drive the confidential compute specific bits, including
> the attestation of the confidential hardware, from the guest firmware,
> then it ought to make it easier for guest OS images to be agnostic as
> to whether they're running a non-confidential or confidential VM.
> 
> It becomes more of a deployment decision for the user whether to use a
> confidential VM or not at any launch attempt. For example, they could
> have one image and run it in a non-confidential VM on their internal
> cloud, while using a confidential VM on a public cloud when needing
> to scale their resources.
> 
> 
> This would not be so straightforward with some of the alternative
> proposals for confidential VM disk images. For example another
> proposal has been to have a bootloader like grub embedded in the
> firmware, such that even /boot is encrypted in the disk image and
> gets keys provided for unlock prior to the OS being launched.
> 
> This would make that disk image inherently incompatible with use
> in a non-confidential VM, as well as requiring OS vendors to ship
> even more cloud disk image variants, and to support different
> boot processes in their software stack.
> 
> 
> So overall I'm heavily attracted to re-using existing technology
> to the greatest extent that is practical. It makes confidential
> computing "normal" and will facilitate its uptake.
> 
>> So far we have identified three issues that need further research in
>> order to implement secure vTPMs for confidential VMs: TPM provisioning,
>> implementation in TEEs, and guest enlightenment.
>>
>> * TPM provisioning: The TPM contains sensitive information, such as the
>> EK private key, which should not be accessible to the host or to the
>> guest.  How should such information be delivered to the vTPM when
>> starting a new VM?  If we provision encrypted NVDATA, who has the key to
>> decrypt it?  If we provision it with "classic" TEE secret injection, we
>> need to do it quite early in the VM launch sequence (even before the
>> firmware starts?).
> 
> For it to be transparent to the guest OS, then the vTPM state would
> need to be unlocked prior to the guest OS being launched. This points
> towards the confidential VM firmware triggering an initial call to the
> attestation service, and receiving a key to unlock the vTPM state
> as a response.
> 

Indeed.  Implementing this requires very early communication with the
Guest Owner's Key Broker Service, which will probably need to be routed
through the host with the VMM's help.  We've previously discussed ways
to do this for SEV's pre-launch attestation flow (I think you suggested
using libvirt as part of the proxying solution); it seems that similar
mechanisms would be needed for SNP/TDX as well if we need to communicate
with the Guest Owner before the guest OS starts and sets up its network
stack.

Maybe a vsock would be helpful here to communicate with the host, but
AFAIK OVMF doesn't currently have a virtio-vsock driver... Another
approach is to define a shared page for guest-host communication, and
maybe even use the existing QEMU SEV commands for retrieving the
attestation and injecting the secrets (but with a different underlying
implementation).
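
As a sketch of the shared-page variant (the layout below is invented
purely for illustration, not a proposal for an actual ABI):

/* Illustrative layout only: a single unencrypted (shared) page used as
 * a mailbox between OVMF and the host for the attestation round trip. */
#include <stdint.h>

#define ATTEST_REQ_PENDING  1   /* guest -> host: evidence is ready */
#define ATTEST_RESP_READY   2   /* host -> guest: secret has been written */

struct attestation_mailbox {
        volatile uint32_t state;    /* ATTEST_* handshake flag */
        uint32_t evidence_len;      /* length of the attestation report */
        uint8_t  evidence[2048];    /* report + certs, written by the guest */
        uint32_t secret_len;        /* length of the wrapped secret */
        uint8_t  secret[1024];      /* response from the key broker,
                                       relayed by the VMM */
};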

-Dov

> It is likely that the guest OS owner would want the option to perform
> another attestation later in boot, to validate the broader OS userspace
> boot status.  IOW, the firmware initiated attestation handles aspects
> specific to bootstrapping the confidential VM environment, while an OS
> initiated attestation would handle the generic (pre-existing) use cases
> for OS state validation, familiar to anyone already using (v)TPMs.
> 
>> One suggestion is to use an ephemeral EK, generated at launch by the
>> vTPM.  The system may start to boot using such a TPM, but once we want
>> to unseal secrets (for example, to unlock a LUKS partition), we need
>> something persistent inside the TPM (or re-seal the key for each TPM).
>> Ephemeral TPMs might be a useful first step.
> 
> If the motivation for using vTPMs is to take advantage of pre-existing
> TPM support in the guest OS, then IMHO we should be aiming for the vTPM
> to be on a par with a vTPM from a non-confidential VM / bare metal. An
> ephemeral-only vTPM would lose some (but not all) of the benefit of
> targeting pre-existing TPM support in guests.
> 
> 
>> * Implementation in TEEs: SNP introduced VMPLs, and AMD's linux-SVSM
>> running in VMPL0 can also run vTPM code to handle TPM requests from the
>> guest running in VMPL1.  Such a solution is not applicable as-is to
>> other TEEs (SEV, TDX).  People have suggested running vTPMs in a
>> separate confidential VM and somehow connecting the tenant's guest to
>> the TPM VM, but we'll need a way to secure this communication channel.
> 
> TDX is obviously an important target, but I'm not sure it's worth
> worrying too much about SEV/SEV-ES as that generation is inherently
> limited & flawed compared to current SEV-SNP. The only thing in favour
> of SEV/SEV-ES is broader hardware availability today, but that is a
> time-limited advantage that will erode as SEV-SNP deployment expands.
> 
>> * Guest enlightenment: Guest software currently interacts with the TPM
>> by writing commands to a memory-mapped IO page (GPA 0xfed40000) and
>> reading responses from that page.  We want such writes to trigger the
>> code of our vTPM (for whatever implementation we choose).  Our current
>> early experience with a TPM running in linux-SVSM required adding
>> "exit-guest" calls after writing commands to the IO page, in order to
>> allow the SVSM to run and recognize the incoming command.  Ideally, we'd
>> like a solution that doesn't require modifying all the TPM drivers out
>> there (in Linux, Windows, OVMF, grub, ...).
> 
> As best I could tell looking at the public Ubuntu confidential VM
> image published in Azure, there were no modifications to TPM related
> pieces of the stack. So theoretically it appears possible to achieve,
> but I have no idea how they do so at a technical level.
> 
> 
> With regards,
> Daniel


end of thread

Thread overview: 7+ messages
2022-09-20 20:28 Secure vTPMs for confidential VMs Dov Murik
2022-09-21  8:49 ` Jörg Rödel
2022-09-21 17:07   ` Tom Lendacky
2022-09-22 21:14     ` Tobin Feldman-Fitzthum
2022-09-22 22:01       ` [EXTERNAL] " Jon Lange
2022-09-21  9:36 ` Daniel P. Berrangé
2022-10-03  7:42   ` Dov Murik
