From: Tom Lendacky <thomas.lendacky@amd.com>
To: Jon Lange <jlange@microsoft.com>,
	Christophe de Dinechin <dinechin@redhat.com>,
	James Bottomley <jejb@linux.ibm.com>
Cc: "linux-coco@lists.linux.dev" <linux-coco@lists.linux.dev>,
	"amd-sev-snp@lists.suse.com" <amd-sev-snp@lists.suse.com>
Subject: Re: [EXTERNAL] Re: SVSM Attestation and vTPM specification additions - v0.60
Date: Fri, 13 Jan 2023 10:10:40 -0600
Message-ID: <63eff969-9015-19f4-2906-f3fd9b5c3bc2@amd.com>
In-Reply-To: <LV2PR21MB30847B7EA96416A48C8C8CC6CAFD9@LV2PR21MB3084.namprd21.prod.outlook.com>

On 1/11/23 19:27, Jon Lange wrote:
>> p16: Alignment for RCX is 8 bytes, but alignment for RDX in
>>        SVSM_CORE_CREATE_VCPU is 4K. Is that not the same calling area?
> 
> This is an inconsistency in the spec.  The first draft defined the calling area as 8-byte aligned, but it was redefined as 4K aligned after discussion about using the calling page as a parameter page.  The 4K change was made on page 13 but not here.  Tom, can you fix that, and can you look for other similar places I forgot to clean up before handing the spec off to you?

Yep, I can do that.

> 
>>        Also, what is the use case for moving the calling area?
> 
> It's about flexibility for OS memory management.  The calling area for the BSP is configured by the SVSM using whatever gPA it chooses.  That could be in the middle of EFI code or data, or some other random place.  By the time the OS has exited boot services and has started managing memory, it may find that the calling page is a fixed island in a sea of memory that it would rather use for something else.  Perhaps the OS wants to make sure that the calling pages for all vCPUs are contiguous to minimize the fragmentation of physical memory.  Best to give the OS a mechanism to relocate the calling page wherever it wants for maximum control of memory.
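
For illustration, the relocation itself is a single core-protocol call made by
each vCPU for its own calling area. A minimal, untested sketch, assuming the
draft's SVSM_CORE_REMAP_CA call number and a made-up svsm_issue_call() helper:

#include <stdint.h>

#define SVSM_CORE_PROTOCOL  0ULL
#define SVSM_CORE_REMAP_CA  0ULL    /* call id assumed from the draft */

/* Hypothetical helper: RAX, RCX, RDX in; result (RAX) out. */
extern uint64_t svsm_issue_call(uint64_t rax, uint64_t rcx, uint64_t rdx);

static uint64_t svsm_relocate_calling_area(uint64_t new_ca_gpa)
{
    uint64_t rax = (SVSM_CORE_PROTOCOL << 32) | SVSM_CORE_REMAP_CA;

    /* The new Calling Area gPA must be 4KB aligned per the corrected draft. */
    return svsm_issue_call(rax, new_ca_gpa, 0);
}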
> 
>> p18: As indicated earlier, I'm confused by the 4K alignment requirement for
>>        RDX (Calling Area gPA)
> 
> This is consistent with p13.
> 
>> p20: Evidently, SVSM_CORE_DEPOSIT_MEM is intended to be used when another
>>        service returns 0x4mmm_mmmm. However, in the presence of a flag
>>        indicating "I may no longer need this memory", and given the
>>        limitations "cannot cross a page", I am concerned about possible lack
>>        of forward progress if two vCPUs start parallel operations where one
>>        vCPU says "Hey, I need X terabytes of RAM to do that" (which will then
>>        be split into umpteen DEPOSIT_MEM calls due to page limit), while
>>        another says "Hey guys, I'm done" and sets the MEM_AVAILABLE flag. I
>>        see nothing in the spec that would prevent the second CPU from actively
>>        withdrawing the memory that the first one is trying to deposit. I think
>>        that the spec should clarify a forward-progress logic that prevents
>>        that from happening.
> 
> The expectation is that the OS is smart enough to solve this on its own.  It is certainly the case that the SVSM is designed to be stateless with respect to the core protocol elements, and not to make assumptions about how the OS intends to behave.  If the SVSM knows that it has memory to spare, it should be free to signal that to the OS for the OS to process as it chooses.  It would be pretty simple for the OS to have a gate that counts the number of SVSM operations that are underway at one time so that any one vCPU can avoid returning memory if there is a chance that other vCPUs may end up needing it.  This is best solved using conventions in the OS instead of crafting a delicate multi-threading protocol that the SVSM has to use to signal and arbitrate cross-vCPU contention.
> 
> It wouldn't hurt to make this more clear in the spec, for sure.

I'll update the specification to say that the OS is responsible for 
coordinating requests to DEPOSIT_MEM and WITHDRAW_MEM.
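
As a rough illustration of the kind of OS-side convention Jon describes (an
untested sketch; svsm_issue_call() and svsm_withdraw_mem() are made-up helper
names, not anything defined by the spec), a simple in-flight counter is enough:

/*
 * Untested sketch of an OS-side convention for coordinating DEPOSIT_MEM
 * and WITHDRAW_MEM.  The helper functions are invented for illustration.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

extern uint64_t svsm_issue_call(uint64_t rax, uint64_t rcx, uint64_t rdx); /* hypothetical */
extern void svsm_withdraw_mem(void);                                       /* hypothetical */

static atomic_int svsm_calls_in_flight;

/* Wrap every SVSM call that might end up asking for deposited memory. */
static uint64_t os_svsm_call(uint64_t rax, uint64_t rcx, uint64_t rdx)
{
    uint64_t ret;

    atomic_fetch_add(&svsm_calls_in_flight, 1);
    ret = svsm_issue_call(rax, rcx, rdx);
    atomic_fetch_sub(&svsm_calls_in_flight, 1);
    return ret;
}

/* Only reclaim SVSM memory when no call that might still need it is pending. */
static void os_maybe_withdraw(bool mem_available)
{
    if (mem_available && atomic_load(&svsm_calls_in_flight) == 0)
        svsm_withdraw_mem();
}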

> 
>> p20: It would be interesting to have at least a vague idea of what
>>        operations can actually request more memory, just to set expectations.
> 
> This depends on the architecture of the SVSM.  It may be the case that the SVSM has to allocate a bunch of internal data structures per vCPU in order to manage whatever state is required to do whatever it's going to do, and in that case we would expect CREATE_VCPU to require memory.  Of course, this may not be required of all SVSM implementations.  It could be the case that the vTPM requires a bunch of memory but the SVSM doesn't reserve that memory because it doesn't want to waste memory if the vTPM is never used - in this case we would expect the first call with the vTPM protocol to request memory to instantiate the vTPM data structures.  It could be that the SVSM wants to maintain a sparse array of validated memory state and therefore every PVALIDATE request has to touch that array, which may require additional memory to materialize a new page of the sparse array upon first access - and in that case the PVALIDATE request may request more memory.  It's good to be prepared for additional memory demands for everything if possible.
> 
>> p20: Suppose that CREATE_VCPU requests more memory. It has no obvious
>>        "restart" field, unlike things like PVALIDATE. That means that there
>>        should be a rather strong guarantee that all SVSM calls that can
>>        potentially return 0x4mmm_mmmm either have no effect when they return
>>        such a request, or are idempotent if called again after providing more
>>        memory.
> 
> There is absolutely an expectation that each call can either describe the progress that has been made (like PVALIDATE) or else it must be idempotent.  I agree that this needs to be explicitly called out in the spec.  Tom, can you find a place to include this language?

Will do.
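
For anyone coding against the draft in the meantime, the caller-side pattern
implied here is roughly the following (untested sketch; the helper names and
error macros are invented, and the exact decoding of the 0x4mmm_mmmm result
should be taken from the spec, not from this example):

#include <stdint.h>

extern uint64_t svsm_issue_call(uint64_t rax, uint64_t rcx, uint64_t rdx); /* hypothetical */
extern int svsm_deposit_more_mem(uint64_t request);                        /* hypothetical */

#define SVSM_ERR_CLASS(x)        ((x) & 0xF0000000ULL)
#define SVSM_ERR_REQUEST_MEMORY  0x40000000ULL    /* the 0x4mmm_mmmm class */

static uint64_t svsm_call_with_memory(uint64_t rax, uint64_t rcx, uint64_t rdx)
{
    uint64_t ret;

    for (;;) {
        ret = svsm_issue_call(rax, rcx, rdx);
        if (SVSM_ERR_CLASS(ret) != SVSM_ERR_REQUEST_MEMORY)
            return ret;                  /* success or a real error */

        /* DEPOSIT_MEM as many times as needed to satisfy the request. */
        if (svsm_deposit_more_mem(ret))
            return ret;                  /* could not provide the memory */

        /* Safe to retry: the call made no progress or is idempotent. */
    }
}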

> 
>> p21: The MEM_AVAILABLE flag is set in the calling area of the startup vCPU.
>>        Is there any requirement that WITHDRAW_MEM should only be called from
>>        the startup vCPU, or from only one vCPU at a time?
> 
> There is no such constraint in the specification.  The SVSM is expected to serialize multiple simultaneous WITHDRAW_MEM calls to ensure that the same page doesn't show up in multiple withdrawn lists due to some race condition.  From the point of view of the calling OS, if multiple vCPUs make simultaneous calls to WITHDRAW_MEM, then the lists observed by each of them should be disjoint and consistent, so that each vCPU can return those pages to the local free pool correctly.

I'll remove the startup vCPU reference and add some language to the spec 
about this.

Thanks,
Tom

>> p22: The writable area ends at a page boundary. What could be a valid
>>        rationale for setting the RCX pointer in the middle of a page?
>>        Maybe simpler to require that the pointer be page-aligned than have a
>>        spec that mentions page offset 0xFF8 as a special case...
> 
> The rationale is to permit the parameters to inhabit the same page as the calling area - this reduces the amount of memory that the vCPU has to dedicate to an operation that is underway.  In this case, RCX would have a byte offset of +008 (just past the end of the calling area).  There is nothing magic about the +FF8 value here other than to serve as an example of an invalid call that leaves enough room in a page for the call header but no room for any of the array elements.
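
To make that layout concrete, here is a rough, non-normative sketch of how the
calling-area page can be shared with call parameters (the struct names are
mine; only the offsets follow the draft):

#include <stdint.h>

struct svsm_ca {
    uint8_t call_pending;        /* +0x000: SVSM_CALL_PENDING  */
    uint8_t mem_available;       /* +0x001: SVSM_MEM_AVAILABLE */
    uint8_t reserved[6];         /* +0x002 .. +0x007           */
};

struct svsm_ca_page {
    struct svsm_ca ca;           /* the calling area itself (8 bytes)      */
    uint8_t params[4096 - 8];    /* optional parameter area, i.e. +0x008   */
};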
> 
>> p22: Rationale for not returning incomplete? I'm trying to see how the guest
>>        could efficiently let secondary vCPUs withdraw memory with the protocol
>>        as specified, without a little additional wording regarding either the
>>        memory semantics of the MEM_AVAILABLE flag, and telling if there is
>>        indeed more work to be done by this vCPU using SVSM_ERR_INCOMPLETE.
> 
> The MEM_AVAILABLE flag in each vCPU's calling area is effectively equivalent to indicating that the withdrawal operation is incomplete.  The SVSM sets that flag upon the completion of every call to indicate whether memory was available for withdrawal at the completion of that last call.  If any vCPU observes MEM_AVAILABLE, then it can kick off a loop across as many vCPUs as it wants where each vCPU will call WITHDRAW_MEM (do-style and not while-style) until either a failure occurs or until the local MEM_AVAILABLE is cleared.  It doesn't matter if one vCPU finishes due to MEM_AVAILABLE while the others are still going; if one of the others observes MEM_AVAILABLE and attempts withdrawal after there is nothing left to withdraw, that call will just complete with an empty page list and will clear the local MEM_AVAILABLE to signal completion of the withdrawal process.  Defining SVSM_ERR_INCOMPLETE as a valid error code here doesn't change any of this logic; it only changes what the local vCPU checks (error code vs. local flag) to determine whether there is more work to do at the time it makes that check.
> 
> -Jon
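
To illustrate the do-style loop Jon describes, a per-vCPU routine could look
roughly like this (untested sketch; the page-list type, the helpers and the
assumption that SVSM_SUCCESS is 0 are all mine):

#include <stdint.h>

struct svsm_ca {
    uint8_t call_pending;
    uint8_t mem_available;
    uint8_t reserved[6];
};

struct page_list;                                               /* OS-defined   */
extern uint64_t svsm_core_withdraw_mem(struct page_list *list); /* hypothetical */
extern void free_pages_to_os(struct page_list *list);           /* hypothetical */

static int withdraw_all_on_this_vcpu(volatile struct svsm_ca *ca,
                                     struct page_list *list)
{
    do {
        /* WITHDRAW_MEM fills 'list'; the list may legitimately be empty. */
        if (svsm_core_withdraw_mem(list) != 0)
            return -1;
        free_pages_to_os(list);
        /* MEM_AVAILABLE is refreshed by the SVSM on every call. */
    } while (ca->mem_available);

    return 0;
}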
> 
> -----Original Message-----
> From: Tom Lendacky <thomas.lendacky@amd.com>
> Sent: Wednesday, January 11, 2023 3:01 PM
> To: Christophe de Dinechin <dinechin@redhat.com>; Jon Lange <jlange@microsoft.com>; James Bottomley <jejb@linux.ibm.com>
> Cc: linux-coco@lists.linux.dev; amd-sev-snp@lists.suse.com
> Subject: [EXTERNAL] Re: SVSM Attestation and vTPM specification additions - v0.60
> 
> Adding @Jon Lange and @James Bottomley on the To: line.
> 
> @Jon Lange, please search for your name to answer some questions below.
> @James Bottomley, ditto.
> 
> On 1/11/23 10:39, Christophe de Dinechin wrote:
>> Hi Tom,
>>
>>
>> On 2023-01-10 at 12:54 -06, Tom Lendacky <thomas.lendacky@amd.com> wrote...
>>> Attached is an updated draft version of the SVSM specification with
>>> added support for an attestation protocol and a vTPM protocol as well
>>> as other miscellaneous changes (all identified by change bar). Please
>>> take a look and reply with any feedback you may have.
>>>
>>> Thanks,
>>> Tom
>>
>> Thanks for sharing.
>>
>> This is the first time I actually review that document, so my feedback
>> will be a bit longer than most. Also, I read it at a time when I had
>> lost network access to the Internet, so RTFM wasn't an option...
>>
>> First, the actual errors:
>> p9: Typo VMLP1+ instead of VMPL1+
> 
> Fixed
> 
>> p18: "bit1=0" and "bit1=1": That seems to be bit 2
> 
> Fixed
> 
>>
>> Then the more mundane comments
>>
>> p9: "expected that, but not limited to": the wording sounds strange to
>> me
> 
> Ok, I'll work on re-wording this.
> 
>> p9: Undefined acronym (*) VMSA
> 
> Fixed
> 
>>
>> p10: "certain forms of RMPADJUST": The body of the document seems to
>>        indicate that the required RMPADJUST are performed as part of the
>>        various other services. There is no explicit need (apparently) for a
>>        separate guest-accessible RMPADJUST. Maybe expand a little bit on this
>>        topic, and explain if the guest is supposed to do any RMPADJUST if
>>        running at VMPL1.
> 
> RMPADJUST can be performed when not running at VMPL0, but only RMPADJUST executed at VMPL0 is allowed to turn a page into a VMSA page. I'll re-word this.
> 
>> p10: gPA space of the guest: "of the guest" seems redundant, since
>> it's gPAs
> 
> Fixed
> 
>> p10: lower VMPL -> less privileged VMPL, or explain that "lower" privilege
>>        levels have a higher number.
> 
> Fixed
> 
>> p10: "The initial SVSM memory configuration...": Unclear what "required"
>>        means in that sentence. Is that the core protocol? Can "create VCPU"
>>        request additional memory, for example?
> 
> I'll work on clarifying that.
> 
>>
>> p11: No explanation of how the SVSM knows where the secrets page is.
>>        Probably need an xref to some other doc.
> 
> That is implementation specific. For example, in the Linux SVSM prototype, the address at which to place the secrets page is communicated to the hypervisor via a GUID structure at the end of the SVSM binary. Other implementations may have a different method.
> 
>> p11: Who does the initial construction of the secrets page? What happens if
>>        that other actor does not write zeroes? What attacks can the host
>>        perform on the secrets page if any?
> 
> The SEV firmware constructs the secrets page directly in guest memory.
> Standard SNP practices regarding memory usage and access within the guest apply.
> 
>> p11: undefined acronym VMPCK
> 
> Fixed
> 
>> p11: Why should the guest OS "capture" the SVSM_CAA value? In what sense?
>>        Is it because it can change afterwards, or because the secrets page
>>        becomes unavailable, or for another reason?
> 
> I'll reword this, but it basically means that the OS is expected to use that value (unless it changes it) for the BSP. The OS is free to do what it wants with the memory where the secrets page is located.
> 
>> p11: Byte offset in secrets page fields starts at 0x140. Explain why this is
>>        safe, and how other possible users of the secrets page would avoid
>>        stomping on that area.
> 
> Hmmm... what other users? The SNP spec defines the area as reserved for guest usage (intended for use by the SVSM, but not explicitly stated). The SVSM specifies how it uses it and identifies itself to the guest OS. If the guest OS wants to trash it later, it can do that, but it should save off any information it needs.
> 
> In the Linux SVSM prototype, the secrets page created by the SEV firmware lives in the SVSM memory. The SVSM locates where the OVMF secrets page is supposed to live, copies the secrets page to that location, modifies it, and then begins OVMF execution. So the OS is using a different secrets page than the SVSM.
> 
>>
>> p12: How does the SVSM "terminate"? Is it a regular guest shutdown, or some
>>        other mechanism, or is unspecified? Can the SVSM log anything, and if
>>        so, where would it be found? (I assume that would be host platform
>>        specific, but would still be noteworthy in this specification)
> 
> As it is implementation dependent, I'd rather not put that type of information in the specification.
> 
> In the current Linux SVSM prototype, regular guest shutdown terminates the SVSM, and it logs to a fixed serial port.
> 
>>
>> p13: "Use of the Calling Area is necessary..." effectively, this is a single
>>        byte in the calling area, right? So it's not really "ensuring", maybe
>>        "detect" spurious invokation (1). I think malicious invokations are not
>>        made too difficult by this mechanism, since there are only two states,
>>        and the host still controls when vCPUs run.
> 
> I can change ensure to detect.
> 
>>
>> p14: Undefined acronyms: GHCB, MSR
> 
> Fixed
> 
> 
>> p14: I think the GHCB Specification is an external reference, worth having
>>        italics and a precise reference / link (can't check, no network
>> ATM)
> 
> Fixed
> 
>> p14: "If the host illegally entered the SVSM, this field will be zero": I
>>        believe that the conditions enforcing this should be precisely spelled
>>        out, including for a host with malicious intent. If the mechanism is
>>        indeed robust, then we are not protecting against "spurious" calls but
>>        against "spurious or malicious" calls. Otherwise, "will be zero" should
>>        be replaced by "should normally be zero".
> 
> Standard SNP practices are to be applied here. The field will be zero, as the only way it can be non-zero is from the guest setting the value or the host taking ownership of the CAA page and changing it. However, the act of the hypervisor taking ownership of the page will result in a #VC in the guest when attempting to access the page, at which point the guest knows that the host is being malicious and should terminate.
> 
>> p14: "only after VMSA.RAX and SVSM_CALL_PENDING": this suggests that there
>>        is some kind of possible race condition here. If that is true, then
>>        maybe there is a need to specify memory ordering semantics on the three
>>        relevant fields?
> 
> The race is that once VMSA.EFER.SVME is set to 1, the guest VMSA can be used on a VMRUN instruction. So all updates to the guest VMSA contents must be performed before making the VMSA runnable again.
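
To expand on that earlier answer, the required ordering on the SVSM side is
conceptually just the following (illustrative C-style sketch, not how a real
SVSM is structured; the struct and function names are made up, only EFER.SVME
bit 12 is architectural):

#include <stdint.h>

#define EFER_SVME   (1ULL << 12)

struct guest_vmsa {
    uint64_t rax;        /* call result visible to the guest */
    uint64_t efer;
    /* ... remaining VMSA fields ... */
};

struct svsm_ca {
    uint8_t call_pending;
    uint8_t mem_available;
    uint8_t reserved[6];
};

static void complete_guest_call(struct guest_vmsa *vmsa, struct svsm_ca *ca,
                                uint64_t result)
{
    vmsa->rax = result;      /* publish the result of the call */
    ca->call_pending = 0;    /* the call is no longer pending  */

    /* All updates above must be visible before the VMSA is runnable. */
    __atomic_thread_fence(__ATOMIC_RELEASE);

    vmsa->efer |= EFER_SVME; /* only now may the hypervisor VMRUN this VMSA */
}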
> 
>>
>> p16: PVALIDATE: What happens if a guest is unaware of SVSM and executes
>>        PVALIDATE directly? Is the SVSM supposed to emulate that, or to punish
>>        the guest, or something else? Is that even possible for the SVSM to
>>        trap and emulate relevant VMPL1 instructions? Also see note on page 10
>>        regarding RMPADJUST.
> 
> If the guest is unaware of the SVSM (meaning it is running at VMPL1, for example, but doesn't know it), the PVALIDATE will fail with a #GP and the guest will crash. The current upstream Linux implementation checks to ensure it is running at VMPL0 or else terminates.
> 
>> p16: "It affects the Calling Area for calling vCPU only": This seems slighly
>>        inconsistent with page 9 "other SVSM implementations may choose a
>>        single execution context that services all guest VCPUs".
> 
> Not really. It is used to identify the calling vCPU and thus the VMSA that contains the call information. So, even if a single execution context is being used, the SVSM still needs to have a unique calling area for each vCPU in the guest.
> 
>> p16: Alignment for RCX is 8 bytes, but alignment for RDX in
>>        SVSM_CORE_CREATE_VCPU is 4K. Is that not the same calling area?
>>        Also, what is the use case for moving the calling area?
> 
> As the original author, maybe @Jon Lange can explain these requirements.
> 
>>
>> p17: The table links appear in a strange colour (some kind of weird cyan).
>>        It seems clickable too, so I suspect a hyperlink, but since the link is
>>        always on the same page, it's not super-useful.
> 
> I can remove that.
> 
>>
>> p18: Is the CAA seen as assigned to the SVSM? I believe the answer is
>> no
> 
> Correct, it is not.
> 
>> p18: For increased readability, I suggest naming the error codes for the
>>        SVSM_CORE_PVALIDATE call, and putting them in a table.
>>        Also, why tag the specific errors right after the architectural
>>        PVALIDATE errors? In case of architectural extension, you'd always get
>>        0x8000_1011, which is not super helpful. Instead, you could reserve
>>        0x8000_1xxx for protocol errors, and put PVALIDATE errors at
>>        0x9nnn_nnnn, which probably gives you enough room at least for the
>>        coming 6 months.
> 
> I'll look into this, no promises, though :)
> 
>> p18: "VMPL of the VCPU making the request": wouldn't it make sense to add a
>>        VMPL field in the PVALIDATE operation descriptor, so that the guest
>>        could control less-privileged VMPLs?
> 
> After the PVALIDATE, the guest can use the RMPADJUST instruction for adjusting page permissions for less-privileged VMPLs.
> 
>> p18: As indicated earlier, I'm confused by the 4K alignment requirement for
>>        RDX (Calling Area gPA)
> 
> Let's wait for the response from @Jon Lange.
> 
>> p18: What is the APIC ID of the vCPU used for? I see no mention in the
>>        explanatory text. Is that an internal index, or does the SVSM
>>        implementation need it for some reason (I was not clever enough to
>>        imagine why)
> 
> It's needed to ensure we are using the correct VMSA and CAA for a specific vCPU. Since the BSP creates all of the OS APs, were it not to supply the APIC ID when making the CREATE_VCPU call, the SVSM would have no way to associate the VMSA and CAA being created with a vCPU.
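
As an illustration of the register layout involved (the call number and the
helper are assumptions on my part, not normative):

#include <stdint.h>

#define SVSM_CORE_PROTOCOL    0ULL
#define SVSM_CORE_CREATE_VCPU 2ULL    /* call id assumed from the draft */

/* Hypothetical helper: RAX, RCX, RDX, R8 in; result (RAX) out. */
extern uint64_t svsm_issue_call4(uint64_t rax, uint64_t rcx,
                                 uint64_t rdx, uint64_t r8);

static uint64_t svsm_create_ap(uint32_t apic_id, uint64_t vmsa_gpa,
                               uint64_t ca_gpa)
{
    uint64_t rax = (SVSM_CORE_PROTOCOL << 32) | SVSM_CORE_CREATE_VCPU;

    /* RCX = VMSA gPA, RDX = Calling Area gPA (4KB aligned), R8 = APIC ID */
    return svsm_issue_call4(rax, vmsa_gpa, ca_gpa, apic_id);
}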
> 
>>
>> p19: What specification defines FAIL_INUSE? Add xref?
> 
> That's the FAIL_INUSE error code, 3, of the RMPUPDATE instruction. The
> PSMASH, PVALIDATE and RMPADJUST instructions share the same return codes,
> where applicable.
> 
>>
>> p20: Evidently, SVSM_CORE_DEPOSIT_MEM is intended to be used when another
>>        service returns 0x4mmm_mmmm. However, in the presence of a flag
>>        indicating "I may no longer need this memory", and given the
>>        limitations "cannot cross a page", I am concerned about possible lack
>>        of forward progress if two vCPUs start parallel operations where one
>>        vCPU says "Hey, I need X terabytes of RAM to do that" (which will then
>>        be split into umpteen DEPOSIT_MEM calls due to page limit), while
>>        another says "Hey guys, I'm done" and sets the MEM_AVAILABLE flag. I
>>        see nothing in the spec that would prevent the second CPU from actively
>>        withdrawing the memory that the first one is trying to deposit. I think
>>        that the spec should clarify a forward-progress logic that prevents
>>        that from happening.
>> p20: It would be interesting to have at least a vague idea of what
>>        operations can actually request more memory, just to set expectations.
>> p20: Suppose that CREATE_VCPU requests more memory. It has no obvious
>>        "restart" field, unlike things like PVALIDATE. That means that there
>>        should be a rather strong guarantee that all SVSM calls that can
>>        potentially return 0x4mmm_mmmm either have no effect when they return
>>        such a request, or are idempotent if called again after providing more
>>        memory.
>>
>> p21: The MEM_AVAILABLE flag is set in the calling area of the startup vCPU.
>>        Is there any requirement that WITHDRAW_MEM should only be called from
>>        the startup vCPU, or from only one vCPU at a time?
>>
>> p22: The writable area ends at a page boundary. What could be a valid
>>        rationale for setting the RCX pointer in the middle of a page?
>>        Maybe simpler to require that the pointer be page-aligned than have a
>>        spec that mentions page offset 0xFF8 as a special case...
>> p22: Rationale for not returning incomplete? I'm trying to see how the guest
>>        could efficiently let secondary vCPUs withdraw memory with the protocol
>>        as specified, without a little additional wording regarding either the
>>        memory semantics of the MEM_AVAILABLE flag, and telling if there is
>>        indeed more work to be done by this vCPU using SVSM_ERR_INCOMPLETE.
> 
> Let's see what @Jon Lange's view is on the p20 - p22 comments.
> 
>>
>> p23: Table 9 is missing RDX, R8 and R9 rows (as input for configuration)
>> p23: Table 9 title should be "configuration or query"
>> p23: RCX result is the same for query and configuration, or is that for
>>        query only?
> 
> I'll update the table to make everything more clear.
> 
>> p23: At the specified RIP. If Bit 3 is not set in the configuration case,
>>        does it return after the VMGEXIT?
> 
> Yes, it will return to the current value of RIP in the VMSA.
> 
>>
>> p29: Table 14: How can RAX be used as command ordinal and command response
>>        size if it's already used for call identifier / result value?
> 
> Typo, that should have been 0x000 (since the column is Byte Offset).
> 
>>
>> p30: Why would you need 4 bytes for the TPM command ordinal? This causes the
>>        TPM command size to be misaligned. What about 2 bytes for command
>>        ordinal, one byte for locality, and one reserved byte?
> 
> This is following the MS simulator protocol for the TPM 2.0 reference
> implementation as proposed by @James Bottomley. He also submitted an RFC
> TPM platform driver implementation to the linux-coco list following this
> protocol. It would be best to involve him in the discussion.
> 
>>
>>
>> (*) Like most readers of this document, I know what it means, but since you
>> defined VMM just above, or VM the page before, I interpreted your intent to
>> be that every acronym should be defined on first use.
>>
>> (1) As I am writing this, I have a doubt about what happens if the host writes to
>>       the secrets page, and I can't verify easily without a network.
> 
> The host can't write to the secrets page without first performing an
> RMPUPDATE to make it a hypervisor page. If it then were to modify the page
> and then re-assign it to the guest, the guest would receive a #VC when
> accessing the page because it is no longer validated, allowing the guest
> to detect the condition and terminate.
> 
> Thanks for the feedback!
> 
> Tom
> 
>>
>>>
>>> [2. application/pdf; 58019-Secure_VM_Service_Module_Specification.pdf]...
>>
>>
>> --
>> Cheers,
>> Christophe de Dinechin (https://c3d.github.io/
>> Theory of Incomplete Measurements (https://c3d.github.io/TIM
>>
