From: Brijesh Singh <brijesh.singh@amd.com>
To: Borislav Petkov <bp@alien8.de>
Cc: brijesh.singh@amd.com, x86@kernel.org,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-efi@vger.kernel.org, platform-driver-x86@vger.kernel.org,
	linux-coco@lists.linux.dev, linux-mm@kvack.org,
	linux-crypto@vger.kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Joerg Roedel <jroedel@suse.de>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Ard Biesheuvel <ardb@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>,
	Andy Lutomirski <luto@kernel.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Sergio Lopez <slp@redhat.com>, Peter Gonda <pgonda@google.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>,
	David Rientjes <rientjes@google.com>,
	tony.luck@intel.com, npmccallum@redhat.com
Subject: Re: [PATCH Part1 RFC v3 11/22] x86/sev: Add helper for validating pages in early enc attribute changes
Date: Mon, 14 Jun 2021 07:45:11 -0500	[thread overview]
Message-ID: <d0759889-94df-73b0-4285-fa064eb187cd@amd.com> (raw)
In-Reply-To: <YMI02+k2zk9eazjQ@zn.tnic>


On 6/10/21 10:50 AM, Borislav Petkov wrote:
> On Wed, Jun 02, 2021 at 09:04:05AM -0500, Brijesh Singh wrote:
>> @@ -65,6 +65,12 @@ extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
>>  /* RMP page size */
>>  #define RMP_PG_SIZE_4K			0
>>  
>> +/* Memory opertion for snp_prep_memory() */
>> +enum snp_mem_op {
>> +	MEMORY_PRIVATE,
>> +	MEMORY_SHARED
> See below.
>
>> +};
>> +
>>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>>  extern struct static_key_false sev_es_enable_key;
>>  extern void __sev_es_ist_enter(struct pt_regs *regs);
>> @@ -103,6 +109,11 @@ static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
>>  
>>  	return rc;
>>  }
>> +void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
>> +		unsigned int npages);
>> +void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
>> +		unsigned int npages);
> Align arguments on the opening brace.

Noted.


>
>> +void __init snp_prep_memory(unsigned long paddr, unsigned int sz, int op);
>>  #else
>>  static inline void sev_es_ist_enter(struct pt_regs *regs) { }
>>  static inline void sev_es_ist_exit(void) { }
>> @@ -110,6 +121,15 @@ static inline int sev_es_setup_ap_jump_table(struct real_mode_header *rmh) { ret
>>  static inline void sev_es_nmi_complete(void) { }
>>  static inline int sev_es_efi_map_ghcbs(pgd_t *pgd) { return 0; }
>>  static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate) { return 0; }
>> +static inline void __init
>> +early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned int npages)
> Put those { } at the end of the line:
>
> early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned int npages) { }
>
> no need for separate lines. Ditto below.

Noted.


>
>> +{
>> +}
>> +static inline void __init
>> +early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned int npages)
>> +{
>> +}
>> +static inline void __init snp_prep_memory(unsigned long paddr, unsigned int sz, int op) { }
>>  #endif
>>  
>>  #endif
>> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
>> index 455c09a9b2c2..6e9b45bb38ab 100644
>> --- a/arch/x86/kernel/sev.c
>> +++ b/arch/x86/kernel/sev.c
>> @@ -532,6 +532,111 @@ static u64 get_jump_table_addr(void)
>>  	return ret;
>>  }
>>  
>> +static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool validate)
>> +{
>> +	unsigned long vaddr_end;
>> +	int rc;
>> +
>> +	vaddr = vaddr & PAGE_MASK;
>> +	vaddr_end = vaddr + (npages << PAGE_SHIFT);
>> +
>> +	while (vaddr < vaddr_end) {
>> +		rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
>> +		if (WARN(rc, "Failed to validate address 0x%lx ret %d", vaddr, rc))
>> +			sev_es_terminate(1, GHCB_TERM_PVALIDATE);
> 					^^
>
> I guess that 1 should be a define too, if we have to be correct:
>
> 			sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
>
> or so. Ditto for all other calls of this.

Sure, I will define a macro for it.
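
Something along these lines, as a rough sketch only (using the name you
suggested; the value 1 matches what the call passes today):

	/* Reason-code set for the Linux-specific termination reasons. */
	#define SEV_TERM_SET_LINUX	1

	if (WARN(rc, "Failed to validate address 0x%lx ret %d", vaddr, rc))
		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);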


>
>> +
>> +		vaddr = vaddr + PAGE_SIZE;
>> +	}
>> +}
>> +
>> +static void __init early_set_page_state(unsigned long paddr, unsigned int npages, int op)
>> +{
>> +	unsigned long paddr_end;
>> +	u64 val;
>> +
>> +	paddr = paddr & PAGE_MASK;
>> +	paddr_end = paddr + (npages << PAGE_SHIFT);
>> +
>> +	while (paddr < paddr_end) {
>> +		/*
>> +		 * Use the MSR protocol because this function can be called before the GHCB
>> +		 * is established.
>> +		 */
>> +		sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
>> +		VMGEXIT();
>> +
>> +		val = sev_es_rd_ghcb_msr();
>> +
>> +		if (GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP)
> From a previous review:
>
> Does that one need a warning too or am I being too paranoid?

IMO, there is no need to add a warning. This case would only happen if
there is a hypervisor bug or the hypervisor does not follow the GHCB
specification. I followed the existing SEV-ES VMGEXIT handling, which
does not warn when the hypervisor returns a wrong response code; we
simply terminate the guest.
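
In other words (sketch only, reusing the set-ID name from above), the
protocol-violation path goes straight to termination with no WARN:

	val = sev_es_rd_ghcb_msr();

	/* Hypervisor did not follow the GHCB MSR protocol: just terminate. */
	if (GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP)
		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);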


>
>> +			goto e_term;
>> +
>> +		if (WARN(GHCB_MSR_PSC_RESP_VAL(val),
>> +			 "Failed to change page state to '%s' paddr 0x%lx error 0x%llx\n",
>> +			 op == SNP_PAGE_STATE_PRIVATE ? "private" : "shared",
>> +			 paddr, GHCB_MSR_PSC_RESP_VAL(val)))
>> +			goto e_term;
>> +
>> +		paddr = paddr + PAGE_SIZE;
>> +	}
>> +
>> +	return;
>> +
>> +e_term:
>> +	sev_es_terminate(1, GHCB_TERM_PSC);
>> +}
>> +
>> +void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
>> +					 unsigned int npages)
>> +{
>> +	if (!sev_feature_enabled(SEV_SNP))
>> +		return;
>> +
>> +	 /* Ask hypervisor to add the memory pages in RMP table as a 'private'. */
> 	    Ask the hypervisor to mark the memory pages as private in the RMP table.

Noted.


>
>> +	early_set_page_state(paddr, npages, SNP_PAGE_STATE_PRIVATE);
>> +
>> +	/* Validate the memory pages after they've been added in the RMP table. */
>> +	pvalidate_pages(vaddr, npages, 1);
>> +}
>> +
>> +void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
>> +					unsigned int npages)
>> +{
>> +	if (!sev_feature_enabled(SEV_SNP))
>> +		return;
>> +
>> +	/*
>> +	 * Invalidate the memory pages before they are marked shared in the
>> +	 * RMP table.
>> +	 */
>> +	pvalidate_pages(vaddr, npages, 0);
>> +
>> +	 /* Ask hypervisor to make the memory pages shared in the RMP table. */
> 			      mark

Noted.


>> +	early_set_page_state(paddr, npages, SNP_PAGE_STATE_SHARED);
>> +}
>> +
>> +void __init snp_prep_memory(unsigned long paddr, unsigned int sz, int op)
>> +{
>> +	unsigned long vaddr, npages;
>> +
>> +	vaddr = (unsigned long)__va(paddr);
>> +	npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
>> +
>> +	switch (op) {
>> +	case MEMORY_PRIVATE: {
>> +		early_snp_set_memory_private(vaddr, paddr, npages);
>> +		return;
>> +	}
>> +	case MEMORY_SHARED: {
>> +		early_snp_set_memory_shared(vaddr, paddr, npages);
>> +		return;
>> +	}
>> +	default:
>> +		break;
>> +	}
>> +
>> +	WARN(1, "invalid memory op %d\n", op);
> A lot easier, diff ontop of your patch:

Thanks, I will apply it.

I did think about reusing the VMGEXIT-defined macros
SNP_PAGE_STATE_{PRIVATE, SHARED}, but I was not sure whether you would be
okay with that. Additionally, both the function name and the macro name
will now include "SNP". The call will look like this:

snp_prep_memory(paddr, sz, SNP_PAGE_STATE_PRIVATE)
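
For example, a caller would then read roughly like this (sketch only;
paddr/sz are placeholders, the point is the third argument reusing the
page state values):

	/* Add the range to the RMP table as private and validate it. */
	snp_prep_memory(paddr, sz, SNP_PAGE_STATE_PRIVATE);

	/* ... use the memory ... */

	/* Invalidate it and flip it back to shared in the RMP table. */
	snp_prep_memory(paddr, sz, SNP_PAGE_STATE_SHARED);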

>
> ---
> diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
> index 7c2cb5300e43..2ad4b5ab3f6c 100644
> --- a/arch/x86/include/asm/sev.h
> +++ b/arch/x86/include/asm/sev.h
> @@ -65,12 +65,6 @@ extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
>  /* RMP page size */
>  #define RMP_PG_SIZE_4K			0
>  
> -/* Memory opertion for snp_prep_memory() */
> -enum snp_mem_op {
> -	MEMORY_PRIVATE,
> -	MEMORY_SHARED
> -};
> -
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>  extern struct static_key_false sev_es_enable_key;
>  extern void __sev_es_ist_enter(struct pt_regs *regs);
> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> index 2a5dce42af35..991d7964cee9 100644
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -662,20 +662,13 @@ void __init snp_prep_memory(unsigned long paddr, unsigned int sz, int op)
>  	vaddr = (unsigned long)__va(paddr);
>  	npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
>  
> -	switch (op) {
> -	case MEMORY_PRIVATE: {
> +	if (op == SNP_PAGE_STATE_PRIVATE)
>  		early_snp_set_memory_private(vaddr, paddr, npages);
> -		return;
> -	}
> -	case MEMORY_SHARED: {
> +	else if (op == SNP_PAGE_STATE_SHARED)
>  		early_snp_set_memory_shared(vaddr, paddr, npages);
> -		return;
> +	else {
> +		WARN(1, "invalid memory page op %d\n", op);
>  	}
> -	default:
> -		break;
> -	}
> -
> -	WARN(1, "invalid memory op %d\n", op);
>  }
>  
>  int sev_es_setup_ap_jump_table(struct real_mode_header *rmh)
> ---
>
>>  static char sme_early_buffer[PAGE_SIZE] __initdata __aligned(PAGE_SIZE);
>>  
>> +/*
>> + * When SNP is active, changes the page state from private to shared before
> s/changes/change/

Noted.


>
>> + * copying the data from the source to destination and restore after the copy.
>> + * This is required because the source address is mapped as decrypted by the
>> + * caller of the routine.
>> + */
>> +static inline void __init snp_memcpy(void *dst, void *src, size_t sz,
>> +				     unsigned long paddr, bool decrypt)
>> +{
>> +	unsigned long npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
>> +
>> +	if (!sev_feature_enabled(SEV_SNP) || !decrypt) {
>> +		memcpy(dst, src, sz);
>> +		return;
>> +	}
>> +
>> +	/*
>> +	 * If the paddr needs to be accessed decrypted, mark the page
> What do you mean "If" - this is the SNP version of memcpy. Just say:
>
> 	/*
> 	 * With SNP, the page address needs to be ...
> 	 */
>
>> +	 * shared in the RMP table before copying it.
>> +	 */
>> +	early_snp_set_memory_shared((unsigned long)__va(paddr), paddr, npages);
>> +
>> +	memcpy(dst, src, sz);
>> +
>> +	/* Restore the page state after the memcpy. */
>> +	early_snp_set_memory_private((unsigned long)__va(paddr), paddr, npages);
>> +}
>> +
>>  /*
>>   * This routine does not change the underlying encryption setting of the
>>   * page(s) that map this memory. It assumes that eventually the memory is
>> @@ -96,8 +125,8 @@ static void __init __sme_early_enc_dec(resource_size_t paddr,
>>  		 * Use a temporary buffer, of cache-line multiple size, to
>>  		 * avoid data corruption as documented in the APM.
>>  		 */
>> -		memcpy(sme_early_buffer, src, len);
>> -		memcpy(dst, sme_early_buffer, len);
>> +		snp_memcpy(sme_early_buffer, src, len, paddr, enc);
>> +		snp_memcpy(dst, sme_early_buffer, len, paddr, !enc);
>>  
>>  		early_memunmap(dst, len);
>>  		early_memunmap(src, len);
>> @@ -277,9 +306,23 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
>>  	else
>>  		sme_early_decrypt(pa, size);
>>  
>> +	/*
>> +	 * If page is getting mapped decrypted in the page table, then the page state
>> +	 * change in the RMP table must happen before the page table updates.
>> +	 */
>> +	if (!enc)
>> +		early_snp_set_memory_shared((unsigned long)__va(pa), pa, 1);
> Merge the two branches:

Noted.


>
> 	/* Encrypt/decrypt the contents in-place */
>         if (enc) {
>                 sme_early_encrypt(pa, size);
>         } else {
>                 sme_early_decrypt(pa, size);
>
>                 /*
>                  * On SNP, the page state change in the RMP table must happen
>                  * before the page table updates.
>                  */
>                 early_snp_set_memory_shared((unsigned long)__va(pa), pa, 1);
>         }

- Brijesh


