From: Sean Christopherson <seanjc@google.com>
To: David Matlack <dmatlack@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Marc Zyngier <maz@kernel.org>,
	Huacai Chen <chenhuacai@kernel.org>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	Anup Patel <anup@brainfault.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Andrew Jones <drjones@redhat.com>,
	Ben Gardon <bgardon@google.com>, Peter Xu <peterx@redhat.com>,
	maciej.szmigiero@oracle.com,
	"moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" 
	<kvmarm@lists.cs.columbia.edu>,
	"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" 
	<linux-mips@vger.kernel.org>,
	"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" 
	<kvm@vger.kernel.org>,
	"open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" 
	<kvm-riscv@lists.infradead.org>,
	Peter Feiner <pfeiner@google.com>
Subject: Re: [PATCH v4 16/20] KVM: x86/mmu: Extend make_huge_page_split_spte() for the shadow MMU
Date: Mon, 9 May 2022 16:22:22 +0000
Message-ID: <Ynk/vnHQsXnZkMGT@google.com>
In-Reply-To: <20220422210546.458943-17-dmatlack@google.com>

On Fri, Apr 22, 2022, David Matlack wrote:
> Currently make_huge_page_split_spte() assumes execute permissions can be
> granted to any 4K SPTE when splitting huge pages. This is true for the
> TDP MMU but is not necessarily true for the shadow MMU, since KVM may be
> shadowing a non-executable huge page.
> 
> To fix this, pass in the child shadow page into which the huge page
> will be split, and derive the execution permission from the shadow
> page's role. This is correct because huge pages are always split with
> a direct shadow page, and thus the shadow page's role contains the
> correct access permissions.
> 
> No functional change intended.
> 
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
>  arch/x86/kvm/mmu/spte.c    | 13 +++++++------
>  arch/x86/kvm/mmu/spte.h    |  2 +-
>  arch/x86/kvm/mmu/tdp_mmu.c |  2 +-
>  3 files changed, 9 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index 4739b53c9734..9db98fbeee61 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -215,10 +215,11 @@ static u64 make_spte_executable(u64 spte)
>   * This is used during huge page splitting to build the SPTEs that make up the
>   * new page table.
>   */
> -u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index)
> +u64 make_huge_page_split_spte(u64 huge_spte, struct kvm_mmu_page *sp, int index)

Rather than pass in @sp, what about passing in @role?  Then the need for
exec_allowed and child_level goes away (for whatever reason I reacted to the
"allowed" part of exec_allowed).

E.g.

---
 arch/x86/kvm/mmu/spte.c    | 11 +++++------
 arch/x86/kvm/mmu/spte.h    |  3 ++-
 arch/x86/kvm/mmu/tdp_mmu.c |  2 +-
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 9db98fbeee61..1b766e381727 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -215,10 +215,9 @@ static u64 make_spte_executable(u64 spte)
  * This is used during huge page splitting to build the SPTEs that make up the
  * new page table.
  */
-u64 make_huge_page_split_spte(u64 huge_spte, struct kvm_mmu_page *sp, int index)
+u64 make_huge_page_split_spte(u64 huge_spte, union kvm_mmu_page_role role,
+			      int index)
 {
-	bool exec_allowed = sp->role.access & ACC_EXEC_MASK;
-	int child_level = sp->role.level;
 	u64 child_spte;

 	if (WARN_ON_ONCE(!is_shadow_present_pte(huge_spte)))
@@ -234,9 +233,9 @@ u64 make_huge_page_split_spte(u64 huge_spte, struct kvm_mmu_page *sp, int index)
 	 * split. So we just have to OR in the offset to the page at the next
 	 * lower level for the given index.
 	 */
-	child_spte |= (index * KVM_PAGES_PER_HPAGE(child_level)) << PAGE_SHIFT;
+	child_spte |= (index * KVM_PAGES_PER_HPAGE(role.level)) << PAGE_SHIFT;

-	if (child_level == PG_LEVEL_4K) {
+	if (role.level == PG_LEVEL_4K) {
 		child_spte &= ~PT_PAGE_SIZE_MASK;

 		/*
@@ -244,7 +243,7 @@ u64 make_huge_page_split_spte(u64 huge_spte, struct kvm_mmu_page *sp, int index)
 		 * the page executable as the NX hugepage mitigation no longer
 		 * applies.
 		 */
-		if (exec_allowed && is_nx_huge_page_enabled())
+		if ((role.access & ACC_EXEC_MASK) && is_nx_huge_page_enabled())
 			child_spte = make_spte_executable(child_spte);
 	}

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 921ea77f1b5e..80d36d0d9def 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -415,7 +415,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	       unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
 	       u64 old_spte, bool prefetch, bool can_unsync,
 	       bool host_writable, u64 *new_spte);
-u64 make_huge_page_split_spte(u64 huge_spte, struct kvm_mmu_page *sp, int index);
+u64 make_huge_page_split_spte(u64 huge_spte, union kvm_mmu_page_role role,
+			      int index);
 u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled);
 u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access);
 u64 mark_spte_for_access_track(u64 spte);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 110a34ca41c2..c4c4bad69f38 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1469,7 +1469,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
 	 * not been linked in yet and thus is not reachable from any other CPU.
 	 */
 	for (i = 0; i < PT64_ENT_PER_PAGE; i++)
-		sp->spt[i] = make_huge_page_split_spte(huge_spte, sp, i);
+		sp->spt[i] = make_huge_page_split_spte(huge_spte, sp->role, i);

 	/*
 	 * Replace the huge spte with a pointer to the populated lower level

base-commit: 721828e2397ab854b536de3ea10a9bc7962091a9
--
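
(Editor's note on the offset computation the diff leaves untouched:
KVM_PAGES_PER_HPAGE(level) is the number of 4K pages covered by one
entry at @level, so (index * KVM_PAGES_PER_HPAGE(role.level)) <<
PAGE_SHIFT is simply the byte offset of child @index within the region
the huge page mapped. A minimal userspace sketch of the arithmetic, with
the kernel's x86 constants reproduced by hand, so treat the values as
illustrative rather than authoritative:

	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PG_LEVEL_4K	1
	#define PG_LEVEL_2M	2
	/* 4K pages per entry at @level: 512^(level - 1) on x86 */
	#define KVM_PAGES_PER_HPAGE(level)	(1ULL << (((level) - 1) * 9))

	int main(void)
	{
		/* 2M -> 4K split: each index advances the mapping by 4KiB. */
		for (int i = 0; i < 3; i++)
			printf("4K child %d: +0x%llx\n", i,
			       (i * KVM_PAGES_PER_HPAGE(PG_LEVEL_4K)) << PAGE_SHIFT);

		/* 1G -> 2M split: each index advances the mapping by 2MiB. */
		for (int i = 0; i < 3; i++)
			printf("2M child %d: +0x%llx\n", i,
			       (i * KVM_PAGES_PER_HPAGE(PG_LEVEL_2M)) << PAGE_SHIFT);
		return 0;
	}

This prints offsets of 0x0/0x1000/0x2000 for the 4K children and
0x0/0x200000/0x400000 for the 2M children. The huge SPTE's PFN is
huge-page aligned, so its low PFN bits are zero and the child offset can
simply be OR'd in.)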

