From: Sean Christopherson <seanjc@google.com>
To: Marc Zyngier <maz@kernel.org>,
	Huacai Chen <chenhuacai@kernel.org>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	Paul Mackerras <paulus@ozlabs.org>,
	Anup Patel <anup.patel@wdc.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Janosch Frank <frankja@linux.ibm.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Cc: James Morse <james.morse@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Atish Patra <atish.patra@wdc.com>,
	David Hildenbrand <david@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>,
	Sean Christopherson <seanjc@google.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org,
	kvm@vger.kernel.org, kvm-ppc@vger.kernel.org,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, Ben Gardon <bgardon@google.com>,
	"Maciej S . Szmigiero" <maciej.szmigiero@oracle.com>
Subject: [PATCH v5.5 30/30] KVM: Dynamically allocate "new" memslots from the get-go
Date: Thu,  4 Nov 2021 00:25:31 +0000
Message-ID: <20211104002531.1176691-31-seanjc@google.com>
In-Reply-To: <20211104002531.1176691-1-seanjc@google.com>

Allocate the "new" memslot for !DELETE memslot updates straight away
instead of filling an intermediate on-stack object and forcing
kvm_set_memslot() to juggle the allocation and do weird things like reuse
the old memslot object in MOVE.

In the MOVE case, this results in an "extra" memslot allocation, as
both the "new" slot and the "invalid" slot end up being allocated, but
the extra allocation is temporary and small, and MOVE is a relatively
rare memslot operation.

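To illustrate, a condensed sketch of the resulting allocation scheme
(names match the diff below; error handling and locking are trimmed):

	/* __kvm_set_memory_region(): "new" is allocated up front... */
	new = kzalloc(sizeof(*new), GFP_KERNEL_ACCOUNT);

	/*
	 * kvm_set_memslot(): ...and DELETE/MOVE also allocate a temporary
	 * stand-in for the old slot, hence the "extra" allocation on MOVE.
	 */
	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
		invalid_slot = kzalloc(sizeof(*invalid_slot), GFP_KERNEL_ACCOUNT);
		kvm_invalidate_memslot(kvm, old, invalid_slot);
	}

	/* The stand-in slot is freed once the update commits or rolls back. */
	kfree(invalid_slot);
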
Regarding MOVE, replace the open-coded management of the gfn tree with
a call to kvm_replace_memslot(), which already handles the case where
new->base_gfn != old->base_gfn.  This is possible because the "new"
memslot data no longer needs to be copied after erasing the old memslot
from the gfn tree.  Using kvm_replace_memslot(), and more specifically
not reusing the old memslot, means the MOVE case now does hva tree and
hash list updates, but that's a small price to pay for simplifying the
code and making MOVE align with all the other flavors of updates.  The
"extra" updates are firmly in the noise from a performance perspective,
e.g. the "move (in)active area" selftests show a (very, very) slight
improvement.

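For reference, the gfn tree handling in kvm_replace_memslot() (added
earlier in the series) that makes the open-coded surgery unnecessary
looks roughly like the following, paraphrased rather than quoted
verbatim:

	if (old && old->base_gfn == new->base_gfn) {
		/* Unchanged gfn, swap the rb-tree node in place. */
		kvm_replace_gfn_node(slots, old, new);
	} else {
		/* The sort key changed, erase the old node and (re)insert. */
		if (old)
			kvm_erase_gfn_node(slots, old);
		kvm_insert_gfn_node(slots, new);
	}
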
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 virt/kvm/kvm_main.c | 178 +++++++++++++++++++-------------------------
 1 file changed, 77 insertions(+), 101 deletions(-)

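As a reviewing aid, the resulting ownership rules for the two
allocations, summarized from the diff below:

	/*
	 * "new":          allocated by __kvm_set_memory_region() for all
	 *                 !DELETE changes; freed there if kvm_set_memslot()
	 *                 fails, otherwise it persists as the live memslot.
	 *
	 * "invalid_slot": allocated by kvm_set_memslot() for DELETE and
	 *                 MOVE only; always freed by kvm_set_memslot(),
	 *                 either when rolling back a prepare failure or
	 *                 after the update commits.
	 */
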
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5cc0b50faa8c..b413082c081d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1488,23 +1488,25 @@ static int kvm_prepare_memory_region(struct kvm *kvm,
 	 * new and KVM isn't using a ring buffer, allocate and initialize a
 	 * new bitmap.
 	 */
-	if (!(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
-		new->dirty_bitmap = NULL;
-	else if (old->dirty_bitmap)
-		new->dirty_bitmap = old->dirty_bitmap;
-	else if (!kvm->dirty_ring_size) {
-		r = kvm_alloc_dirty_bitmap(new);
-		if (r)
-			return r;
+	if (change != KVM_MR_DELETE) {
+		if (!(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
+			new->dirty_bitmap = NULL;
+		else if (old && old->dirty_bitmap)
+			new->dirty_bitmap = old->dirty_bitmap;
+		else if (!kvm->dirty_ring_size) {
+			r = kvm_alloc_dirty_bitmap(new);
+			if (r)
+				return r;
 
-		if (kvm_dirty_log_manual_protect_and_init_set(kvm))
-			bitmap_set(new->dirty_bitmap, 0, new->npages);
+			if (kvm_dirty_log_manual_protect_and_init_set(kvm))
+				bitmap_set(new->dirty_bitmap, 0, new->npages);
+		}
 	}
 
 	r = kvm_arch_prepare_memory_region(kvm, old, new, change);
 
 	/* Free the bitmap on failure if it was allocated above. */
-	if (r && new->dirty_bitmap && !old->dirty_bitmap)
+	if (r && new && new->dirty_bitmap && (!old || !old->dirty_bitmap))
 		kvm_destroy_dirty_bitmap(new);
 
 	return r;
@@ -1591,16 +1593,16 @@ static void kvm_copy_memslot(struct kvm_memory_slot *dest,
 
 static void kvm_invalidate_memslot(struct kvm *kvm,
 				   struct kvm_memory_slot *old,
-				   struct kvm_memory_slot *working_slot)
+				   struct kvm_memory_slot *invalid_slot)
 {
 	/*
 	 * Mark the current slot INVALID.  As with all memslot modifications,
 	 * this must be done on an unreachable slot to avoid modifying the
 	 * current slot in the active tree.
 	 */
-	kvm_copy_memslot(working_slot, old);
-	working_slot->flags |= KVM_MEMSLOT_INVALID;
-	kvm_replace_memslot(kvm, old, working_slot);
+	kvm_copy_memslot(invalid_slot, old);
+	invalid_slot->flags |= KVM_MEMSLOT_INVALID;
+	kvm_replace_memslot(kvm, old, invalid_slot);
 
 	/*
 	 * Activate the slot that is now marked INVALID, but don't propagate
@@ -1627,20 +1629,15 @@ static void kvm_invalidate_memslot(struct kvm *kvm,
 	 * above.  Writers are required to retrieve memslots *after* acquiring
 	 * slots_arch_lock, thus the active slot's data is guaranteed to be fresh.
 	 */
-	old->arch = working_slot->arch;
+	old->arch = invalid_slot->arch;
 }
 
 static void kvm_create_memslot(struct kvm *kvm,
-			       const struct kvm_memory_slot *new,
-			       struct kvm_memory_slot *working)
+			       struct kvm_memory_slot *new)
 {
-	/*
-	 * Add the new memslot to the inactive set as a copy of the
-	 * new memslot data provided by userspace.
-	 */
-	kvm_copy_memslot(working, new);
-	kvm_replace_memslot(kvm, NULL, working);
-	kvm_activate_memslot(kvm, NULL, working);
+	/* Add the new memslot to the inactive set and activate. */
+	kvm_replace_memslot(kvm, NULL, new);
+	kvm_activate_memslot(kvm, NULL, new);
 }
 
 static void kvm_delete_memslot(struct kvm *kvm,
@@ -1649,65 +1646,36 @@ static void kvm_delete_memslot(struct kvm *kvm,
 {
 	/*
 	 * Remove the old memslot (in the inactive memslots) by passing NULL as
-	 * the "new" slot.
+	 * the "new" slot, and for the invalid version in the active slots.
 	 */
 	kvm_replace_memslot(kvm, old, NULL);
-
-	/* And do the same for the invalid version in the active slot. */
 	kvm_activate_memslot(kvm, invalid_slot, NULL);
-
-	/* Free the invalid slot, the caller will clean up the old slot. */
-	kfree(invalid_slot);
 }
 
-static struct kvm_memory_slot *kvm_move_memslot(struct kvm *kvm,
-						struct kvm_memory_slot *old,
-						const struct kvm_memory_slot *new,
-						struct kvm_memory_slot *invalid_slot)
+static void kvm_move_memslot(struct kvm *kvm,
+			     struct kvm_memory_slot *old,
+			     struct kvm_memory_slot *new,
+			     struct kvm_memory_slot *invalid_slot)
 {
-	struct kvm_memslots *slots = kvm_get_inactive_memslots(kvm, old->as_id);
-
 	/*
-	 * The memslot's gfn is changing, remove it from the inactive tree, it
-	 * will be re-added with its updated gfn. Because its range is
-	 * changing, an in-place replace is not possible.
+	 * Replace the old memslot in the inactive slots, and then swap slots
+	 * and replace the current INVALID with the new as well.
 	 */
-	kvm_erase_gfn_node(slots, old);
-
-	/*
-	 * The old slot is now fully disconnected, reuse its memory for the
-	 * persistent copy of "new".
-	 */
-	kvm_copy_memslot(old, new);
-
-	/* Re-add to the gfn tree with the updated gfn */
-	kvm_insert_gfn_node(slots, old);
-
-	/* Replace the current INVALID slot with the updated memslot. */
-	kvm_activate_memslot(kvm, invalid_slot, old);
-
-	/*
-	 * Clear the INVALID flag so that the invalid_slot is now a perfect
-	 * copy of the old slot.  Return it for cleanup in the caller.
-	 */
-	WARN_ON_ONCE(!(invalid_slot->flags & KVM_MEMSLOT_INVALID));
-	invalid_slot->flags &= ~KVM_MEMSLOT_INVALID;
-	return invalid_slot;
+	kvm_replace_memslot(kvm, old, new);
+	kvm_activate_memslot(kvm, invalid_slot, new);
 }
 
 static void kvm_update_flags_memslot(struct kvm *kvm,
 				     struct kvm_memory_slot *old,
-				     const struct kvm_memory_slot *new,
-				     struct kvm_memory_slot *working_slot)
+				     struct kvm_memory_slot *new)
 {
 	/*
 	 * Similar to the MOVE case, but the slot doesn't need to be zapped as
 	 * an intermediate step. Instead, the old memslot is simply replaced
 	 * with a new, updated copy in both memslot sets.
 	 */
-	kvm_copy_memslot(working_slot, new);
-	kvm_replace_memslot(kvm, old, working_slot);
-	kvm_activate_memslot(kvm, old, working_slot);
+	kvm_replace_memslot(kvm, old, new);
+	kvm_activate_memslot(kvm, old, new);
 }
 
 static int kvm_set_memslot(struct kvm *kvm,
@@ -1715,19 +1683,9 @@ static int kvm_set_memslot(struct kvm *kvm,
 			   struct kvm_memory_slot *new,
 			   enum kvm_mr_change change)
 {
-	struct kvm_memory_slot *working;
+	struct kvm_memory_slot *invalid_slot;
 	int r;
 
-	/*
-	 * Modifications are done on an unreachable slot.  Any changes are then
-	 * (eventually) propagated to both the active and inactive slots.  This
-	 * allocation would ideally be on-demand (in helpers), but is done here
-	 * to avoid having to handle failure after kvm_prepare_memory_region().
-	 */
-	working = kzalloc(sizeof(*working), GFP_KERNEL_ACCOUNT);
-	if (!working)
-		return -ENOMEM;
-
 	/*
 	 * Released in kvm_swap_active_memslots.
 	 *
@@ -1752,9 +1710,19 @@ static int kvm_set_memslot(struct kvm *kvm,
 	 * (and without a lock), a window would exist between effecting the
 	 * delete/move and committing the changes in arch code where KVM or a
 	 * guest could access a non-existent memslot.
+	 *
+	 * Modifications are done on a temporary, unreachable slot.  The old
+	 * slot needs to be preserved in case a later step fails and the
+	 * invalidation needs to be reverted.
 	 */
-	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
-		kvm_invalidate_memslot(kvm, old, working);
+	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
+		invalid_slot = kzalloc(sizeof(*invalid_slot), GFP_KERNEL_ACCOUNT);
+		if (!invalid_slot) {
+			mutex_unlock(&kvm->slots_arch_lock);
+			return -ENOMEM;
+		}
+		kvm_invalidate_memslot(kvm, old, invalid_slot);
+	}
 
 	r = kvm_prepare_memory_region(kvm, old, new, change);
 	if (r) {
@@ -1764,11 +1732,12 @@ static int kvm_set_memslot(struct kvm *kvm,
 		 * in the inactive slots.  Changing the active memslots also
 		 * releases slots_arch_lock.
 		 */
-		if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
-			kvm_activate_memslot(kvm, working, old);
-		else
+		if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
+			kvm_activate_memslot(kvm, invalid_slot, old);
+			kfree(invalid_slot);
+		} else {
 			mutex_unlock(&kvm->slots_arch_lock);
-		kfree(working);
+		}
 		return r;
 	}
 
@@ -1780,16 +1749,20 @@ static int kvm_set_memslot(struct kvm *kvm,
 	 * old slot is detached but otherwise preserved.
 	 */
 	if (change == KVM_MR_CREATE)
-		kvm_create_memslot(kvm, new, working);
+		kvm_create_memslot(kvm, new);
 	else if (change == KVM_MR_DELETE)
-		kvm_delete_memslot(kvm, old, working);
+		kvm_delete_memslot(kvm, old, invalid_slot);
 	else if (change == KVM_MR_MOVE)
-		old = kvm_move_memslot(kvm, old, new, working);
+		kvm_move_memslot(kvm, old, new, invalid_slot);
 	else if (change == KVM_MR_FLAGS_ONLY)
-		kvm_update_flags_memslot(kvm, old, new, working);
+		kvm_update_flags_memslot(kvm, old, new);
 	else
 		BUG();
 
+	/* Free the temporary INVALID slot used for DELETE and MOVE. */
+	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
+		kfree(invalid_slot);
+
 	/*
 	 * No need to refresh new->arch, changes after dropping slots_arch_lock
 	 * will directly hit the final, active memslot.  Architectures are
@@ -1834,8 +1807,7 @@ static bool kvm_check_memslot_overlap(struct kvm_memslots *slots, int id,
 int __kvm_set_memory_region(struct kvm *kvm,
 			    const struct kvm_userspace_memory_region *mem)
 {
-	struct kvm_memory_slot *old;
-	struct kvm_memory_slot new;
+	struct kvm_memory_slot *old, *new;
 	struct kvm_memslots *slots;
 	enum kvm_mr_change change;
 	unsigned long npages;
@@ -1884,11 +1856,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		if (WARN_ON_ONCE(kvm->nr_memslot_pages < old->npages))
 			return -EIO;
 
-		memset(&new, 0, sizeof(new));
-		new.id = id;
-		new.as_id = as_id;
-
-		return kvm_set_memslot(kvm, old, &new, KVM_MR_DELETE);
+		return kvm_set_memslot(kvm, old, NULL, KVM_MR_DELETE);
 	}
 
 	base_gfn = (mem->guest_phys_addr >> PAGE_SHIFT);
@@ -1921,14 +1889,22 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	    kvm_check_memslot_overlap(slots, id, base_gfn, base_gfn + npages))
 		return -EEXIST;
 
-	new.as_id = as_id;
-	new.id = id;
-	new.base_gfn = base_gfn;
-	new.npages = npages;
-	new.flags = mem->flags;
-	new.userspace_addr = mem->userspace_addr;
+	/* Allocate a slot that will persist in the memslots. */
+	new = kzalloc(sizeof(*new), GFP_KERNEL_ACCOUNT);
+	if (!new)
+		return -ENOMEM;
 
-	return kvm_set_memslot(kvm, old, &new, change);
+	new->as_id = as_id;
+	new->id = id;
+	new->base_gfn = base_gfn;
+	new->npages = npages;
+	new->flags = mem->flags;
+	new->userspace_addr = mem->userspace_addr;
+
+	r = kvm_set_memslot(kvm, old, new, change);
+	if (r)
+		kfree(new);
+	return r;
 }
 EXPORT_SYMBOL_GPL(__kvm_set_memory_region);
 
-- 
2.33.1.1089.g2158813163f-goog


