kvmarm.lists.cs.columbia.edu archive mirror
 help / color / mirror / Atom feed
* [PATCH v2 00/15] KVM: Dynamically size memslot arrays
@ 2019-10-22  0:35 Sean Christopherson
  2019-10-22  0:35 ` [PATCH v2 01/15] KVM: Reinstall old memslots if arch preparation fails Sean Christopherson
                   ` (17 more replies)
  0 siblings, 18 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

The end goal of this series is to dynamically size the memslot array so
that KVM allocates memory based on the number of memslots in use, as
opposed to unconditionally allocating memory for the maximum number of
memslots.  On x86, each memslot consumes 88 bytes, and so with 2 address
spaces of 512 memslots, each VM consumes ~90k bytes for the memslots.
E.g. given a VM that uses a total of 30 memslots, dynamic sizing reduces
the memory footprint from 90k to ~2.6k bytes.

The changes required to support dynamic sizing are relatively small,
e.g. are essentially contained in patches 12/13 and 13/13.  Patches 1-11
clean up the memslot code, which has gotten quite crusy, especially
__kvm_set_memory_region().  The clean up is likely not strictly necessary
to switch to dynamic sizing, but I didn't have a remotely reasonable
level of confidence in the correctness of the dynamic sizing without first
doing the clean up.

Testing, especially non-x86 platforms, would be greatly appreciated.  The
non-x86 changes are for all intents and purposes untested, e.g. I compile
tested pieces of the code by copying them into x86, but that's it.  In
theory, the vast majority of the functional changes are arch agnostic, in
theory...

v2:
  - Split "Drop kvm_arch_create_memslot()" into three patches to move
    minor functional changes to standalone patches [Janosch].
  - Rebase to latest kvm/queue (f0574a1cea5b, "KVM: x86: fix ...")
  - Collect an Acked-by and a Reviewed-by

Sean Christopherson (15):
  KVM: Reinstall old memslots if arch preparation fails
  KVM: Don't free new memslot if allocation of said memslot fails
  KVM: PPC: Move memslot memory allocation into prepare_memory_region()
  KVM: x86: Allocate memslot resources during prepare_memory_region()
  KVM: Drop kvm_arch_create_memslot()
  KVM: Explicitly free allocated-but-unused dirty bitmap
  KVM: Refactor error handling for setting memory region
  KVM: Move setting of memslot into helper routine
  KVM: Move memslot deletion to helper function
  KVM: Simplify kvm_free_memslot() and all its descendents
  KVM: Clean up local variable usage in __kvm_set_memory_region()
  KVM: Provide common implementation for generic dirty log functions
  KVM: Ensure validity of memslot with respect to kvm_get_dirty_log()
  KVM: Terminate memslot walks via used_slots
  KVM: Dynamically size memslot array based on number of used slots

 arch/mips/include/asm/kvm_host.h      |   2 +-
 arch/mips/kvm/mips.c                  |  68 +---
 arch/powerpc/include/asm/kvm_ppc.h    |  14 +-
 arch/powerpc/kvm/book3s.c             |  22 +-
 arch/powerpc/kvm/book3s_hv.c          |  36 +-
 arch/powerpc/kvm/book3s_pr.c          |  20 +-
 arch/powerpc/kvm/booke.c              |  17 +-
 arch/powerpc/kvm/powerpc.c            |  13 +-
 arch/s390/include/asm/kvm_host.h      |   2 +-
 arch/s390/kvm/kvm-s390.c              |  21 +-
 arch/x86/include/asm/kvm_page_track.h |   3 +-
 arch/x86/kvm/page_track.c             |  15 +-
 arch/x86/kvm/x86.c                    | 100 ++---
 include/linux/kvm_host.h              |  48 +--
 virt/kvm/arm/arm.c                    |  47 +--
 virt/kvm/arm/mmu.c                    |  18 +-
 virt/kvm/kvm_main.c                   | 546 ++++++++++++++++----------
 17 files changed, 467 insertions(+), 525 deletions(-)

-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH v2 01/15] KVM: Reinstall old memslots if arch preparation fails
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-23  9:29   ` Christoffer Dall
  2019-10-22  0:35 ` [PATCH v2 02/15] KVM: Don't free new memslot if allocation of said memslot fails Sean Christopherson
                   ` (16 subsequent siblings)
  17 siblings, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Reinstall the old memslots if preparing the new memory region fails
after invalidating a to-be-{re}moved memslot.

Remove the superfluous 'old_memslots' variable so that it's somewhat
clear that the error handling path needs to free the unused memslots,
not simply the 'old' memslots.

Fixes: bc6678a33d9b9 ("KVM: introduce kvm->srcu and convert kvm_set_memory_region to SRCU update")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 virt/kvm/kvm_main.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 67ef3f2e19e8..9afd706dc038 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -924,7 +924,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	unsigned long npages;
 	struct kvm_memory_slot *slot;
 	struct kvm_memory_slot old, new;
-	struct kvm_memslots *slots = NULL, *old_memslots;
+	struct kvm_memslots *slots;
 	int as_id, id;
 	enum kvm_mr_change change;
 
@@ -1032,7 +1032,13 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		slot = id_to_memslot(slots, id);
 		slot->flags |= KVM_MEMSLOT_INVALID;
 
-		old_memslots = install_new_memslots(kvm, as_id, slots);
+		/*
+		 * We can re-use the old memslots, the only difference from the
+		 * newly installed memslots is the invalid flag, which will get
+		 * dropped by update_memslots anyway.  We'll also revert to the
+		 * old memslots if preparing the new memory region fails.
+		 */
+		slots = install_new_memslots(kvm, as_id, slots);
 
 		/* From this point no new shadow pages pointing to a deleted,
 		 * or moved, memslot will be created.
@@ -1042,13 +1048,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		 *	- kvm_is_visible_gfn (mmu_check_roots)
 		 */
 		kvm_arch_flush_shadow_memslot(kvm, slot);
-
-		/*
-		 * We can re-use the old_memslots from above, the only difference
-		 * from the currently installed memslots is the invalid flag.  This
-		 * will get overwritten by update_memslots anyway.
-		 */
-		slots = old_memslots;
 	}
 
 	r = kvm_arch_prepare_memory_region(kvm, &new, mem, change);
@@ -1062,15 +1061,17 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	}
 
 	update_memslots(slots, &new, change);
-	old_memslots = install_new_memslots(kvm, as_id, slots);
+	slots = install_new_memslots(kvm, as_id, slots);
 
 	kvm_arch_commit_memory_region(kvm, mem, &old, &new, change);
 
 	kvm_free_memslot(kvm, &old, &new);
-	kvfree(old_memslots);
+	kvfree(slots);
 	return 0;
 
 out_slots:
+	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
+		slots = install_new_memslots(kvm, as_id, slots);
 	kvfree(slots);
 out_free:
 	kvm_free_memslot(kvm, &new, &old);
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 02/15] KVM: Don't free new memslot if allocation of said memslot fails
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
  2019-10-22  0:35 ` [PATCH v2 01/15] KVM: Reinstall old memslots if arch preparation fails Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-23  9:29   ` Christoffer Dall
  2019-10-22  0:35 ` [PATCH v2 03/15] KVM: PPC: Move memslot memory allocation into prepare_memory_region() Sean Christopherson
                   ` (15 subsequent siblings)
  17 siblings, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

The two implementations of kvm_arch_create_memslot() in x86 and PPC are
both good citizens and free up all local resources if creation fails.
Return immediately (via a superfluous goto) instead of calling
kvm_free_memslot().

Note, the call to kvm_free_memslot() is effectively an expensive nop in
this case as there are no resources to be freed.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 virt/kvm/kvm_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 9afd706dc038..2cb38b2148cb 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1014,7 +1014,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		new.userspace_addr = mem->userspace_addr;
 
 		if (kvm_arch_create_memslot(kvm, &new, npages))
-			goto out_free;
+			goto out;
 	}
 
 	/* Allocate page dirty bitmap if needed */
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 03/15] KVM: PPC: Move memslot memory allocation into prepare_memory_region()
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
  2019-10-22  0:35 ` [PATCH v2 01/15] KVM: Reinstall old memslots if arch preparation fails Sean Christopherson
  2019-10-22  0:35 ` [PATCH v2 02/15] KVM: Don't free new memslot if allocation of said memslot fails Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-24 11:55   ` kbuild test robot
  2019-10-22  0:35 ` [PATCH v2 04/15] KVM: x86: Allocate memslot resources during prepare_memory_region() Sean Christopherson
                   ` (14 subsequent siblings)
  17 siblings, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Allocate the rmap array during kvm_arch_prepare_memory_region() to pave
the way for removing kvm_arch_create_memslot() altogether.  Moving PPC's
memory allocation only changes the order of kernel memory allocations
between PPC and common KVM code.

No functional change intended.

Acked-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/powerpc/include/asm/kvm_ppc.h |  8 ++------
 arch/powerpc/kvm/book3s.c          | 12 ++++--------
 arch/powerpc/kvm/book3s_hv.c       | 25 ++++++++++++-------------
 arch/powerpc/kvm/book3s_pr.c       | 11 ++---------
 arch/powerpc/kvm/booke.c           |  9 ++-------
 arch/powerpc/kvm/powerpc.c         |  4 ++--
 6 files changed, 24 insertions(+), 45 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index ee62776e5433..495901ae0070 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -203,9 +203,6 @@ extern void kvmppc_core_destroy_vm(struct kvm *kvm);
 extern void kvmppc_core_free_memslot(struct kvm *kvm,
 				     struct kvm_memory_slot *free,
 				     struct kvm_memory_slot *dont);
-extern int kvmppc_core_create_memslot(struct kvm *kvm,
-				      struct kvm_memory_slot *slot,
-				      unsigned long npages);
 extern int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 				struct kvm_memory_slot *memslot,
 				const struct kvm_userspace_memory_region *mem);
@@ -280,7 +277,8 @@ struct kvmppc_ops {
 	void (*flush_memslot)(struct kvm *kvm, struct kvm_memory_slot *memslot);
 	int (*prepare_memory_region)(struct kvm *kvm,
 				     struct kvm_memory_slot *memslot,
-				     const struct kvm_userspace_memory_region *mem);
+				     const struct kvm_userspace_memory_region *mem,
+				     enum kvm_mr_change change);
 	void (*commit_memory_region)(struct kvm *kvm,
 				     const struct kvm_userspace_memory_region *mem,
 				     const struct kvm_memory_slot *old,
@@ -294,8 +292,6 @@ struct kvmppc_ops {
 	void (*mmu_destroy)(struct kvm_vcpu *vcpu);
 	void (*free_memslot)(struct kvm_memory_slot *free,
 			     struct kvm_memory_slot *dont);
-	int (*create_memslot)(struct kvm_memory_slot *slot,
-			      unsigned long npages);
 	int (*init_vm)(struct kvm *kvm);
 	void (*destroy_vm)(struct kvm *kvm);
 	int (*get_smmu_info)(struct kvm *kvm, struct kvm_ppc_smmu_info *info);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index ec2547cc5ecb..111b39a5250d 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -840,12 +840,6 @@ void kvmppc_core_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 	kvm->arch.kvm_ops->free_memslot(free, dont);
 }
 
-int kvmppc_core_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
-			       unsigned long npages)
-{
-	return kvm->arch.kvm_ops->create_memslot(slot, npages);
-}
-
 void kvmppc_core_flush_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
 	kvm->arch.kvm_ops->flush_memslot(kvm, memslot);
@@ -853,9 +847,11 @@ void kvmppc_core_flush_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
 
 int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem)
+				const struct kvm_userspace_memory_region *mem,
+				enum kvm_mr_change change)
 {
-	return kvm->arch.kvm_ops->prepare_memory_region(kvm, memslot, mem);
+	return kvm->arch.kvm_ops->prepare_memory_region(kvm, memslot, mem,
+							change);
 }
 
 void kvmppc_core_commit_memory_region(struct kvm *kvm,
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 709cf1fd4cf4..4e5664379329 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4461,20 +4461,20 @@ static void kvmppc_core_free_memslot_hv(struct kvm_memory_slot *free,
 	}
 }
 
-static int kvmppc_core_create_memslot_hv(struct kvm_memory_slot *slot,
-					 unsigned long npages)
-{
-	slot->arch.rmap = vzalloc(array_size(npages, sizeof(*slot->arch.rmap)));
-	if (!slot->arch.rmap)
-		return -ENOMEM;
-
-	return 0;
-}
-
 static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
-					struct kvm_memory_slot *memslot,
-					const struct kvm_userspace_memory_region *mem)
+					struct kvm_memory_slot *slot,
+					const struct kvm_userspace_memory_region *mem,
+					enum kvm_mr_change change)
 {
+	unsigned long npages = mem->memory_size >> PAGE_SHIFT;
+
+	if (change == KVM_MR_CREATE) {
+		slot->arch.rmap = vzalloc(array_size(npages,
+					  sizeof(*slot->arch.rmap)));
+		if (!slot->arch.rmap)
+			return -ENOMEM;
+	}
+
 	return 0;
 }
 
@@ -5416,7 +5416,6 @@ static struct kvmppc_ops kvm_ops_hv = {
 	.set_spte_hva = kvm_set_spte_hva_hv,
 	.mmu_destroy  = kvmppc_mmu_destroy_hv,
 	.free_memslot = kvmppc_core_free_memslot_hv,
-	.create_memslot = kvmppc_core_create_memslot_hv,
 	.init_vm =  kvmppc_core_init_vm_hv,
 	.destroy_vm = kvmppc_core_destroy_vm_hv,
 	.get_smmu_info = kvm_vm_ioctl_get_smmu_info_hv,
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index cc65af8fe6f7..5fceb1da5fde 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1903,7 +1903,8 @@ static void kvmppc_core_flush_memslot_pr(struct kvm *kvm,
 
 static int kvmppc_core_prepare_memory_region_pr(struct kvm *kvm,
 					struct kvm_memory_slot *memslot,
-					const struct kvm_userspace_memory_region *mem)
+					const struct kvm_userspace_memory_region *mem,
+					enum kvm_mr_change change)
 {
 	return 0;
 }
@@ -1923,13 +1924,6 @@ static void kvmppc_core_free_memslot_pr(struct kvm_memory_slot *free,
 	return;
 }
 
-static int kvmppc_core_create_memslot_pr(struct kvm_memory_slot *slot,
-					 unsigned long npages)
-{
-	return 0;
-}
-
-
 #ifdef CONFIG_PPC64
 static int kvm_vm_ioctl_get_smmu_info_pr(struct kvm *kvm,
 					 struct kvm_ppc_smmu_info *info)
@@ -2073,7 +2067,6 @@ static struct kvmppc_ops kvm_ops_pr = {
 	.set_spte_hva = kvm_set_spte_hva_pr,
 	.mmu_destroy  = kvmppc_mmu_destroy_pr,
 	.free_memslot = kvmppc_core_free_memslot_pr,
-	.create_memslot = kvmppc_core_create_memslot_pr,
 	.init_vm = kvmppc_core_init_vm_pr,
 	.destroy_vm = kvmppc_core_destroy_vm_pr,
 	.get_smmu_info = kvm_vm_ioctl_get_smmu_info_pr,
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index be9a45874194..cf2845e147c5 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1806,15 +1806,10 @@ void kvmppc_core_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 {
 }
 
-int kvmppc_core_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
-			       unsigned long npages)
-{
-	return 0;
-}
-
 int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 				      struct kvm_memory_slot *memslot,
-				      const struct kvm_userspace_memory_region *mem)
+				      const struct kvm_userspace_memory_region *mem,
+				      enum kvm_mr_change change)
 {
 	return 0;
 }
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 3a77bb643452..95b4a766b6cb 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -690,7 +690,7 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 			    unsigned long npages)
 {
-	return kvmppc_core_create_memslot(kvm, slot, npages);
+	return 0;
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
@@ -698,7 +698,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   const struct kvm_userspace_memory_region *mem,
 				   enum kvm_mr_change change)
 {
-	return kvmppc_core_prepare_memory_region(kvm, memslot, mem);
+	return kvmppc_core_prepare_memory_region(kvm, memslot, mem, change);
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 04/15] KVM: x86: Allocate memslot resources during prepare_memory_region()
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (2 preceding siblings ...)
  2019-10-22  0:35 ` [PATCH v2 03/15] KVM: PPC: Move memslot memory allocation into prepare_memory_region() Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-22  0:35 ` [PATCH v2 05/15] KVM: Drop kvm_arch_create_memslot() Sean Christopherson
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Allocate the various metadata structures associated with a memslot
during during kvm_arch_prepare_memory_region(), which paves the way for
removing kvm_arch_create_memslot() altogether.  Moving x86's memory
allocation only changes the order of kernel memory allocations between
x86 and common KVM code.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/x86.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f26f8be4e621..fa80f07ba501 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9601,6 +9601,12 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 
 int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 			    unsigned long npages)
+{
+	return 0;
+}
+
+static int kvm_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+			      unsigned long npages)
 {
 	int i;
 
@@ -9678,6 +9684,9 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				enum kvm_mr_change change)
 {
+	if (change == KVM_MR_CREATE)
+		return kvm_create_memslot(kvm, memslot,
+					  mem->memory_size >> PAGE_SHIFT);
 	return 0;
 }
 
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 05/15] KVM: Drop kvm_arch_create_memslot()
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (3 preceding siblings ...)
  2019-10-22  0:35 ` [PATCH v2 04/15] KVM: x86: Allocate memslot resources during prepare_memory_region() Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-22  0:35 ` [PATCH v2 06/15] KVM: Explicitly free allocated-but-unused dirty bitmap Sean Christopherson
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Remove kvm_arch_create_memslot() now that all arch implementations are
effectively nops.  Removing kvm_arch_create_memslot() eliminates the
possibility for arch specific code to allocate memory prior to setting
a memslot, which sets the stage for simplifying kvm_free_memslot().

Cc: Janosch Frank <frankja@linux.ibm.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/mips/kvm/mips.c       |  6 ------
 arch/powerpc/kvm/powerpc.c |  6 ------
 arch/s390/kvm/kvm-s390.c   |  6 ------
 arch/x86/kvm/x86.c         |  6 ------
 include/linux/kvm_host.h   |  2 --
 virt/kvm/arm/mmu.c         |  6 ------
 virt/kvm/kvm_main.c        | 21 +++++++--------------
 7 files changed, 7 insertions(+), 46 deletions(-)

diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 1109924560d8..713e5465edb0 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -188,12 +188,6 @@ long kvm_arch_dev_ioctl(struct file *filp, unsigned int ioctl,
 	return -ENOIOCTLCMD;
 }
 
-int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
-			    unsigned long npages)
-{
-	return 0;
-}
-
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
 {
 	/* Flush whole GPA */
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 95b4a766b6cb..bb427543f8a3 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -687,12 +687,6 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 	kvmppc_core_free_memslot(kvm, free, dont);
 }
 
-int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
-			    unsigned long npages)
-{
-	return 0;
-}
-
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot,
 				   const struct kvm_userspace_memory_region *mem,
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index d047e846e1b9..21f61dae7db1 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4494,12 +4494,6 @@ vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
 	return VM_FAULT_SIGBUS;
 }
 
-int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
-			    unsigned long npages)
-{
-	return 0;
-}
-
 /* Section: memory related */
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fa80f07ba501..eb812bb18599 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9599,12 +9599,6 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 	kvm_page_track_free_memslot(free, dont);
 }
 
-int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
-			    unsigned long npages)
-{
-	return 0;
-}
-
 static int kvm_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 			      unsigned long npages)
 {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 719fc3e15ea4..23a2373931cb 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -679,8 +679,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
 			    const struct kvm_userspace_memory_region *mem);
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 			   struct kvm_memory_slot *dont);
-int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
-			    unsigned long npages);
 void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen);
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				struct kvm_memory_slot *memslot,
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 38b4c910b6c3..f264de85f648 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -2358,12 +2358,6 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 {
 }
 
-int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
-			    unsigned long npages)
-{
-	return 0;
-}
-
 void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
 {
 }
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2cb38b2148cb..6615c01b14a3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -965,12 +965,13 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	new.base_gfn = base_gfn;
 	new.npages = npages;
 	new.flags = mem->flags;
+	new.userspace_addr = mem->userspace_addr;
 
 	if (npages) {
 		if (!old.npages)
 			change = KVM_MR_CREATE;
 		else { /* Modify an existing slot. */
-			if ((mem->userspace_addr != old.userspace_addr) ||
+			if ((new.userspace_addr != old.userspace_addr) ||
 			    (npages != old.npages) ||
 			    ((new.flags ^ old.flags) & KVM_MEM_READONLY))
 				goto out;
@@ -1005,22 +1006,14 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		}
 	}
 
-	/* Free page dirty bitmap if unneeded */
+	r = -ENOMEM;
+
+	/* Allocate/free page dirty bitmap as needed */
 	if (!(new.flags & KVM_MEM_LOG_DIRTY_PAGES))
 		new.dirty_bitmap = NULL;
-
-	r = -ENOMEM;
-	if (change == KVM_MR_CREATE) {
-		new.userspace_addr = mem->userspace_addr;
-
-		if (kvm_arch_create_memslot(kvm, &new, npages))
-			goto out;
-	}
-
-	/* Allocate page dirty bitmap if needed */
-	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
+	else if (!new.dirty_bitmap) {
 		if (kvm_create_dirty_bitmap(&new) < 0)
-			goto out_free;
+			goto out;
 	}
 
 	slots = kvzalloc(sizeof(struct kvm_memslots), GFP_KERNEL_ACCOUNT);
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 06/15] KVM: Explicitly free allocated-but-unused dirty bitmap
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (4 preceding siblings ...)
  2019-10-22  0:35 ` [PATCH v2 05/15] KVM: Drop kvm_arch_create_memslot() Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-22  0:35 ` [PATCH v2 07/15] KVM: Refactor error handling for setting memory region Sean Christopherson
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Explicitly free an allocated-but-unused dirty bitmap instead of relying
on kvm_free_memslot() if an error occurs in __kvm_set_memory_region().
There is no longer a need to abuse kvm_free_memslot() to free arch
specific resources as arch specific code is now called only after the
common flow is guaranteed to succeed.  Arch code can still fail, but
it's responsible for its own cleanup in that case.

Eliminating the error path's abuse of kvm_free_memslot() paves the way
for simplifying kvm_free_memslot(), i.e. dropping its @dont param.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 virt/kvm/kvm_main.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6615c01b14a3..26151e2e3365 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1018,7 +1018,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 
 	slots = kvzalloc(sizeof(struct kvm_memslots), GFP_KERNEL_ACCOUNT);
 	if (!slots)
-		goto out_free;
+		goto out_bitmap;
 	memcpy(slots, __kvm_memslots(kvm, as_id), sizeof(struct kvm_memslots));
 
 	if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) {
@@ -1066,8 +1066,9 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
 		slots = install_new_memslots(kvm, as_id, slots);
 	kvfree(slots);
-out_free:
-	kvm_free_memslot(kvm, &new, &old);
+out_bitmap:
+	if (new.dirty_bitmap && !old.dirty_bitmap)
+		kvm_destroy_dirty_bitmap(&new);
 out:
 	return r;
 }
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 07/15] KVM: Refactor error handling for setting memory region
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (5 preceding siblings ...)
  2019-10-22  0:35 ` [PATCH v2 06/15] KVM: Explicitly free allocated-but-unused dirty bitmap Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-22  0:35 ` [PATCH v2 08/15] KVM: Move setting of memslot into helper routine Sean Christopherson
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Replace a big pile o' gotos with returns to make it more obvious what
error code is being returned, and to prepare for refactoring the
functional, i.e. post-checks, portion of __kvm_set_memory_region().

Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 virt/kvm/kvm_main.c | 40 ++++++++++++++++++----------------------
 1 file changed, 18 insertions(+), 22 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 26151e2e3365..40e27ffaf29a 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -930,34 +930,33 @@ int __kvm_set_memory_region(struct kvm *kvm,
 
 	r = check_memory_region_flags(mem);
 	if (r)
-		goto out;
+		return r;
 
-	r = -EINVAL;
 	as_id = mem->slot >> 16;
 	id = (u16)mem->slot;
 
 	/* General sanity checks */
 	if (mem->memory_size & (PAGE_SIZE - 1))
-		goto out;
+		return -EINVAL;
 	if (mem->guest_phys_addr & (PAGE_SIZE - 1))
-		goto out;
+		return -EINVAL;
 	/* We can read the guest memory with __xxx_user() later on. */
 	if ((id < KVM_USER_MEM_SLOTS) &&
 	    ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
 	     !access_ok((void __user *)(unsigned long)mem->userspace_addr,
 			mem->memory_size)))
-		goto out;
+		return -EINVAL;
 	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
-		goto out;
+		return -EINVAL;
 	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
-		goto out;
+		return -EINVAL;
 
 	slot = id_to_memslot(__kvm_memslots(kvm, as_id), id);
 	base_gfn = mem->guest_phys_addr >> PAGE_SHIFT;
 	npages = mem->memory_size >> PAGE_SHIFT;
 
 	if (npages > KVM_MEM_MAX_NR_PAGES)
-		goto out;
+		return -EINVAL;
 
 	new = old = *slot;
 
@@ -974,20 +973,18 @@ int __kvm_set_memory_region(struct kvm *kvm,
 			if ((new.userspace_addr != old.userspace_addr) ||
 			    (npages != old.npages) ||
 			    ((new.flags ^ old.flags) & KVM_MEM_READONLY))
-				goto out;
+				return -EINVAL;
 
 			if (base_gfn != old.base_gfn)
 				change = KVM_MR_MOVE;
 			else if (new.flags != old.flags)
 				change = KVM_MR_FLAGS_ONLY;
-			else { /* Nothing to change. */
-				r = 0;
-				goto out;
-			}
+			else /* Nothing to change. */
+				return 0;
 		}
 	} else {
 		if (!old.npages)
-			goto out;
+			return -EINVAL;
 
 		change = KVM_MR_DELETE;
 		new.base_gfn = 0;
@@ -996,29 +993,29 @@ int __kvm_set_memory_region(struct kvm *kvm,
 
 	if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
 		/* Check for overlaps */
-		r = -EEXIST;
 		kvm_for_each_memslot(slot, __kvm_memslots(kvm, as_id)) {
 			if (slot->id == id)
 				continue;
 			if (!((base_gfn + npages <= slot->base_gfn) ||
 			      (base_gfn >= slot->base_gfn + slot->npages)))
-				goto out;
+				return -EEXIST;
 		}
 	}
 
-	r = -ENOMEM;
-
 	/* Allocate/free page dirty bitmap as needed */
 	if (!(new.flags & KVM_MEM_LOG_DIRTY_PAGES))
 		new.dirty_bitmap = NULL;
 	else if (!new.dirty_bitmap) {
-		if (kvm_create_dirty_bitmap(&new) < 0)
-			goto out;
+		r = kvm_create_dirty_bitmap(&new);
+		if (r)
+			return r;
 	}
 
 	slots = kvzalloc(sizeof(struct kvm_memslots), GFP_KERNEL_ACCOUNT);
-	if (!slots)
+	if (!slots) {
+		r = -ENOMEM;
 		goto out_bitmap;
+	}
 	memcpy(slots, __kvm_memslots(kvm, as_id), sizeof(struct kvm_memslots));
 
 	if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) {
@@ -1069,7 +1066,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
 out_bitmap:
 	if (new.dirty_bitmap && !old.dirty_bitmap)
 		kvm_destroy_dirty_bitmap(&new);
-out:
 	return r;
 }
 EXPORT_SYMBOL_GPL(__kvm_set_memory_region);
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 08/15] KVM: Move setting of memslot into helper routine
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (6 preceding siblings ...)
  2019-10-22  0:35 ` [PATCH v2 07/15] KVM: Refactor error handling for setting memory region Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-22  0:35 ` [PATCH v2 09/15] KVM: Move memslot deletion to helper function Sean Christopherson
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Split out the core functionality of setting a memslot into a separate
helper in preparation for moving memslot deletion into its own routine.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 virt/kvm/kvm_main.c | 106 ++++++++++++++++++++++++++------------------
 1 file changed, 63 insertions(+), 43 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 40e27ffaf29a..b534865754b6 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -908,6 +908,66 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 	return old_memslots;
 }
 
+static int kvm_set_memslot(struct kvm *kvm,
+			   const struct kvm_userspace_memory_region *mem,
+			   const struct kvm_memory_slot *old,
+			   struct kvm_memory_slot *new, int as_id,
+			   enum kvm_mr_change change)
+{
+	struct kvm_memory_slot *slot;
+	struct kvm_memslots *slots;
+	int r;
+
+	slots = kvzalloc(sizeof(struct kvm_memslots), GFP_KERNEL_ACCOUNT);
+	if (!slots)
+		return -ENOMEM;
+	memcpy(slots, __kvm_memslots(kvm, as_id), sizeof(struct kvm_memslots));
+
+	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
+		/*
+		 * Note, the INVALID flag needs to be in the appropriate entry
+		 * in the freshly allocated memslots, not in @old or @new.
+		 */
+		slot = id_to_memslot(slots, old->id);
+		slot->flags |= KVM_MEMSLOT_INVALID;
+
+		/*
+		 * We can re-use the old memslots, the only difference from the
+		 * newly installed memslots is the invalid flag, which will get
+		 * dropped by update_memslots anyway.  We'll also revert to the
+		 * old memslots if preparing the new memory region fails.
+		 */
+		slots = install_new_memslots(kvm, as_id, slots);
+
+		/* From this point no new shadow pages pointing to a deleted,
+		 * or moved, memslot will be created.
+		 *
+		 * validation of sp->gfn happens in:
+		 *	- gfn_to_hva (kvm_read_guest, gfn_to_pfn)
+		 *	- kvm_is_visible_gfn (mmu_check_roots)
+		 */
+		kvm_arch_flush_shadow_memslot(kvm, slot);
+	}
+
+	r = kvm_arch_prepare_memory_region(kvm, new, mem, change);
+	if (r)
+		goto out_slots;
+
+	update_memslots(slots, new, change);
+	slots = install_new_memslots(kvm, as_id, slots);
+
+	kvm_arch_commit_memory_region(kvm, mem, old, new, change);
+
+	kvfree(slots);
+	return 0;
+
+out_slots:
+	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
+		slots = install_new_memslots(kvm, as_id, slots);
+	kvfree(slots);
+	return r;
+}
+
 /*
  * Allocate some memory and give it an address in the guest physical address
  * space.
@@ -924,7 +984,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	unsigned long npages;
 	struct kvm_memory_slot *slot;
 	struct kvm_memory_slot old, new;
-	struct kvm_memslots *slots;
 	int as_id, id;
 	enum kvm_mr_change change;
 
@@ -1011,58 +1070,19 @@ int __kvm_set_memory_region(struct kvm *kvm,
 			return r;
 	}
 
-	slots = kvzalloc(sizeof(struct kvm_memslots), GFP_KERNEL_ACCOUNT);
-	if (!slots) {
-		r = -ENOMEM;
-		goto out_bitmap;
-	}
-	memcpy(slots, __kvm_memslots(kvm, as_id), sizeof(struct kvm_memslots));
-
-	if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) {
-		slot = id_to_memslot(slots, id);
-		slot->flags |= KVM_MEMSLOT_INVALID;
-
-		/*
-		 * We can re-use the old memslots, the only difference from the
-		 * newly installed memslots is the invalid flag, which will get
-		 * dropped by update_memslots anyway.  We'll also revert to the
-		 * old memslots if preparing the new memory region fails.
-		 */
-		slots = install_new_memslots(kvm, as_id, slots);
-
-		/* From this point no new shadow pages pointing to a deleted,
-		 * or moved, memslot will be created.
-		 *
-		 * validation of sp->gfn happens in:
-		 *	- gfn_to_hva (kvm_read_guest, gfn_to_pfn)
-		 *	- kvm_is_visible_gfn (mmu_check_roots)
-		 */
-		kvm_arch_flush_shadow_memslot(kvm, slot);
-	}
-
-	r = kvm_arch_prepare_memory_region(kvm, &new, mem, change);
-	if (r)
-		goto out_slots;
-
 	/* actual memory is freed via old in kvm_free_memslot below */
 	if (change == KVM_MR_DELETE) {
 		new.dirty_bitmap = NULL;
 		memset(&new.arch, 0, sizeof(new.arch));
 	}
 
-	update_memslots(slots, &new, change);
-	slots = install_new_memslots(kvm, as_id, slots);
-
-	kvm_arch_commit_memory_region(kvm, mem, &old, &new, change);
+	r = kvm_set_memslot(kvm, mem, &old, &new, as_id, change);
+	if (r)
+		goto out_bitmap;
 
 	kvm_free_memslot(kvm, &old, &new);
-	kvfree(slots);
 	return 0;
 
-out_slots:
-	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
-		slots = install_new_memslots(kvm, as_id, slots);
-	kvfree(slots);
 out_bitmap:
 	if (new.dirty_bitmap && !old.dirty_bitmap)
 		kvm_destroy_dirty_bitmap(&new);
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 09/15] KVM: Move memslot deletion to helper function
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (7 preceding siblings ...)
  2019-10-22  0:35 ` [PATCH v2 08/15] KVM: Move setting of memslot into helper routine Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-23  9:29   ` Christoffer Dall
  2019-10-22  0:35 ` [PATCH v2 10/15] KVM: Simplify kvm_free_memslot() and all its descendents Sean Christopherson
                   ` (8 subsequent siblings)
  17 siblings, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Move memslot deletion into its own routine so that the success path for
other memslot updates does not need to use kvm_free_memslot(), i.e. can
explicitly destroy the dirty bitmap when necessary.  This paves the way
for dropping @dont from kvm_free_memslot(), i.e. all callers now pass
NULL for @dont.

Add a comment above the code to make a copy of the existing memslot
prior to deletion, it is not at all obvious that the pointer will become
stale due sorting and/or installation of new memslots.

Note, kvm_arch_commit_memory_region() allows an architecture to free
resources when moving a memslot or changing its flags, i.e. implement
logic similar to the dirty bitmap is handling, if such functionality is
needed in the future.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 virt/kvm/kvm_main.c | 73 +++++++++++++++++++++++++++------------------
 1 file changed, 44 insertions(+), 29 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b534865754b6..38bb446cef85 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -968,6 +968,27 @@ static int kvm_set_memslot(struct kvm *kvm,
 	return r;
 }
 
+static int kvm_delete_memslot(struct kvm *kvm,
+			      const struct kvm_userspace_memory_region *mem,
+			      struct kvm_memory_slot *old, int as_id)
+{
+	struct kvm_memory_slot new;
+	int r;
+
+	if (!old->npages)
+		return -EINVAL;
+
+	memset(&new, 0, sizeof(new));
+	new.id = old->id;
+
+	r = kvm_set_memslot(kvm, mem, old, &new, as_id, KVM_MR_DELETE);
+	if (r)
+		return r;
+
+	kvm_free_memslot(kvm, old, NULL);
+	return 0;
+}
+
 /*
  * Allocate some memory and give it an address in the guest physical address
  * space.
@@ -1017,7 +1038,15 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if (npages > KVM_MEM_MAX_NR_PAGES)
 		return -EINVAL;
 
-	new = old = *slot;
+	/*
+	 * Make a full copy of the old memslot, the pointer will become stale
+	 * when the memslots are re-sorted by update_memslots().
+	 */
+	old = *slot;
+	if (!mem->memory_size)
+		return kvm_delete_memslot(kvm, mem, &old, as_id);
+
+	new = old;
 
 	new.id = id;
 	new.base_gfn = base_gfn;
@@ -1025,29 +1054,20 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	new.flags = mem->flags;
 	new.userspace_addr = mem->userspace_addr;
 
-	if (npages) {
-		if (!old.npages)
-			change = KVM_MR_CREATE;
-		else { /* Modify an existing slot. */
-			if ((new.userspace_addr != old.userspace_addr) ||
-			    (npages != old.npages) ||
-			    ((new.flags ^ old.flags) & KVM_MEM_READONLY))
-				return -EINVAL;
-
-			if (base_gfn != old.base_gfn)
-				change = KVM_MR_MOVE;
-			else if (new.flags != old.flags)
-				change = KVM_MR_FLAGS_ONLY;
-			else /* Nothing to change. */
-				return 0;
-		}
-	} else {
-		if (!old.npages)
+	if (!old.npages) {
+		change = KVM_MR_CREATE;
+	} else { /* Modify an existing slot. */
+		if ((new.userspace_addr != old.userspace_addr) ||
+		    (npages != old.npages) ||
+		    ((new.flags ^ old.flags) & KVM_MEM_READONLY))
 			return -EINVAL;
 
-		change = KVM_MR_DELETE;
-		new.base_gfn = 0;
-		new.flags = 0;
+		if (base_gfn != old.base_gfn)
+			change = KVM_MR_MOVE;
+		else if (new.flags != old.flags)
+			change = KVM_MR_FLAGS_ONLY;
+		else /* Nothing to change. */
+			return 0;
 	}
 
 	if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
@@ -1070,17 +1090,12 @@ int __kvm_set_memory_region(struct kvm *kvm,
 			return r;
 	}
 
-	/* actual memory is freed via old in kvm_free_memslot below */
-	if (change == KVM_MR_DELETE) {
-		new.dirty_bitmap = NULL;
-		memset(&new.arch, 0, sizeof(new.arch));
-	}
-
 	r = kvm_set_memslot(kvm, mem, &old, &new, as_id, change);
 	if (r)
 		goto out_bitmap;
 
-	kvm_free_memslot(kvm, &old, &new);
+	if (old.dirty_bitmap && !new.dirty_bitmap)
+		kvm_destroy_dirty_bitmap(&old);
 	return 0;
 
 out_bitmap:
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 10/15] KVM: Simplify kvm_free_memslot() and all its descendents
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (8 preceding siblings ...)
  2019-10-22  0:35 ` [PATCH v2 09/15] KVM: Move memslot deletion to helper function Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-22  0:35 ` [PATCH v2 11/15] KVM: Clean up local variable usage in __kvm_set_memory_region() Sean Christopherson
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Now that all callers of kvm_free_memslot() pass NULL for @dont, remove
the param from the top-level routine and all arch's implementations.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/mips/include/asm/kvm_host.h      |  2 +-
 arch/powerpc/include/asm/kvm_ppc.h    |  6 ++----
 arch/powerpc/kvm/book3s.c             |  5 ++---
 arch/powerpc/kvm/book3s_hv.c          |  9 +++------
 arch/powerpc/kvm/book3s_pr.c          |  3 +--
 arch/powerpc/kvm/booke.c              |  3 +--
 arch/powerpc/kvm/powerpc.c            |  5 ++---
 arch/s390/include/asm/kvm_host.h      |  2 +-
 arch/x86/include/asm/kvm_page_track.h |  3 +--
 arch/x86/kvm/page_track.c             | 15 ++++++---------
 arch/x86/kvm/x86.c                    | 19 +++++++------------
 include/linux/kvm_host.h              |  3 +--
 virt/kvm/arm/mmu.c                    |  3 +--
 virt/kvm/kvm_main.c                   | 18 +++++++-----------
 14 files changed, 36 insertions(+), 60 deletions(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 41204a49cf95..2c343c346b79 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -1133,7 +1133,7 @@ extern unsigned long kvm_mips_get_ramsize(struct kvm *kvm);
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_free_memslot(struct kvm *kvm,
-		struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {}
+					 struct kvm_memory_slot *slot) {}
 static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 495901ae0070..bca17ffeff20 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -201,8 +201,7 @@ extern void kvm_free_hpt_cma(struct page *page, unsigned long nr_pages);
 extern int kvmppc_core_init_vm(struct kvm *kvm);
 extern void kvmppc_core_destroy_vm(struct kvm *kvm);
 extern void kvmppc_core_free_memslot(struct kvm *kvm,
-				     struct kvm_memory_slot *free,
-				     struct kvm_memory_slot *dont);
+				     struct kvm_memory_slot *slot);
 extern int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 				struct kvm_memory_slot *memslot,
 				const struct kvm_userspace_memory_region *mem);
@@ -290,8 +289,7 @@ struct kvmppc_ops {
 	int (*test_age_hva)(struct kvm *kvm, unsigned long hva);
 	void (*set_spte_hva)(struct kvm *kvm, unsigned long hva, pte_t pte);
 	void (*mmu_destroy)(struct kvm_vcpu *vcpu);
-	void (*free_memslot)(struct kvm_memory_slot *free,
-			     struct kvm_memory_slot *dont);
+	void (*free_memslot)(struct kvm_memory_slot *slot);
 	int (*init_vm)(struct kvm *kvm);
 	void (*destroy_vm)(struct kvm *kvm);
 	int (*get_smmu_info)(struct kvm *kvm, struct kvm_ppc_smmu_info *info);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 111b39a5250d..a5d4a1014fdf 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -834,10 +834,9 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 	return kvm->arch.kvm_ops->get_dirty_log(kvm, log);
 }
 
-void kvmppc_core_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
-			      struct kvm_memory_slot *dont)
+void kvmppc_core_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
-	kvm->arch.kvm_ops->free_memslot(free, dont);
+	kvm->arch.kvm_ops->free_memslot(slot);
 }
 
 void kvmppc_core_flush_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 4e5664379329..14906f7c12c5 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4452,13 +4452,10 @@ static int kvm_vm_ioctl_get_dirty_log_hv(struct kvm *kvm,
 	return r;
 }
 
-static void kvmppc_core_free_memslot_hv(struct kvm_memory_slot *free,
-					struct kvm_memory_slot *dont)
+static void kvmppc_core_free_memslot_hv(struct kvm_memory_slot *slot)
 {
-	if (!dont || free->arch.rmap != dont->arch.rmap) {
-		vfree(free->arch.rmap);
-		free->arch.rmap = NULL;
-	}
+	vfree(slot->arch.rmap);
+	slot->arch.rmap = NULL;
 }
 
 static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 5fceb1da5fde..5368a5dbac22 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1918,8 +1918,7 @@ static void kvmppc_core_commit_memory_region_pr(struct kvm *kvm,
 	return;
 }
 
-static void kvmppc_core_free_memslot_pr(struct kvm_memory_slot *free,
-					struct kvm_memory_slot *dont)
+static void kvmppc_core_free_memslot_pr(struct kvm_memory_slot *slot)
 {
 	return;
 }
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index cf2845e147c5..a22ff567724a 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1801,8 +1801,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 	return -ENOTSUPP;
 }
 
-void kvmppc_core_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
-			      struct kvm_memory_slot *dont)
+void kvmppc_core_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
 }
 
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index bb427543f8a3..622f77d73c24 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -681,10 +681,9 @@ long kvm_arch_dev_ioctl(struct file *filp,
 	return -EINVAL;
 }
 
-void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
-			   struct kvm_memory_slot *dont)
+void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
-	kvmppc_core_free_memslot(kvm, free, dont);
+	kvmppc_core_free_memslot(kvm, slot);
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index abe60268335d..43301e8a5cbd 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -916,7 +916,7 @@ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_free_memslot(struct kvm *kvm,
-		struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {}
+					 struct kvm_memory_slot *slot) {}
 static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
 static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
 static inline void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
index 172f9749dbb2..87bd6025d91d 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -49,8 +49,7 @@ struct kvm_page_track_notifier_node {
 void kvm_page_track_init(struct kvm *kvm);
 void kvm_page_track_cleanup(struct kvm *kvm);
 
-void kvm_page_track_free_memslot(struct kvm_memory_slot *free,
-				 struct kvm_memory_slot *dont);
+void kvm_page_track_free_memslot(struct kvm_memory_slot *slot);
 int kvm_page_track_create_memslot(struct kvm_memory_slot *slot,
 				  unsigned long npages);
 
diff --git a/arch/x86/kvm/page_track.c b/arch/x86/kvm/page_track.c
index 3521e2d176f2..d125ec379c79 100644
--- a/arch/x86/kvm/page_track.c
+++ b/arch/x86/kvm/page_track.c
@@ -19,17 +19,14 @@
 
 #include "mmu.h"
 
-void kvm_page_track_free_memslot(struct kvm_memory_slot *free,
-				 struct kvm_memory_slot *dont)
+void kvm_page_track_free_memslot(struct kvm_memory_slot *slot)
 {
 	int i;
 
-	for (i = 0; i < KVM_PAGE_TRACK_MAX; i++)
-		if (!dont || free->arch.gfn_track[i] !=
-		      dont->arch.gfn_track[i]) {
-			kvfree(free->arch.gfn_track[i]);
-			free->arch.gfn_track[i] = NULL;
-		}
+	for (i = 0; i < KVM_PAGE_TRACK_MAX; i++) {
+		kvfree(slot->arch.gfn_track[i]);
+		slot->arch.gfn_track[i] = NULL;
+	}
 }
 
 int kvm_page_track_create_memslot(struct kvm_memory_slot *slot,
@@ -48,7 +45,7 @@ int kvm_page_track_create_memslot(struct kvm_memory_slot *slot,
 	return 0;
 
 track_free:
-	kvm_page_track_free_memslot(slot, NULL);
+	kvm_page_track_free_memslot(slot);
 	return -ENOMEM;
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index eb812bb18599..adf384ef5969 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9576,27 +9576,22 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_hv_destroy_vm(kvm);
 }
 
-void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
-			   struct kvm_memory_slot *dont)
+void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
 	int i;
 
 	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
-		if (!dont || free->arch.rmap[i] != dont->arch.rmap[i]) {
-			kvfree(free->arch.rmap[i]);
-			free->arch.rmap[i] = NULL;
-		}
+		kvfree(slot->arch.rmap[i]);
+		slot->arch.rmap[i] = NULL;
+
 		if (i == 0)
 			continue;
 
-		if (!dont || free->arch.lpage_info[i - 1] !=
-			     dont->arch.lpage_info[i - 1]) {
-			kvfree(free->arch.lpage_info[i - 1]);
-			free->arch.lpage_info[i - 1] = NULL;
-		}
+		kvfree(slot->arch.lpage_info[i - 1]);
+		slot->arch.lpage_info[i - 1] = NULL;
 	}
 
-	kvm_page_track_free_memslot(free, dont);
+	kvm_page_track_free_memslot(slot);
 }
 
 static int kvm_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 23a2373931cb..bfb8194d0366 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -677,8 +677,7 @@ int kvm_set_memory_region(struct kvm *kvm,
 			  const struct kvm_userspace_memory_region *mem);
 int __kvm_set_memory_region(struct kvm *kvm,
 			    const struct kvm_userspace_memory_region *mem);
-void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
-			   struct kvm_memory_slot *dont);
+void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot);
 void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen);
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				struct kvm_memory_slot *memslot,
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index f264de85f648..f3241b268d49 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -2353,8 +2353,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	return ret;
 }
 
-void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
-			   struct kvm_memory_slot *dont)
+void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
 }
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 38bb446cef85..5f0d68c6e2dc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -549,18 +549,14 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
 	memslot->dirty_bitmap = NULL;
 }
 
-/*
- * Free any memory in @free but not in @dont.
- */
-static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
-			      struct kvm_memory_slot *dont)
+static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
-	if (!dont || free->dirty_bitmap != dont->dirty_bitmap)
-		kvm_destroy_dirty_bitmap(free);
+	kvm_destroy_dirty_bitmap(slot);
 
-	kvm_arch_free_memslot(kvm, free, dont);
+	kvm_arch_free_memslot(kvm, slot);
 
-	free->npages = 0;
+	slot->flags = 0;
+	slot->npages = 0;
 }
 
 static void kvm_free_memslots(struct kvm *kvm, struct kvm_memslots *slots)
@@ -571,7 +567,7 @@ static void kvm_free_memslots(struct kvm *kvm, struct kvm_memslots *slots)
 		return;
 
 	kvm_for_each_memslot(memslot, slots)
-		kvm_free_memslot(kvm, memslot, NULL);
+		kvm_free_memslot(kvm, memslot);
 
 	kvfree(slots);
 }
@@ -985,7 +981,7 @@ static int kvm_delete_memslot(struct kvm *kvm,
 	if (r)
 		return r;
 
-	kvm_free_memslot(kvm, old, NULL);
+	kvm_free_memslot(kvm, old);
 	return 0;
 }
 
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 11/15] KVM: Clean up local variable usage in __kvm_set_memory_region()
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (9 preceding siblings ...)
  2019-10-22  0:35 ` [PATCH v2 10/15] KVM: Simplify kvm_free_memslot() and all its descendents Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-22  0:35 ` [PATCH v2 12/15] KVM: Provide common implementation for generic dirty log functions Sean Christopherson
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Clean up __kvm_set_memory_region() to achieve several goals:

  - Remove local variables that serve no real purpose
  - Improve the readability of the code
  - Better show the relationship between the 'old' and 'new' memslot
  - Prepare for dynamically sizing memslots.

Note, using 'tmp' to hold the initial memslot is not strictly necessary
at this juncture, e.g. 'old' could be directly copied from
id_to_memslot(), but keep the pointer usage as id_to_memslot() will be
able to return a NULL pointer once memslots are dynamically sized.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 virt/kvm/kvm_main.c | 47 +++++++++++++++++++++++----------------------
 1 file changed, 24 insertions(+), 23 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5f0d68c6e2dc..c3e481986099 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -996,13 +996,11 @@ static int kvm_delete_memslot(struct kvm *kvm,
 int __kvm_set_memory_region(struct kvm *kvm,
 			    const struct kvm_userspace_memory_region *mem)
 {
-	int r;
-	gfn_t base_gfn;
-	unsigned long npages;
-	struct kvm_memory_slot *slot;
 	struct kvm_memory_slot old, new;
-	int as_id, id;
+	struct kvm_memory_slot *tmp;
 	enum kvm_mr_change change;
+	int as_id, id;
+	int r;
 
 	r = check_memory_region_flags(mem);
 	if (r)
@@ -1027,52 +1025,55 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
 		return -EINVAL;
 
-	slot = id_to_memslot(__kvm_memslots(kvm, as_id), id);
-	base_gfn = mem->guest_phys_addr >> PAGE_SHIFT;
-	npages = mem->memory_size >> PAGE_SHIFT;
-
-	if (npages > KVM_MEM_MAX_NR_PAGES)
-		return -EINVAL;
-
 	/*
 	 * Make a full copy of the old memslot, the pointer will become stale
 	 * when the memslots are re-sorted by update_memslots().
 	 */
-	old = *slot;
+	tmp = id_to_memslot(__kvm_memslots(kvm, as_id), id);
+	old = *tmp;
+	tmp = NULL;
+
 	if (!mem->memory_size)
 		return kvm_delete_memslot(kvm, mem, &old, as_id);
 
-	new = old;
-
 	new.id = id;
-	new.base_gfn = base_gfn;
-	new.npages = npages;
+	new.base_gfn = mem->guest_phys_addr >> PAGE_SHIFT;
+	new.npages = mem->memory_size >> PAGE_SHIFT;
 	new.flags = mem->flags;
 	new.userspace_addr = mem->userspace_addr;
 
+	if (new.npages > KVM_MEM_MAX_NR_PAGES)
+		return -EINVAL;
+
 	if (!old.npages) {
 		change = KVM_MR_CREATE;
+		new.dirty_bitmap = NULL;
+		memset(&new.arch, 0, sizeof(new.arch));
 	} else { /* Modify an existing slot. */
 		if ((new.userspace_addr != old.userspace_addr) ||
-		    (npages != old.npages) ||
+		    (new.npages != old.npages) ||
 		    ((new.flags ^ old.flags) & KVM_MEM_READONLY))
 			return -EINVAL;
 
-		if (base_gfn != old.base_gfn)
+		if (new.base_gfn != old.base_gfn)
 			change = KVM_MR_MOVE;
 		else if (new.flags != old.flags)
 			change = KVM_MR_FLAGS_ONLY;
 		else /* Nothing to change. */
 			return 0;
+
+		/* Copy dirty_bitmap and arch from the current memslot. */
+		new.dirty_bitmap = old.dirty_bitmap;
+		memcpy(&new.arch, &old.arch, sizeof(new.arch));
 	}
 
 	if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
 		/* Check for overlaps */
-		kvm_for_each_memslot(slot, __kvm_memslots(kvm, as_id)) {
-			if (slot->id == id)
+		kvm_for_each_memslot(tmp, __kvm_memslots(kvm, as_id)) {
+			if (tmp->id == id)
 				continue;
-			if (!((base_gfn + npages <= slot->base_gfn) ||
-			      (base_gfn >= slot->base_gfn + slot->npages)))
+			if (!((new.base_gfn + new.npages <= tmp->base_gfn) ||
+			      (new.base_gfn >= tmp->base_gfn + tmp->npages)))
 				return -EEXIST;
 		}
 	}
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 12/15] KVM: Provide common implementation for generic dirty log functions
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (10 preceding siblings ...)
  2019-10-22  0:35 ` [PATCH v2 11/15] KVM: Clean up local variable usage in __kvm_set_memory_region() Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-23  9:29   ` Christoffer Dall
  2019-10-24 10:28   ` kbuild test robot
  2019-10-22  0:35 ` [PATCH v2 13/15] KVM: Ensure validity of memslot with respect to kvm_get_dirty_log() Sean Christopherson
                   ` (5 subsequent siblings)
  17 siblings, 2 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Move the implementations of KVM_GET_DIRTY_LOG and KVM_CLEAR_DIRTY_LOG
for CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT into common KVM code.
The arch specific implemenations are extremely similar, differing
only in whether the dirty log needs to be sync'd from hardware (x86)
and how the TLBs are flushed.  Add new arch hooks to handle sync
and TLB flush; the sync will also be used for non-generic dirty log
support in a future patch (s390).

The ulterior motive for providing a common implementation is to
eliminate the dependency between arch and common code with respect to
the memslot referenced by the dirty log, i.e. to make it obvious in the
code that the validity of the memslot is guaranteed, as a future patch
will rework memslot handling such that id_to_memslot() can return NULL.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/mips/kvm/mips.c      | 62 ++---------------------------
 arch/powerpc/kvm/book3s.c |  5 +++
 arch/powerpc/kvm/booke.c  |  5 +++
 arch/s390/kvm/kvm-s390.c  |  5 +--
 arch/x86/kvm/x86.c        | 60 ++--------------------------
 include/linux/kvm_host.h  | 20 ++++------
 virt/kvm/arm/arm.c        | 47 ++--------------------
 virt/kvm/kvm_main.c       | 84 ++++++++++++++++++++++++++++++++-------
 8 files changed, 99 insertions(+), 189 deletions(-)

diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 713e5465edb0..5c4feeeee447 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -965,69 +965,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl,
 	return r;
 }
 
-/**
- * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
- * @kvm: kvm instance
- * @log: slot id and address to which we copy the log
- *
- * Steps 1-4 below provide general overview of dirty page logging. See
- * kvm_get_dirty_log_protect() function description for additional details.
- *
- * We call kvm_get_dirty_log_protect() to handle steps 1-3, upon return we
- * always flush the TLB (step 4) even if previous step failed  and the dirty
- * bitmap may be corrupt. Regardless of previous outcome the KVM logging API
- * does not preclude user space subsequent dirty log read. Flushing TLB ensures
- * writes will be marked dirty for next log read.
- *
- *   1. Take a snapshot of the bit and clear it if needed.
- *   2. Write protect the corresponding page.
- *   3. Copy the snapshot to the userspace.
- *   4. Flush TLB's if needed.
- */
-int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
+void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
-	struct kvm_memslots *slots;
-	struct kvm_memory_slot *memslot;
-	bool flush = false;
-	int r;
 
-	mutex_lock(&kvm->slots_lock);
-
-	r = kvm_get_dirty_log_protect(kvm, log, &flush);
-
-	if (flush) {
-		slots = kvm_memslots(kvm);
-		memslot = id_to_memslot(slots, log->slot);
-
-		/* Let implementation handle TLB/GVA invalidation */
-		kvm_mips_callbacks->flush_shadow_memslot(kvm, memslot);
-	}
-
-	mutex_unlock(&kvm->slots_lock);
-	return r;
 }
 
-int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm, struct kvm_clear_dirty_log *log)
+void kvm_arch_dirty_log_tlb_flush(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
-	struct kvm_memslots *slots;
-	struct kvm_memory_slot *memslot;
-	bool flush = false;
-	int r;
-
-	mutex_lock(&kvm->slots_lock);
-
-	r = kvm_clear_dirty_log_protect(kvm, log, &flush);
-
-	if (flush) {
-		slots = kvm_memslots(kvm);
-		memslot = id_to_memslot(slots, log->slot);
-
-		/* Let implementation handle TLB/GVA invalidation */
-		kvm_mips_callbacks->flush_shadow_memslot(kvm, memslot);
-	}
-
-	mutex_unlock(&kvm->slots_lock);
-	return r;
+	/* Let implementation handle TLB/GVA invalidation */
+	kvm_mips_callbacks->flush_shadow_memslot(kvm, memslot);
 }
 
 long kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index a5d4a1014fdf..84f66c1592a3 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -829,6 +829,11 @@ int kvmppc_core_check_requests(struct kvm_vcpu *vcpu)
 	return vcpu->kvm->arch.kvm_ops->check_requests(vcpu);
 }
 
+void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+
+}
+
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 {
 	return kvm->arch.kvm_ops->get_dirty_log(kvm, log);
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index a22ff567724a..35a4ef89a1db 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1796,6 +1796,11 @@ int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
 	return r;
 }
 
+void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+
+}
+
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 {
 	return -ENOTSUPP;
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 21f61dae7db1..c0e9929bdb34 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -572,8 +572,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	return r;
 }
 
-static void kvm_s390_sync_dirty_log(struct kvm *kvm,
-				    struct kvm_memory_slot *memslot)
+void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
 	int i;
 	gfn_t cur_gfn, last_gfn;
@@ -633,7 +632,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
 	if (!memslot->dirty_bitmap)
 		goto out;
 
-	kvm_s390_sync_dirty_log(kvm, memslot);
+	kvm_arch_sync_dirty_log(kvm, memslot);
 	r = kvm_get_dirty_log(kvm, log, &is_dirty);
 	if (r)
 		goto out;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index adf384ef5969..3d3d70aac031 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4615,77 +4615,23 @@ static int kvm_vm_ioctl_reinject(struct kvm *kvm,
 	return 0;
 }
 
-/**
- * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
- * @kvm: kvm instance
- * @log: slot id and address to which we copy the log
- *
- * Steps 1-4 below provide general overview of dirty page logging. See
- * kvm_get_dirty_log_protect() function description for additional details.
- *
- * We call kvm_get_dirty_log_protect() to handle steps 1-3, upon return we
- * always flush the TLB (step 4) even if previous step failed  and the dirty
- * bitmap may be corrupt. Regardless of previous outcome the KVM logging API
- * does not preclude user space subsequent dirty log read. Flushing TLB ensures
- * writes will be marked dirty for next log read.
- *
- *   1. Take a snapshot of the bit and clear it if needed.
- *   2. Write protect the corresponding page.
- *   3. Copy the snapshot to the userspace.
- *   4. Flush TLB's if needed.
- */
-int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
+void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
-	bool flush = false;
-	int r;
-
-	mutex_lock(&kvm->slots_lock);
-
 	/*
 	 * Flush potentially hardware-cached dirty pages to dirty_bitmap.
 	 */
 	if (kvm_x86_ops->flush_log_dirty)
 		kvm_x86_ops->flush_log_dirty(kvm);
-
-	r = kvm_get_dirty_log_protect(kvm, log, &flush);
-
-	/*
-	 * All the TLBs can be flushed out of mmu lock, see the comments in
-	 * kvm_mmu_slot_remove_write_access().
-	 */
-	lockdep_assert_held(&kvm->slots_lock);
-	if (flush)
-		kvm_flush_remote_tlbs(kvm);
-
-	mutex_unlock(&kvm->slots_lock);
-	return r;
 }
 
-int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm, struct kvm_clear_dirty_log *log)
+void kvm_arch_dirty_log_tlb_flush(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
-	bool flush = false;
-	int r;
-
-	mutex_lock(&kvm->slots_lock);
-
-	/*
-	 * Flush potentially hardware-cached dirty pages to dirty_bitmap.
-	 */
-	if (kvm_x86_ops->flush_log_dirty)
-		kvm_x86_ops->flush_log_dirty(kvm);
-
-	r = kvm_clear_dirty_log_protect(kvm, log, &flush);
-
 	/*
 	 * All the TLBs can be flushed out of mmu lock, see the comments in
 	 * kvm_mmu_slot_remove_write_access().
 	 */
 	lockdep_assert_held(&kvm->slots_lock);
-	if (flush)
-		kvm_flush_remote_tlbs(kvm);
-
-	mutex_unlock(&kvm->slots_lock);
-	return r;
+	kvm_flush_remote_tlbs(kvm);
 }
 
 int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_event,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index bfb8194d0366..80dd823a1a8b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -797,23 +797,19 @@ vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf);
 
 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext);
 
-int kvm_get_dirty_log(struct kvm *kvm,
-			struct kvm_dirty_log *log, int *is_dirty);
-
-int kvm_get_dirty_log_protect(struct kvm *kvm,
-			      struct kvm_dirty_log *log, bool *flush);
-int kvm_clear_dirty_log_protect(struct kvm *kvm,
-				struct kvm_clear_dirty_log *log, bool *flush);
-
 void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 					struct kvm_memory_slot *slot,
 					gfn_t gfn_offset,
 					unsigned long mask);
+void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot);
 
-int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
-				struct kvm_dirty_log *log);
-int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm,
-				  struct kvm_clear_dirty_log *log);
+#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
+void kvm_arch_dirty_log_tlb_flush(struct kvm *kvm, struct kvm_memory_slot *slot);
+#else /* !CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
+int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log);
+int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
+		      int *is_dirty);
+#endif
 
 int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 			bool line_status);
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 86c6aa1cb58e..03f668e812ea 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1203,55 +1203,14 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	return r;
 }
 
-/**
- * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
- * @kvm: kvm instance
- * @log: slot id and address to which we copy the log
- *
- * Steps 1-4 below provide general overview of dirty page logging. See
- * kvm_get_dirty_log_protect() function description for additional details.
- *
- * We call kvm_get_dirty_log_protect() to handle steps 1-3, upon return we
- * always flush the TLB (step 4) even if previous step failed  and the dirty
- * bitmap may be corrupt. Regardless of previous outcome the KVM logging API
- * does not preclude user space subsequent dirty log read. Flushing TLB ensures
- * writes will be marked dirty for next log read.
- *
- *   1. Take a snapshot of the bit and clear it if needed.
- *   2. Write protect the corresponding page.
- *   3. Copy the snapshot to the userspace.
- *   4. Flush TLB's if needed.
- */
-int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
+void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
-	bool flush = false;
-	int r;
 
-	mutex_lock(&kvm->slots_lock);
-
-	r = kvm_get_dirty_log_protect(kvm, log, &flush);
-
-	if (flush)
-		kvm_flush_remote_tlbs(kvm);
-
-	mutex_unlock(&kvm->slots_lock);
-	return r;
 }
 
-int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm, struct kvm_clear_dirty_log *log)
+void kvm_arch_dirty_log_tlb_flush(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
-	bool flush = false;
-	int r;
-
-	mutex_lock(&kvm->slots_lock);
-
-	r = kvm_clear_dirty_log_protect(kvm, log, &flush);
-
-	if (flush)
-		kvm_flush_remote_tlbs(kvm);
-
-	mutex_unlock(&kvm->slots_lock);
-	return r;
+	kvm_flush_remote_tlbs(kvm);
 }
 
 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c3e481986099..c0ce5081dd00 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -781,7 +781,7 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
 
 /*
  * Allocation size is twice as large as the actual dirty bitmap size.
- * See x86's kvm_vm_ioctl_get_dirty_log() why this is needed.
+ * See kvm_vm_ioctl_get_dirty_log() why this is needed.
  */
 static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
 {
@@ -1123,6 +1123,7 @@ static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
 	return kvm_set_memory_region(kvm, mem);
 }
 
+#ifndef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
 int kvm_get_dirty_log(struct kvm *kvm,
 			struct kvm_dirty_log *log, int *is_dirty)
 {
@@ -1156,13 +1157,12 @@ int kvm_get_dirty_log(struct kvm *kvm,
 }
 EXPORT_SYMBOL_GPL(kvm_get_dirty_log);
 
-#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
+#else /* CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
 /**
  * kvm_get_dirty_log_protect - get a snapshot of dirty pages
  *	and reenable dirty page tracking for the corresponding pages.
  * @kvm:	pointer to kvm instance
  * @log:	slot id and address to which we copy the log
- * @flush:	true if TLB flush is needed by caller
  *
  * We need to keep it in mind that VCPU threads can write to the bitmap
  * concurrently. So, to avoid losing track of dirty pages we keep the
@@ -1179,8 +1179,7 @@ EXPORT_SYMBOL_GPL(kvm_get_dirty_log);
  * exiting to userspace will be logged for the next call.
  *
  */
-int kvm_get_dirty_log_protect(struct kvm *kvm,
-			struct kvm_dirty_log *log, bool *flush)
+static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
 {
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
@@ -1188,6 +1187,7 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 	unsigned long n;
 	unsigned long *dirty_bitmap;
 	unsigned long *dirty_bitmap_buffer;
+	bool flush;
 
 	as_id = log->slot >> 16;
 	id = (u16)log->slot;
@@ -1201,8 +1201,10 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 	if (!dirty_bitmap)
 		return -ENOENT;
 
+	kvm_arch_sync_dirty_log(kvm, memslot);
+
 	n = kvm_dirty_bitmap_bytes(memslot);
-	*flush = false;
+	flush = false;
 	if (kvm->manual_dirty_log_protect) {
 		/*
 		 * Unlike kvm_get_dirty_log, we always return false in *flush,
@@ -1225,7 +1227,7 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 			if (!dirty_bitmap[i])
 				continue;
 
-			*flush = true;
+			flush = true;
 			mask = xchg(&dirty_bitmap[i], 0);
 			dirty_bitmap_buffer[i] = mask;
 
@@ -1236,21 +1238,55 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 		spin_unlock(&kvm->mmu_lock);
 	}
 
+	if (flush)
+		kvm_arch_dirty_log_tlb_flush(kvm, memslot);
+
 	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
 		return -EFAULT;
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_get_dirty_log_protect);
+
+
+/**
+ * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
+ * @kvm: kvm instance
+ * @log: slot id and address to which we copy the log
+ *
+ * Steps 1-4 below provide general overview of dirty page logging. See
+ * kvm_get_dirty_log_protect() function description for additional details.
+ *
+ * We call kvm_get_dirty_log_protect() to handle steps 1-3, upon return we
+ * always flush the TLB (step 4) even if previous step failed  and the dirty
+ * bitmap may be corrupt. Regardless of previous outcome the KVM logging API
+ * does not preclude user space subsequent dirty log read. Flushing TLB ensures
+ * writes will be marked dirty for next log read.
+ *
+ *   1. Take a snapshot of the bit and clear it if needed.
+ *   2. Write protect the corresponding page.
+ *   3. Copy the snapshot to the userspace.
+ *   4. Flush TLB's if needed.
+ */
+static int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
+				      struct kvm_dirty_log *log)
+{
+	int r;
+
+	mutex_lock(&kvm->slots_lock);
+
+	r = kvm_get_dirty_log_protect(kvm, log);
+
+	mutex_unlock(&kvm->slots_lock);
+	return r;
+}
 
 /**
  * kvm_clear_dirty_log_protect - clear dirty bits in the bitmap
  *	and reenable dirty page tracking for the corresponding pages.
  * @kvm:	pointer to kvm instance
  * @log:	slot id and address from which to fetch the bitmap of dirty pages
- * @flush:	true if TLB flush is needed by caller
  */
-int kvm_clear_dirty_log_protect(struct kvm *kvm,
-				struct kvm_clear_dirty_log *log, bool *flush)
+static int kvm_clear_dirty_log_protect(struct kvm *kvm,
+				       struct kvm_clear_dirty_log *log)
 {
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
@@ -1259,6 +1295,7 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
 	unsigned long i, n;
 	unsigned long *dirty_bitmap;
 	unsigned long *dirty_bitmap_buffer;
+	bool flush;
 
 	as_id = log->slot >> 16;
 	id = (u16)log->slot;
@@ -1282,7 +1319,9 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
 	    (log->num_pages < memslot->npages - log->first_page && (log->num_pages & 63)))
 	    return -EINVAL;
 
-	*flush = false;
+	kvm_arch_sync_dirty_log(kvm, memslot);
+
+	flush = false;
 	dirty_bitmap_buffer = kvm_second_dirty_bitmap(memslot);
 	if (copy_from_user(dirty_bitmap_buffer, log->dirty_bitmap, n))
 		return -EFAULT;
@@ -1305,17 +1344,32 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
 		 * a problem if userspace sets them in log->dirty_bitmap.
 		*/
 		if (mask) {
-			*flush = true;
+			flush = true;
 			kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
 								offset, mask);
 		}
 	}
 	spin_unlock(&kvm->mmu_lock);
 
+	if (flush)
+		kvm_arch_dirty_log_tlb_flush(kvm, memslot);
+
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_clear_dirty_log_protect);
-#endif
+
+static int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm,
+					struct kvm_clear_dirty_log *log)
+{
+	int r;
+
+	mutex_lock(&kvm->slots_lock);
+
+	r = kvm_clear_dirty_log_protect(kvm, log);
+
+	mutex_unlock(&kvm->slots_lock);
+	return r;
+}
+#endif /* CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
 
 bool kvm_largepages_enabled(void)
 {
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 13/15] KVM: Ensure validity of memslot with respect to kvm_get_dirty_log()
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (11 preceding siblings ...)
  2019-10-22  0:35 ` [PATCH v2 12/15] KVM: Provide common implementation for generic dirty log functions Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-22  0:35 ` [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots Sean Christopherson
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Rework kvm_get_dirty_log() so that it "returns" the associated memslot
on success.  A future patch will rework memslot handling such that
id_to_memslot() can return NULL, returning the memslot makes it more
obvious that the validity of the memslot has been verified, i.e.
precludes the need to add validity checks in the arch code that are
technically unnecessary.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/powerpc/kvm/book3s_pr.c |  6 +-----
 arch/s390/kvm/kvm-s390.c     | 12 ++----------
 include/linux/kvm_host.h     |  2 +-
 virt/kvm/kvm_main.c          | 27 +++++++++++++++++++--------
 4 files changed, 23 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 5368a5dbac22..f41a136d247f 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1860,7 +1860,6 @@ static int kvmppc_vcpu_run_pr(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 static int kvm_vm_ioctl_get_dirty_log_pr(struct kvm *kvm,
 					 struct kvm_dirty_log *log)
 {
-	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
 	struct kvm_vcpu *vcpu;
 	ulong ga, ga_end;
@@ -1870,15 +1869,12 @@ static int kvm_vm_ioctl_get_dirty_log_pr(struct kvm *kvm,
 
 	mutex_lock(&kvm->slots_lock);
 
-	r = kvm_get_dirty_log(kvm, log, &is_dirty);
+	r = kvm_get_dirty_log(kvm, log, &is_dirty, &memslot);
 	if (r)
 		goto out;
 
 	/* If nothing is dirty, don't bother messing with page tables. */
 	if (is_dirty) {
-		slots = kvm_memslots(kvm);
-		memslot = id_to_memslot(slots, log->slot);
-
 		ga = memslot->base_gfn << PAGE_SHIFT;
 		ga_end = ga + (memslot->npages << PAGE_SHIFT);
 
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index c0e9929bdb34..a66eb2b9bf71 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -613,9 +613,8 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
 {
 	int r;
 	unsigned long n;
-	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
-	int is_dirty = 0;
+	int is_dirty;
 
 	if (kvm_is_ucontrol(kvm))
 		return -EINVAL;
@@ -626,14 +625,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
 	if (log->slot >= KVM_USER_MEM_SLOTS)
 		goto out;
 
-	slots = kvm_memslots(kvm);
-	memslot = id_to_memslot(slots, log->slot);
-	r = -ENOENT;
-	if (!memslot->dirty_bitmap)
-		goto out;
-
-	kvm_arch_sync_dirty_log(kvm, memslot);
-	r = kvm_get_dirty_log(kvm, log, &is_dirty);
+	r = kvm_get_dirty_log(kvm, log, &is_dirty, &memslot);
 	if (r)
 		goto out;
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 80dd823a1a8b..4eb14a8cd9cb 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -808,7 +808,7 @@ void kvm_arch_dirty_log_tlb_flush(struct kvm *kvm, struct kvm_memory_slot *slot)
 #else /* !CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log);
 int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
-		      int *is_dirty);
+		      int *is_dirty, struct kvm_memory_slot **memslot);
 #endif
 
 int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c0ce5081dd00..7e5a88ab57b6 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1124,31 +1124,42 @@ static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
 }
 
 #ifndef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
-int kvm_get_dirty_log(struct kvm *kvm,
-			struct kvm_dirty_log *log, int *is_dirty)
+/**
+ * kvm_get_dirty_log - get a snapshot of dirty pages
+ * @kvm:	pointer to kvm instance
+ * @log:	slot id and address to which we copy the log
+ * @is_dirty:	set to '1' if any dirty pages were found
+ * @memslot:	set to the associated memslot, always valid on success
+ */
+int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
+		      int *is_dirty, struct kvm_memory_slot **memslot)
 {
 	struct kvm_memslots *slots;
-	struct kvm_memory_slot *memslot;
 	int i, as_id, id;
 	unsigned long n;
 	unsigned long any = 0;
 
+	*memslot = NULL;
+	*is_dirty = 0;
+
 	as_id = log->slot >> 16;
 	id = (u16)log->slot;
 	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
 
 	slots = __kvm_memslots(kvm, as_id);
-	memslot = id_to_memslot(slots, id);
-	if (!memslot->dirty_bitmap)
+	*memslot = id_to_memslot(slots, id);
+	if (!(*memslot)->dirty_bitmap)
 		return -ENOENT;
 
-	n = kvm_dirty_bitmap_bytes(memslot);
+	kvm_arch_sync_dirty_log(kvm, *memslot);
+
+	n = kvm_dirty_bitmap_bytes(*memslot);
 
 	for (i = 0; !any && i < n/sizeof(long); ++i)
-		any = memslot->dirty_bitmap[i];
+		any = (*memslot)->dirty_bitmap[i];
 
-	if (copy_to_user(log->dirty_bitmap, memslot->dirty_bitmap, n))
+	if (copy_to_user(log->dirty_bitmap, (*memslot)->dirty_bitmap, n))
 		return -EFAULT;
 
 	if (any)
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (12 preceding siblings ...)
  2019-10-22  0:35 ` [PATCH v2 13/15] KVM: Ensure validity of memslot with respect to kvm_get_dirty_log() Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-22 14:04   ` Paolo Bonzini
  2019-10-22  0:35 ` [PATCH v2 15/15] KVM: Dynamically size memslot array based on number of used slots Sean Christopherson
                   ` (3 subsequent siblings)
  17 siblings, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Refactor memslot handling to treat the number of used slots as the de
facto size of the memslot array, e.g. return NULL from id_to_memslot()
when an invalid index is provided instead of relying on npages==0 to
detect an invalid memslot.  Rework the sorting and walking of memslots
in advance of dynamically sizing memslots to aid bisection and debug,
e.g. with luck, a bug in the refactoring will bisect here and/or hit a
WARN instead of randomly corrupting memory.

Alternatively, a global null/invalid memslot could be returned, i.e. so
callers of id_to_memslot() don't have to explicitly check for a NULL
memslot, but that approach runs the risk of introducing difficult-to-
debug issues, e.g. if the global null slot is modified.  Constifying
the return from id_to_memslot() to combat such issues is possible, but
would require a massive refactoring of arch specific code and would
still be susceptible to casting shenanigans.

Add function comments to update_memslots() and search_memslots() to
explicitly (and loudly) state how memslots are sorted.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/powerpc/kvm/book3s_hv.c |   2 +-
 arch/x86/kvm/x86.c           |  14 ++--
 include/linux/kvm_host.h     |  18 +++--
 virt/kvm/arm/mmu.c           |   9 ++-
 virt/kvm/kvm_main.c          | 145 ++++++++++++++++++++++-------------
 5 files changed, 117 insertions(+), 71 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 14906f7c12c5..444c76091f17 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4405,7 +4405,7 @@ static int kvm_vm_ioctl_get_dirty_log_hv(struct kvm *kvm,
 	slots = kvm_memslots(kvm);
 	memslot = id_to_memslot(slots, log->slot);
 	r = -ENOENT;
-	if (!memslot->dirty_bitmap)
+	if (!memslot || !memslot->dirty_bitmap)
 		goto out;
 
 	/*
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3d3d70aac031..ccaca74b2a0f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9437,9 +9437,9 @@ void kvm_arch_sync_events(struct kvm *kvm)
 int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
 {
 	int i, r;
-	unsigned long hva;
+	unsigned long hva, uninitialized_var(old_npages);
 	struct kvm_memslots *slots = kvm_memslots(kvm);
-	struct kvm_memory_slot *slot, old;
+	struct kvm_memory_slot *slot;
 
 	/* Called with kvm->slots_lock held.  */
 	if (WARN_ON(id >= KVM_MEM_SLOTS_NUM))
@@ -9447,7 +9447,7 @@ int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
 
 	slot = id_to_memslot(slots, id);
 	if (size) {
-		if (slot->npages)
+		if (slot && slot->npages)
 			return -EEXIST;
 
 		/*
@@ -9459,13 +9459,13 @@ int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
 		if (IS_ERR((void *)hva))
 			return PTR_ERR((void *)hva);
 	} else {
-		if (!slot->npages)
+		if (!slot || !slot->npages)
 			return 0;
 
-		hva = 0;
+		hva = slot->userspace_addr;
+		old_npages = slot->npages;
 	}
 
-	old = *slot;
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
 		struct kvm_userspace_memory_region m;
 
@@ -9480,7 +9480,7 @@ int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
 	}
 
 	if (!size)
-		vm_munmap(old.userspace_addr, old.npages * PAGE_SIZE);
+		vm_munmap(hva, old_npages * PAGE_SIZE);
 
 	return 0;
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 4eb14a8cd9cb..3f8a7760bb79 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -580,10 +580,11 @@ static inline int kvm_vcpu_get_idx(struct kvm_vcpu *vcpu)
 	BUG();
 }
 
-#define kvm_for_each_memslot(memslot, slots)	\
-	for (memslot = &slots->memslots[0];	\
-	      memslot < slots->memslots + KVM_MEM_SLOTS_NUM && memslot->npages;\
-		memslot++)
+#define kvm_for_each_memslot(memslot, slots)				\
+	for (memslot = &slots->memslots[0];				\
+	     memslot < slots->memslots + slots->used_slots; memslot++)	\
+		if (WARN_ON_ONCE(!memslot->npages)) {			\
+		} else
 
 int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id);
 void kvm_vcpu_uninit(struct kvm_vcpu *vcpu);
@@ -643,12 +644,15 @@ static inline struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu)
 	return __kvm_memslots(vcpu->kvm, as_id);
 }
 
-static inline struct kvm_memory_slot *
-id_to_memslot(struct kvm_memslots *slots, int id)
+static inline
+struct kvm_memory_slot *id_to_memslot(struct kvm_memslots *slots, int id)
 {
 	int index = slots->id_to_index[id];
 	struct kvm_memory_slot *slot;
 
+	if (index < 0)
+		return NULL;
+
 	slot = &slots->memslots[index];
 
 	WARN_ON(slot->id != id);
@@ -993,6 +997,8 @@ bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args);
  * used in non-modular code in arch/powerpc/kvm/book3s_hv_rm_mmu.c.
  * gfn_to_memslot() itself isn't here as an inline because that would
  * bloat other code too much.
+ *
+ * IMPORTANT: Slots are sorted from highest GFN to lowest GFN!
  */
 static inline struct kvm_memory_slot *
 search_memslots(struct kvm_memslots *slots, gfn_t gfn)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index f3241b268d49..7ea3321cabb8 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1536,8 +1536,13 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	struct kvm_memory_slot *memslot = id_to_memslot(slots, slot);
-	phys_addr_t start = memslot->base_gfn << PAGE_SHIFT;
-	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+	phys_addr_t start, end;
+
+	if (WARN_ON_ONCE(!memslot))
+		return;
+
+	start = memslot->base_gfn << PAGE_SHIFT;
+	end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
 
 	spin_lock(&kvm->mmu_lock);
 	stage2_wp_range(kvm, start, end);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7e5a88ab57b6..177caac395de 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -535,7 +535,7 @@ static struct kvm_memslots *kvm_alloc_memslots(void)
 		return NULL;
 
 	for (i = 0; i < KVM_MEM_SLOTS_NUM; i++)
-		slots->id_to_index[i] = slots->memslots[i].id = i;
+		slots->id_to_index[i] = slots->memslots[i].id = -1;
 
 	return slots;
 }
@@ -794,64 +794,94 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
 	return 0;
 }
 
+static inline void kvm_shift_memslots_deleted(struct kvm_memslots *slots,
+					      struct kvm_memory_slot *new)
+{
+	struct kvm_memory_slot *mslots = slots->memslots;
+	int i;
+
+	if (WARN_ON(slots->id_to_index[new->id] == -1))
+		return;
+
+	slots->used_slots--;
+
+	for (i = slots->id_to_index[new->id]; i < slots->used_slots; i++) {
+		mslots[i] = mslots[i + 1];
+		slots->id_to_index[mslots[i].id] = i;
+	}
+	mslots[i] = *new;
+	slots->id_to_index[new->id] = -1;
+}
+
+static inline int kvm_shift_memslots_forward(struct kvm_memslots *slots,
+					     struct kvm_memory_slot *new)
+{
+	struct kvm_memory_slot *mslots = slots->memslots;
+	int i;
+
+	if (WARN_ON_ONCE(slots->id_to_index[new->id] == -1) ||
+	    WARN_ON_ONCE(!slots->used_slots))
+		return -1;
+
+	for (i = slots->id_to_index[new->id]; i < slots->used_slots - 1; i++) {
+		if (new->base_gfn > mslots[i + 1].base_gfn)
+			break;
+
+		WARN_ON_ONCE(new->base_gfn == mslots[i + 1].base_gfn);
+
+		/* Shift the next memslot forward one and update its index. */
+		mslots[i] = mslots[i + 1];
+		slots->id_to_index[mslots[i].id] = i;
+	}
+	return i;
+}
+
+static inline int kvm_shift_memslots_back(struct kvm_memslots *slots,
+					  struct kvm_memory_slot *new,
+					  int start)
+{
+	struct kvm_memory_slot *mslots = slots->memslots;
+	int i;
+
+	for (i = start; i > 0; i--) {
+		if (new->base_gfn < mslots[i - 1].base_gfn)
+			break;
+
+		WARN_ON_ONCE(new->base_gfn == mslots[i - 1].base_gfn);
+
+		/* Shift the next memslot back one and update its index. */
+		mslots[i] = mslots[i - 1];
+		slots->id_to_index[mslots[i].id] = i;
+	}
+	return i;
+}
+
 /*
  * Insert memslot and re-sort memslots based on their GFN,
  * so binary search could be used to lookup GFN.
  * Sorting algorithm takes advantage of having initially
  * sorted array and known changed memslot position.
+ *
+ * IMPORTANT: Slots are sorted from highest GFN to lowest GFN!
  */
 static void update_memslots(struct kvm_memslots *slots,
 			    struct kvm_memory_slot *new,
 			    enum kvm_mr_change change)
 {
-	int id = new->id;
-	int i = slots->id_to_index[id];
-	struct kvm_memory_slot *mslots = slots->memslots;
-
-	WARN_ON(mslots[i].id != id);
-	switch (change) {
-	case KVM_MR_CREATE:
-		slots->used_slots++;
-		WARN_ON(mslots[i].npages || !new->npages);
-		break;
-	case KVM_MR_DELETE:
-		slots->used_slots--;
-		WARN_ON(new->npages || !mslots[i].npages);
-		break;
-	default:
-		break;
-	}
-
-	while (i < KVM_MEM_SLOTS_NUM - 1 &&
-	       new->base_gfn <= mslots[i + 1].base_gfn) {
-		if (!mslots[i + 1].npages)
-			break;
-		mslots[i] = mslots[i + 1];
-		slots->id_to_index[mslots[i].id] = i;
-		i++;
+	int i;
+
+	if (change == KVM_MR_DELETE) {
+		kvm_shift_memslots_deleted(slots, new);
+	} else {
+		if (change == KVM_MR_CREATE)
+			i = slots->used_slots++;
+		else
+			i = kvm_shift_memslots_forward(slots, new);
+		i = kvm_shift_memslots_back(slots, new, i);
+
+		slots->memslots[i] = *new;
+		slots->id_to_index[new->id] = i;
 	}
-
-	/*
-	 * The ">=" is needed when creating a slot with base_gfn == 0,
-	 * so that it moves before all those with base_gfn == npages == 0.
-	 *
-	 * On the other hand, if new->npages is zero, the above loop has
-	 * already left i pointing to the beginning of the empty part of
-	 * mslots, and the ">=" would move the hole backwards in this
-	 * case---which is wrong.  So skip the loop when deleting a slot.
-	 */
-	if (new->npages) {
-		while (i > 0 &&
-		       new->base_gfn >= mslots[i - 1].base_gfn) {
-			mslots[i] = mslots[i - 1];
-			slots->id_to_index[mslots[i].id] = i;
-			i--;
-		}
-	} else
-		WARN_ON_ONCE(i != slots->used_slots);
-
-	mslots[i] = *new;
-	slots->id_to_index[mslots[i].id] = i;
 }
 
 static int check_memory_region_flags(const struct kvm_userspace_memory_region *mem)
@@ -1030,8 +1060,13 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	 * when the memslots are re-sorted by update_memslots().
 	 */
 	tmp = id_to_memslot(__kvm_memslots(kvm, as_id), id);
-	old = *tmp;
-	tmp = NULL;
+	if (tmp) {
+		old = *tmp;
+		tmp = NULL;
+	} else {
+		memset(&old, 0, sizeof(old));
+		old.id = id;
+	}
 
 	if (!mem->memory_size)
 		return kvm_delete_memslot(kvm, mem, &old, as_id);
@@ -1149,7 +1184,7 @@ int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
 
 	slots = __kvm_memslots(kvm, as_id);
 	*memslot = id_to_memslot(slots, id);
-	if (!(*memslot)->dirty_bitmap)
+	if (!(*memslot) || !(*memslot)->dirty_bitmap)
 		return -ENOENT;
 
 	kvm_arch_sync_dirty_log(kvm, *memslot);
@@ -1207,10 +1242,10 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
 
 	slots = __kvm_memslots(kvm, as_id);
 	memslot = id_to_memslot(slots, id);
+	if (!memslot || !memslot->dirty_bitmap)
+		return -ENOENT;
 
 	dirty_bitmap = memslot->dirty_bitmap;
-	if (!dirty_bitmap)
-		return -ENOENT;
 
 	kvm_arch_sync_dirty_log(kvm, memslot);
 
@@ -1318,10 +1353,10 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
 
 	slots = __kvm_memslots(kvm, as_id);
 	memslot = id_to_memslot(slots, id);
+	if (!memslot || !memslot->dirty_bitmap)
+		return -ENOENT;
 
 	dirty_bitmap = memslot->dirty_bitmap;
-	if (!dirty_bitmap)
-		return -ENOENT;
 
 	n = ALIGN(log->num_pages, BITS_PER_LONG) / 8;
 
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v2 15/15] KVM: Dynamically size memslot array based on number of used slots
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (13 preceding siblings ...)
  2019-10-22  0:35 ` [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots Sean Christopherson
@ 2019-10-22  0:35 ` Sean Christopherson
  2019-10-22 14:04   ` Paolo Bonzini
  2019-10-22 13:59 ` [PATCH v2 00/15] KVM: Dynamically size memslot arrays Paolo Bonzini
                   ` (2 subsequent siblings)
  17 siblings, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22  0:35 UTC (permalink / raw)
  To: James Hogan, Paul Mackerras, Christian Borntraeger,
	Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Marc Zyngier
  Cc: linux-arm-kernel, Wanpeng Li, kvm, David Hildenbrand,
	Joerg Roedel, Cornelia Huck, linux-mips, Sean Christopherson,
	linux-kernel, kvm-ppc, Vitaly Kuznetsov, kvmarm, Jim Mattson

Now that the memslot logic doesn't assume memslots are always non-NULL,
dynamically size the array of memslots instead of unconditionally
allocating memory for the maximum number of memslots.

Note, because a to-be-deleted memslot must first be invalidated, the
array size cannot be immediately reduced when deleting a memslot.
However, consecutive deletions will realize the memory savings, i.e.
a second deletion will trim the entry.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 include/linux/kvm_host.h |  5 ++++-
 virt/kvm/kvm_main.c      | 31 ++++++++++++++++++++++++++++---
 2 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 3f8a7760bb79..9e3a68257e80 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -433,11 +433,14 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
  */
 struct kvm_memslots {
 	u64 generation;
-	struct kvm_memory_slot memslots[KVM_MEM_SLOTS_NUM];
 	/* The mapping table from slot id to the index in memslots[]. */
 	short id_to_index[KVM_MEM_SLOTS_NUM];
 	atomic_t lru_slot;
 	int used_slots;
+	struct kvm_memory_slot memslots[];
+	/*
+	 * WARNING: 'memslots' is dynamically-sized.  It *MUST* be at the end.
+	 */
 };
 
 struct kvm {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 177caac395de..131b2dd7db72 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -535,7 +535,7 @@ static struct kvm_memslots *kvm_alloc_memslots(void)
 		return NULL;
 
 	for (i = 0; i < KVM_MEM_SLOTS_NUM; i++)
-		slots->id_to_index[i] = slots->memslots[i].id = -1;
+		slots->id_to_index[i] = -1;
 
 	return slots;
 }
@@ -934,6 +934,32 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 	return old_memslots;
 }
 
+/*
+ * Note, at a minimum, the current number of used slots must be allocated, even
+ * when deleting a memslot, as we need a complete duplicate of the memslots for
+ * use when invalidating a memslot prior to deleting/moving the memslot.
+ */
+static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
+					     enum kvm_mr_change change)
+{
+	struct kvm_memslots *slots;
+	size_t old_size, new_size;
+
+	old_size = sizeof(struct kvm_memslots) +
+		   (sizeof(struct kvm_memory_slot) * old->used_slots);
+
+	if (change == KVM_MR_CREATE)
+		new_size = old_size + sizeof(struct kvm_memory_slot);
+	else
+		new_size = old_size;
+
+	slots = kvzalloc(new_size, GFP_KERNEL_ACCOUNT);
+	if (likely(slots))
+		memcpy(slots, old, old_size);
+
+	return slots;
+}
+
 static int kvm_set_memslot(struct kvm *kvm,
 			   const struct kvm_userspace_memory_region *mem,
 			   const struct kvm_memory_slot *old,
@@ -944,10 +970,9 @@ static int kvm_set_memslot(struct kvm *kvm,
 	struct kvm_memslots *slots;
 	int r;
 
-	slots = kvzalloc(sizeof(struct kvm_memslots), GFP_KERNEL_ACCOUNT);
+	slots = kvm_dup_memslots(__kvm_memslots(kvm, as_id), change);
 	if (!slots)
 		return -ENOMEM;
-	memcpy(slots, __kvm_memslots(kvm, as_id), sizeof(struct kvm_memslots));
 
 	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
 		/*
-- 
2.22.0

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 00/15] KVM: Dynamically size memslot arrays
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (14 preceding siblings ...)
  2019-10-22  0:35 ` [PATCH v2 15/15] KVM: Dynamically size memslot array based on number of used slots Sean Christopherson
@ 2019-10-22 13:59 ` Paolo Bonzini
  2019-10-23 18:56   ` Christian Borntraeger
  2019-10-22 14:04 ` Paolo Bonzini
  2019-10-23  9:39 ` Christoffer Dall
  17 siblings, 1 reply; 37+ messages in thread
From: Paolo Bonzini @ 2019-10-22 13:59 UTC (permalink / raw)
  To: Sean Christopherson, James Hogan, Paul Mackerras,
	Christian Borntraeger, Janosch Frank, Radim Krčmář,
	Marc Zyngier
  Cc: Wanpeng Li, kvm, David Hildenbrand, Marc Zyngier, Joerg Roedel,
	Cornelia Huck, linux-mips, kvm-ppc, linux-kernel, Paul Mackerras,
	Christian Borntraeger, linux-arm-kernel, Vitaly Kuznetsov,
	kvmarm, Jim Mattson

On 22/10/19 02:35, Sean Christopherson wrote:
> The end goal of this series is to dynamically size the memslot array so
> that KVM allocates memory based on the number of memslots in use, as
> opposed to unconditionally allocating memory for the maximum number of
> memslots.  On x86, each memslot consumes 88 bytes, and so with 2 address
> spaces of 512 memslots, each VM consumes ~90k bytes for the memslots.
> E.g. given a VM that uses a total of 30 memslots, dynamic sizing reduces
> the memory footprint from 90k to ~2.6k bytes.
> 
> The changes required to support dynamic sizing are relatively small,
> e.g. are essentially contained in patches 12/13 and 13/13.  Patches 1-11
> clean up the memslot code, which has gotten quite crusy, especially
> __kvm_set_memory_region().  The clean up is likely not strictly necessary
> to switch to dynamic sizing, but I didn't have a remotely reasonable
> level of confidence in the correctness of the dynamic sizing without first
> doing the clean up.
> 
> Testing, especially non-x86 platforms, would be greatly appreciated.  The
> non-x86 changes are for all intents and purposes untested, e.g. I compile
> tested pieces of the code by copying them into x86, but that's it.  In
> theory, the vast majority of the functional changes are arch agnostic, in
> theory...
> 
> v2:
>   - Split "Drop kvm_arch_create_memslot()" into three patches to move
>     minor functional changes to standalone patches [Janosch].
>   - Rebase to latest kvm/queue (f0574a1cea5b, "KVM: x86: fix ...")
>   - Collect an Acked-by and a Reviewed-by
> 
> Sean Christopherson (15):
>   KVM: Reinstall old memslots if arch preparation fails
>   KVM: Don't free new memslot if allocation of said memslot fails
>   KVM: PPC: Move memslot memory allocation into prepare_memory_region()
>   KVM: x86: Allocate memslot resources during prepare_memory_region()
>   KVM: Drop kvm_arch_create_memslot()
>   KVM: Explicitly free allocated-but-unused dirty bitmap
>   KVM: Refactor error handling for setting memory region
>   KVM: Move setting of memslot into helper routine
>   KVM: Move memslot deletion to helper function
>   KVM: Simplify kvm_free_memslot() and all its descendents
>   KVM: Clean up local variable usage in __kvm_set_memory_region()
>   KVM: Provide common implementation for generic dirty log functions
>   KVM: Ensure validity of memslot with respect to kvm_get_dirty_log()
>   KVM: Terminate memslot walks via used_slots
>   KVM: Dynamically size memslot array based on number of used slots
> 
>  arch/mips/include/asm/kvm_host.h      |   2 +-
>  arch/mips/kvm/mips.c                  |  68 +---
>  arch/powerpc/include/asm/kvm_ppc.h    |  14 +-
>  arch/powerpc/kvm/book3s.c             |  22 +-
>  arch/powerpc/kvm/book3s_hv.c          |  36 +-
>  arch/powerpc/kvm/book3s_pr.c          |  20 +-
>  arch/powerpc/kvm/booke.c              |  17 +-
>  arch/powerpc/kvm/powerpc.c            |  13 +-
>  arch/s390/include/asm/kvm_host.h      |   2 +-
>  arch/s390/kvm/kvm-s390.c              |  21 +-
>  arch/x86/include/asm/kvm_page_track.h |   3 +-
>  arch/x86/kvm/page_track.c             |  15 +-
>  arch/x86/kvm/x86.c                    | 100 ++---
>  include/linux/kvm_host.h              |  48 +--
>  virt/kvm/arm/arm.c                    |  47 +--
>  virt/kvm/arm/mmu.c                    |  18 +-
>  virt/kvm/kvm_main.c                   | 546 ++++++++++++++++----------
>  17 files changed, 467 insertions(+), 525 deletions(-)
> 

Christian, Marc, Paul, can you help testing patches 1-13?

Thanks,
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots
  2019-10-22  0:35 ` [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots Sean Christopherson
@ 2019-10-22 14:04   ` Paolo Bonzini
  2019-10-22 15:28     ` Sean Christopherson
  0 siblings, 1 reply; 37+ messages in thread
From: Paolo Bonzini @ 2019-10-22 14:04 UTC (permalink / raw)
  To: Sean Christopherson, James Hogan, Paul Mackerras,
	Christian Borntraeger, Janosch Frank, Radim Krčmář,
	Marc Zyngier
  Cc: Wanpeng Li, kvm, David Hildenbrand, Joerg Roedel, Cornelia Huck,
	linux-mips, kvm-ppc, linux-kernel, linux-arm-kernel,
	Vitaly Kuznetsov, kvmarm, Jim Mattson

On 22/10/19 02:35, Sean Christopherson wrote:
> +static inline int kvm_shift_memslots_forward(struct kvm_memslots *slots,
> +					     struct kvm_memory_slot *new)
> +{
> +	struct kvm_memory_slot *mslots = slots->memslots;
> +	int i;
> +
> +	if (WARN_ON_ONCE(slots->id_to_index[new->id] == -1) ||
> +	    WARN_ON_ONCE(!slots->used_slots))
> +		return -1;
> +
> +	for (i = slots->id_to_index[new->id]; i < slots->used_slots - 1; i++) {
> +		if (new->base_gfn > mslots[i + 1].base_gfn)
> +			break;
> +
> +		WARN_ON_ONCE(new->base_gfn == mslots[i + 1].base_gfn);
> +
> +		/* Shift the next memslot forward one and update its index. */
> +		mslots[i] = mslots[i + 1];
> +		slots->id_to_index[mslots[i].id] = i;
> +	}
> +	return i;
> +}
> +
> +static inline int kvm_shift_memslots_back(struct kvm_memslots *slots,
> +					  struct kvm_memory_slot *new,
> +					  int start)

This new implementation of the insertion sort loses the comments that
were there in the old one.  Please keep them as function comments.

Paolo
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 15/15] KVM: Dynamically size memslot array based on number of used slots
  2019-10-22  0:35 ` [PATCH v2 15/15] KVM: Dynamically size memslot array based on number of used slots Sean Christopherson
@ 2019-10-22 14:04   ` Paolo Bonzini
  2019-10-22 15:22     ` Sean Christopherson
  0 siblings, 1 reply; 37+ messages in thread
From: Paolo Bonzini @ 2019-10-22 14:04 UTC (permalink / raw)
  To: Sean Christopherson, James Hogan, Paul Mackerras,
	Christian Borntraeger, Janosch Frank, Radim Krčmář,
	Marc Zyngier
  Cc: Wanpeng Li, kvm, David Hildenbrand, Joerg Roedel, Cornelia Huck,
	linux-mips, kvm-ppc, linux-kernel, linux-arm-kernel,
	Vitaly Kuznetsov, kvmarm, Jim Mattson

On 22/10/19 02:35, Sean Christopherson wrote:
> +	struct kvm_memory_slot memslots[];
> +	/*
> +	 * WARNING: 'memslots' is dynamically-sized.  It *MUST* be at the end.
> +	 */

Isn't that obvious from the flexible array member?

Paolo

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 00/15] KVM: Dynamically size memslot arrays
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (15 preceding siblings ...)
  2019-10-22 13:59 ` [PATCH v2 00/15] KVM: Dynamically size memslot arrays Paolo Bonzini
@ 2019-10-22 14:04 ` Paolo Bonzini
  2019-10-23  9:39 ` Christoffer Dall
  17 siblings, 0 replies; 37+ messages in thread
From: Paolo Bonzini @ 2019-10-22 14:04 UTC (permalink / raw)
  To: Sean Christopherson, James Hogan, Paul Mackerras,
	Christian Borntraeger, Janosch Frank, Radim Krčmář,
	Marc Zyngier
  Cc: Wanpeng Li, kvm, David Hildenbrand, Joerg Roedel, Cornelia Huck,
	linux-mips, kvm-ppc, linux-kernel, linux-arm-kernel,
	Vitaly Kuznetsov, kvmarm, Jim Mattson

On 22/10/19 02:35, Sean Christopherson wrote:
> The end goal of this series is to dynamically size the memslot array so
> that KVM allocates memory based on the number of memslots in use, as
> opposed to unconditionally allocating memory for the maximum number of
> memslots.  On x86, each memslot consumes 88 bytes, and so with 2 address
> spaces of 512 memslots, each VM consumes ~90k bytes for the memslots.
> E.g. given a VM that uses a total of 30 memslots, dynamic sizing reduces
> the memory footprint from 90k to ~2.6k bytes.
> 
> The changes required to support dynamic sizing are relatively small,
> e.g. are essentially contained in patches 14/15 and 15/15.  Patches 1-11
> clean up the memslot code, which has gotten quite crusty, especially
> __kvm_set_memory_region().  The clean up is likely not strictly necessary
> to switch to dynamic sizing, but I didn't have a remotely reasonable
> level of confidence in the correctness of the dynamic sizing without first
> doing the clean up.
> 
> Testing, especially non-x86 platforms, would be greatly appreciated.  The
> non-x86 changes are for all intents and purposes untested, e.g. I compile
> tested pieces of the code by copying them into x86, but that's it.  In
> theory, the vast majority of the functional changes are arch agnostic, in
> theory...
> 
> v2:
>   - Split "Drop kvm_arch_create_memslot()" into three patches to move
>     minor functional changes to standalone patches [Janosch].
>   - Rebase to latest kvm/queue (f0574a1cea5b, "KVM: x86: fix ...")
>   - Collect an Acked-by and a Reviewed-by

I only have some cosmetic changes on patches 14-15.  Let's wait for
testing results.

Paolo

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 15/15] KVM: Dynamically size memslot array based on number of used slots
  2019-10-22 14:04   ` Paolo Bonzini
@ 2019-10-22 15:22     ` Sean Christopherson
  0 siblings, 0 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22 15:22 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	linux-arm-kernel, Marc Zyngier, Vitaly Kuznetsov, kvmarm,
	Jim Mattson

On Tue, Oct 22, 2019 at 04:04:21PM +0200, Paolo Bonzini wrote:
> On 22/10/19 02:35, Sean Christopherson wrote:
> > +	struct kvm_memory_slot memslots[];
> > +	/*
> > +	 * WARNING: 'memslots' is dynamically-sized.  It *MUST* be at the end.
> > +	 */
> 
> Isn't that obvious from the flexible array member?

Probably.  It's also unnecessary as the compiler (at least as of gcc 5.4)
will throw an error if it's not at the end.  I think I even verified this
during development and still decided to leave the obnoxious warning in for
some reason...

include/linux/kvm_host.h:436:25: error: flexible array member not at end of struct
  struct kvm_memory_slot memslots[];
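
For anyone unfamiliar with flexible array members, here's a minimal
userspace analogue, illustrative only and deliberately not the kernel code,
showing how such a struct is declared and how its allocation is sized:

#include <stdio.h>
#include <stdlib.h>

struct slots_example {
        int used_slots;
        long memslots[];        /* flexible array member, must be last */
};

int main(void)
{
        int n = 4;
        /* One allocation covers the header plus 'n' trailing elements. */
        struct slots_example *s =
                calloc(1, sizeof(*s) + n * sizeof(s->memslots[0]));

        if (!s)
                return 1;

        s->used_slots = n;
        printf("struct is %zu bytes, allocation is %zu bytes\n",
               sizeof(*s), sizeof(*s) + n * sizeof(s->memslots[0]));
        free(s);
        return 0;
}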

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots
  2019-10-22 14:04   ` Paolo Bonzini
@ 2019-10-22 15:28     ` Sean Christopherson
  2019-10-22 15:30       ` Paolo Bonzini
  0 siblings, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22 15:28 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	linux-arm-kernel, Marc Zyngier, Vitaly Kuznetsov, kvmarm,
	Jim Mattson

On Tue, Oct 22, 2019 at 04:04:18PM +0200, Paolo Bonzini wrote:
> On 22/10/19 02:35, Sean Christopherson wrote:
> > +static inline int kvm_shift_memslots_forward(struct kvm_memslots *slots,
> > +					     struct kvm_memory_slot *new)
> > +{
> > +	struct kvm_memory_slot *mslots = slots->memslots;
> > +	int i;
> > +
> > +	if (WARN_ON_ONCE(slots->id_to_index[new->id] == -1) ||
> > +	    WARN_ON_ONCE(!slots->used_slots))
> > +		return -1;
> > +
> > +	for (i = slots->id_to_index[new->id]; i < slots->used_slots - 1; i++) {
> > +		if (new->base_gfn > mslots[i + 1].base_gfn)
> > +			break;
> > +
> > +		WARN_ON_ONCE(new->base_gfn == mslots[i + 1].base_gfn);
> > +
> > +		/* Shift the next memslot forward one and update its index. */
> > +		mslots[i] = mslots[i + 1];
> > +		slots->id_to_index[mslots[i].id] = i;
> > +	}
> > +	return i;
> > +}
> > +
> > +static inline int kvm_shift_memslots_back(struct kvm_memslots *slots,
> > +					  struct kvm_memory_slot *new,
> > +					  int start)
> 
> This new implementation of the insertion sort loses the comments that
> were there in the old one.  Please keep them as function comments.

I assume you're talking about this blurb in particular?

	 * The ">=" is needed when creating a slot with base_gfn == 0,
	 * so that it moves before all those with base_gfn == npages == 0.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots
  2019-10-22 15:28     ` Sean Christopherson
@ 2019-10-22 15:30       ` Paolo Bonzini
  2019-10-22 15:52         ` Sean Christopherson
  0 siblings, 1 reply; 37+ messages in thread
From: Paolo Bonzini @ 2019-10-22 15:30 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	linux-arm-kernel, Marc Zyngier, Vitaly Kuznetsov, kvmarm,
	Jim Mattson

On 22/10/19 17:28, Sean Christopherson wrote:
> On Tue, Oct 22, 2019 at 04:04:18PM +0200, Paolo Bonzini wrote:
>> On 22/10/19 02:35, Sean Christopherson wrote:
>>> +static inline int kvm_shift_memslots_forward(struct kvm_memslots *slots,
>>> +					     struct kvm_memory_slot *new)
>>> +{
>>> +	struct kvm_memory_slot *mslots = slots->memslots;
>>> +	int i;
>>> +
>>> +	if (WARN_ON_ONCE(slots->id_to_index[new->id] == -1) ||
>>> +	    WARN_ON_ONCE(!slots->used_slots))
>>> +		return -1;
>>> +
>>> +	for (i = slots->id_to_index[new->id]; i < slots->used_slots - 1; i++) {
>>> +		if (new->base_gfn > mslots[i + 1].base_gfn)
>>> +			break;
>>> +
>>> +		WARN_ON_ONCE(new->base_gfn == mslots[i + 1].base_gfn);
>>> +
>>> +		/* Shift the next memslot forward one and update its index. */
>>> +		mslots[i] = mslots[i + 1];
>>> +		slots->id_to_index[mslots[i].id] = i;
>>> +	}
>>> +	return i;
>>> +}
>>> +
>>> +static inline int kvm_shift_memslots_back(struct kvm_memslots *slots,
>>> +					  struct kvm_memory_slot *new,
>>> +					  int start)
>>
>> This new implementation of the insertion sort loses the comments that
>> were there in the old one.  Please keep them as function comments.
> 
> I assume you're talking about this blurb in particular?
> 
> 	 * The ">=" is needed when creating a slot with base_gfn == 0,
> 	 * so that it moves before all those with base_gfn == npages == 0.

Yes, well all of the comments.  You can also keep them in the caller, as
you prefer.

Paolo

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots
  2019-10-22 15:30       ` Paolo Bonzini
@ 2019-10-22 15:52         ` Sean Christopherson
  2019-10-22 15:53           ` Paolo Bonzini
  0 siblings, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-10-22 15:52 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	linux-arm-kernel, Marc Zyngier, Vitaly Kuznetsov, kvmarm,
	Jim Mattson

On Tue, Oct 22, 2019 at 05:30:58PM +0200, Paolo Bonzini wrote:
> On 22/10/19 17:28, Sean Christopherson wrote:
> > On Tue, Oct 22, 2019 at 04:04:18PM +0200, Paolo Bonzini wrote:
> >> On 22/10/19 02:35, Sean Christopherson wrote:
> >>> +static inline int kvm_shift_memslots_forward(struct kvm_memslots *slots,
> >>> +					     struct kvm_memory_slot *new)
> >>> +{
> >>> +	struct kvm_memory_slot *mslots = slots->memslots;
> >>> +	int i;
> >>> +
> >>> +	if (WARN_ON_ONCE(slots->id_to_index[new->id] == -1) ||
> >>> +	    WARN_ON_ONCE(!slots->used_slots))
> >>> +		return -1;
> >>> +
> >>> +	for (i = slots->id_to_index[new->id]; i < slots->used_slots - 1; i++) {
> >>> +		if (new->base_gfn > mslots[i + 1].base_gfn)
> >>> +			break;
> >>> +
> >>> +		WARN_ON_ONCE(new->base_gfn == mslots[i + 1].base_gfn);
> >>> +
> >>> +		/* Shift the next memslot forward one and update its index. */
> >>> +		mslots[i] = mslots[i + 1];
> >>> +		slots->id_to_index[mslots[i].id] = i;
> >>> +	}
> >>> +	return i;
> >>> +}
> >>> +
> >>> +static inline int kvm_shift_memslots_back(struct kvm_memslots *slots,
> >>> +					  struct kvm_memory_slot *new,
> >>> +					  int start)
> >>
> >> This new implementation of the insertion sort loses the comments that
> >> were there in the old one.  Please keep them as function comments.
> > 
> > I assume you're talking about this blurb in particular?
> > 
> > 	 * The ">=" is needed when creating a slot with base_gfn == 0,
> > 	 * so that it moves before all those with base_gfn == npages == 0.
> 
> Yes, well all of the comments.  You can also keep them in the caller, as
> you prefer.

The primary function comment is still there; the only other comment that I
dropped was the second half of the above comment:

	 *
	 * On the other hand, if new->npages is zero, the above loop has
	 * already left i pointing to the beginning of the empty part of
	 * mslots, and the ">=" would move the hole backwards in this
	 * case---which is wrong.  So skip the loop when deleting a slot.
	 */

Which doesn't carry forward very well.  Is there another comment I'm
overlooking?

Anyways, I'm not at all opposed to adding comments, just want to make sure
I'm not forgetting something.  If it's ok with you, I'll comment the code
and/or functions and reply here to refine them without having to respin
the whole series.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots
  2019-10-22 15:52         ` Sean Christopherson
@ 2019-10-22 15:53           ` Paolo Bonzini
  2019-10-24 19:38             ` Sean Christopherson
  0 siblings, 1 reply; 37+ messages in thread
From: Paolo Bonzini @ 2019-10-22 15:53 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	linux-arm-kernel, Marc Zyngier, Vitaly Kuznetsov, kvmarm,
	Jim Mattson

On 22/10/19 17:52, Sean Christopherson wrote:
> 
> Anyways, I'm not at all opposed to adding comments, just want to make sure
> I'm not forgetting something.  If it's ok with you, I'll comment the code
> and/or functions and reply here to refine them without having to respin
> the whole series.

Yes, I agree this is better.

Paolo

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 01/15] KVM: Reinstall old memslots if arch preparation fails
  2019-10-22  0:35 ` [PATCH v2 01/15] KVM: Reinstall old memslots if arch preparation fails Sean Christopherson
@ 2019-10-23  9:29   ` Christoffer Dall
  0 siblings, 0 replies; 37+ messages in thread
From: Christoffer Dall @ 2019-10-23  9:29 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	Marc Zyngier, Paolo Bonzini, Vitaly Kuznetsov, kvmarm,
	linux-arm-kernel, Jim Mattson

On Mon, Oct 21, 2019 at 05:35:23PM -0700, Sean Christopherson wrote:
> Reinstall the old memslots if preparing the new memory region fails
> after invalidating a to-be-{re}moved memslot.
> 
> Remove the superfluous 'old_memslots' variable so that it's somewhat
> clear that the error handling path needs to free the unused memslots,
> not simply the 'old' memslots.
> 
> Fixes: bc6678a33d9b9 ("KVM: introduce kvm->srcu and convert kvm_set_memory_region to SRCU update")
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  virt/kvm/kvm_main.c | 23 ++++++++++++-----------
>  1 file changed, 12 insertions(+), 11 deletions(-)
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 67ef3f2e19e8..9afd706dc038 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -924,7 +924,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  	unsigned long npages;
>  	struct kvm_memory_slot *slot;
>  	struct kvm_memory_slot old, new;
> -	struct kvm_memslots *slots = NULL, *old_memslots;
> +	struct kvm_memslots *slots;
>  	int as_id, id;
>  	enum kvm_mr_change change;
>  
> @@ -1032,7 +1032,13 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  		slot = id_to_memslot(slots, id);
>  		slot->flags |= KVM_MEMSLOT_INVALID;
>  
> -		old_memslots = install_new_memslots(kvm, as_id, slots);
> +		/*
> +		 * We can re-use the old memslots, the only difference from the
> +		 * newly installed memslots is the invalid flag, which will get
> +		 * dropped by update_memslots anyway.  We'll also revert to the
> +		 * old memslots if preparing the new memory region fails.
> +		 */
> +		slots = install_new_memslots(kvm, as_id, slots);
>  
>  		/* From this point no new shadow pages pointing to a deleted,
>  		 * or moved, memslot will be created.
> @@ -1042,13 +1048,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  		 *	- kvm_is_visible_gfn (mmu_check_roots)
>  		 */
>  		kvm_arch_flush_shadow_memslot(kvm, slot);
> -
> -		/*
> -		 * We can re-use the old_memslots from above, the only difference
> -		 * from the currently installed memslots is the invalid flag.  This
> -		 * will get overwritten by update_memslots anyway.
> -		 */
> -		slots = old_memslots;
>  	}
>  
>  	r = kvm_arch_prepare_memory_region(kvm, &new, mem, change);
> @@ -1062,15 +1061,17 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  	}
>  
>  	update_memslots(slots, &new, change);
> -	old_memslots = install_new_memslots(kvm, as_id, slots);
> +	slots = install_new_memslots(kvm, as_id, slots);
>  
>  	kvm_arch_commit_memory_region(kvm, mem, &old, &new, change);
>  
>  	kvm_free_memslot(kvm, &old, &new);
> -	kvfree(old_memslots);
> +	kvfree(slots);
>  	return 0;
>  
>  out_slots:
> +	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
> +		slots = install_new_memslots(kvm, as_id, slots);
>  	kvfree(slots);
>  out_free:
>  	kvm_free_memslot(kvm, &new, &old);
> -- 
> 2.22.0

Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 02/15] KVM: Don't free new memslot if allocation of said memslot fails
  2019-10-22  0:35 ` [PATCH v2 02/15] KVM: Don't free new memslot if allocation of said memslot fails Sean Christopherson
@ 2019-10-23  9:29   ` Christoffer Dall
  0 siblings, 0 replies; 37+ messages in thread
From: Christoffer Dall @ 2019-10-23  9:29 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	Marc Zyngier, Paolo Bonzini, Vitaly Kuznetsov, kvmarm,
	linux-arm-kernel, Jim Mattson

On Mon, Oct 21, 2019 at 05:35:24PM -0700, Sean Christopherson wrote:
> The two implementations of kvm_arch_create_memslot() in x86 and PPC are
> both good citizens and free up all local resources if creation fails.
> Return immediately (via a superfluous goto) instead of calling
> kvm_free_memslot().
> 
> Note, the call to kvm_free_memslot() is effectively an expensive nop in
> this case as there are no resources to be freed.
> 
> No functional change intended.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  virt/kvm/kvm_main.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 9afd706dc038..2cb38b2148cb 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1014,7 +1014,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  		new.userspace_addr = mem->userspace_addr;
>  
>  		if (kvm_arch_create_memslot(kvm, &new, npages))
> -			goto out_free;
> +			goto out;
>  	}
>  
>  	/* Allocate page dirty bitmap if needed */
> -- 
> 2.22.0
> 

Acked-by: Christoffer Dall <christoffer.dall@arm.com>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 09/15] KVM: Move memslot deletion to helper function
  2019-10-22  0:35 ` [PATCH v2 09/15] KVM: Move memslot deletion to helper function Sean Christopherson
@ 2019-10-23  9:29   ` Christoffer Dall
  0 siblings, 0 replies; 37+ messages in thread
From: Christoffer Dall @ 2019-10-23  9:29 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	Marc Zyngier, Paolo Bonzini, Vitaly Kuznetsov, kvmarm,
	linux-arm-kernel, Jim Mattson

Hi Sean,

On Mon, Oct 21, 2019 at 05:35:31PM -0700, Sean Christopherson wrote:
> Move memslot deletion into its own routine so that the success path for
> other memslot updates does not need to use kvm_free_memslot(), i.e. can
> explicitly destroy the dirty bitmap when necessary.  This paves the way
> for dropping @dont from kvm_free_memslot(), i.e. all callers now pass
> NULL for @dont.
> 
> Add a comment above the code to make a copy of the existing memslot
> prior to deletion, it is not at all obvious that the pointer will become
> stale due sorting and/or installation of new memslots.

nit: due to / during

> 
> Note, kvm_arch_commit_memory_region() allows an architecture to free
> resources when moving a memslot or changing its flags, i.e. implement
> logic similar to the dirty bitmap is handling, if such functionality is

nit: s/is handling/handling/

> needed in the future.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

Otherwise looks good to me.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 12/15] KVM: Provide common implementation for generic dirty log functions
  2019-10-22  0:35 ` [PATCH v2 12/15] KVM: Provide common implementation for generic dirty log functions Sean Christopherson
@ 2019-10-23  9:29   ` Christoffer Dall
  2019-10-24 10:28   ` kbuild test robot
  1 sibling, 0 replies; 37+ messages in thread
From: Christoffer Dall @ 2019-10-23  9:29 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	Marc Zyngier, Paolo Bonzini, Vitaly Kuznetsov, kvmarm,
	linux-arm-kernel, Jim Mattson

On Mon, Oct 21, 2019 at 05:35:34PM -0700, Sean Christopherson wrote:
> Move the implementations of KVM_GET_DIRTY_LOG and KVM_CLEAR_DIRTY_LOG
> for CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT into common KVM code.
> The arch specific implementations are extremely similar, differing
> only in whether the dirty log needs to be sync'd from hardware (x86)
> and how the TLBs are flushed.  Add new arch hooks to handle sync
> and TLB flush; the sync will also be used for non-generic dirty log
> support in a future patch (s390).
> 
> The ulterior motive for providing a common implementation is to
> eliminate the dependency between arch and common code with respect to
> the memslot referenced by the dirty log, i.e. to make it obvious in the
> code that the validity of the memslot is guaranteed, as a future patch
> will rework memslot handling such that id_to_memslot() can return NULL.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

For the Arm parts:

Acked-by: Christoffer Dall <christoffer.dall@arm.com>

> ---
>  arch/mips/kvm/mips.c      | 62 ++---------------------------
>  arch/powerpc/kvm/book3s.c |  5 +++
>  arch/powerpc/kvm/booke.c  |  5 +++
>  arch/s390/kvm/kvm-s390.c  |  5 +--
>  arch/x86/kvm/x86.c        | 60 ++--------------------------
>  include/linux/kvm_host.h  | 20 ++++------
>  virt/kvm/arm/arm.c        | 47 ++--------------------
>  virt/kvm/kvm_main.c       | 84 ++++++++++++++++++++++++++++++++-------
>  8 files changed, 99 insertions(+), 189 deletions(-)
> 
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index 713e5465edb0..5c4feeeee447 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -965,69 +965,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl,
>  	return r;
>  }
>  
> -/**
> - * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
> - * @kvm: kvm instance
> - * @log: slot id and address to which we copy the log
> - *
> - * Steps 1-4 below provide general overview of dirty page logging. See
> - * kvm_get_dirty_log_protect() function description for additional details.
> - *
> - * We call kvm_get_dirty_log_protect() to handle steps 1-3, upon return we
> - * always flush the TLB (step 4) even if previous step failed  and the dirty
> - * bitmap may be corrupt. Regardless of previous outcome the KVM logging API
> - * does not preclude user space subsequent dirty log read. Flushing TLB ensures
> - * writes will be marked dirty for next log read.
> - *
> - *   1. Take a snapshot of the bit and clear it if needed.
> - *   2. Write protect the corresponding page.
> - *   3. Copy the snapshot to the userspace.
> - *   4. Flush TLB's if needed.
> - */
> -int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
> +void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
>  {
> -	struct kvm_memslots *slots;
> -	struct kvm_memory_slot *memslot;
> -	bool flush = false;
> -	int r;
>  
> -	mutex_lock(&kvm->slots_lock);
> -
> -	r = kvm_get_dirty_log_protect(kvm, log, &flush);
> -
> -	if (flush) {
> -		slots = kvm_memslots(kvm);
> -		memslot = id_to_memslot(slots, log->slot);
> -
> -		/* Let implementation handle TLB/GVA invalidation */
> -		kvm_mips_callbacks->flush_shadow_memslot(kvm, memslot);
> -	}
> -
> -	mutex_unlock(&kvm->slots_lock);
> -	return r;
>  }
>  
> -int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm, struct kvm_clear_dirty_log *log)
> +void kvm_arch_dirty_log_tlb_flush(struct kvm *kvm, struct kvm_memory_slot *slot)
>  {
> -	struct kvm_memslots *slots;
> -	struct kvm_memory_slot *memslot;
> -	bool flush = false;
> -	int r;
> -
> -	mutex_lock(&kvm->slots_lock);
> -
> -	r = kvm_clear_dirty_log_protect(kvm, log, &flush);
> -
> -	if (flush) {
> -		slots = kvm_memslots(kvm);
> -		memslot = id_to_memslot(slots, log->slot);
> -
> -		/* Let implementation handle TLB/GVA invalidation */
> -		kvm_mips_callbacks->flush_shadow_memslot(kvm, memslot);
> -	}
> -
> -	mutex_unlock(&kvm->slots_lock);
> -	return r;
> +	/* Let implementation handle TLB/GVA invalidation */
> +	kvm_mips_callbacks->flush_shadow_memslot(kvm, memslot);
>  }
>  
>  long kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
> diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
> index a5d4a1014fdf..84f66c1592a3 100644
> --- a/arch/powerpc/kvm/book3s.c
> +++ b/arch/powerpc/kvm/book3s.c
> @@ -829,6 +829,11 @@ int kvmppc_core_check_requests(struct kvm_vcpu *vcpu)
>  	return vcpu->kvm->arch.kvm_ops->check_requests(vcpu);
>  }
>  
> +void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
> +{
> +
> +}
> +
>  int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
>  {
>  	return kvm->arch.kvm_ops->get_dirty_log(kvm, log);
> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> index a22ff567724a..35a4ef89a1db 100644
> --- a/arch/powerpc/kvm/booke.c
> +++ b/arch/powerpc/kvm/booke.c
> @@ -1796,6 +1796,11 @@ int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
>  	return r;
>  }
>  
> +void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
> +{
> +
> +}
> +
>  int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
>  {
>  	return -ENOTSUPP;
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 21f61dae7db1..c0e9929bdb34 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -572,8 +572,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	return r;
>  }
>  
> -static void kvm_s390_sync_dirty_log(struct kvm *kvm,
> -				    struct kvm_memory_slot *memslot)
> +void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
>  {
>  	int i;
>  	gfn_t cur_gfn, last_gfn;
> @@ -633,7 +632,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
>  	if (!memslot->dirty_bitmap)
>  		goto out;
>  
> -	kvm_s390_sync_dirty_log(kvm, memslot);
> +	kvm_arch_sync_dirty_log(kvm, memslot);
>  	r = kvm_get_dirty_log(kvm, log, &is_dirty);
>  	if (r)
>  		goto out;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index adf384ef5969..3d3d70aac031 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4615,77 +4615,23 @@ static int kvm_vm_ioctl_reinject(struct kvm *kvm,
>  	return 0;
>  }
>  
> -/**
> - * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
> - * @kvm: kvm instance
> - * @log: slot id and address to which we copy the log
> - *
> - * Steps 1-4 below provide general overview of dirty page logging. See
> - * kvm_get_dirty_log_protect() function description for additional details.
> - *
> - * We call kvm_get_dirty_log_protect() to handle steps 1-3, upon return we
> - * always flush the TLB (step 4) even if previous step failed  and the dirty
> - * bitmap may be corrupt. Regardless of previous outcome the KVM logging API
> - * does not preclude user space subsequent dirty log read. Flushing TLB ensures
> - * writes will be marked dirty for next log read.
> - *
> - *   1. Take a snapshot of the bit and clear it if needed.
> - *   2. Write protect the corresponding page.
> - *   3. Copy the snapshot to the userspace.
> - *   4. Flush TLB's if needed.
> - */
> -int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
> +void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
>  {
> -	bool flush = false;
> -	int r;
> -
> -	mutex_lock(&kvm->slots_lock);
> -
>  	/*
>  	 * Flush potentially hardware-cached dirty pages to dirty_bitmap.
>  	 */
>  	if (kvm_x86_ops->flush_log_dirty)
>  		kvm_x86_ops->flush_log_dirty(kvm);
> -
> -	r = kvm_get_dirty_log_protect(kvm, log, &flush);
> -
> -	/*
> -	 * All the TLBs can be flushed out of mmu lock, see the comments in
> -	 * kvm_mmu_slot_remove_write_access().
> -	 */
> -	lockdep_assert_held(&kvm->slots_lock);
> -	if (flush)
> -		kvm_flush_remote_tlbs(kvm);
> -
> -	mutex_unlock(&kvm->slots_lock);
> -	return r;
>  }
>  
> -int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm, struct kvm_clear_dirty_log *log)
> +void kvm_arch_dirty_log_tlb_flush(struct kvm *kvm, struct kvm_memory_slot *slot)
>  {
> -	bool flush = false;
> -	int r;
> -
> -	mutex_lock(&kvm->slots_lock);
> -
> -	/*
> -	 * Flush potentially hardware-cached dirty pages to dirty_bitmap.
> -	 */
> -	if (kvm_x86_ops->flush_log_dirty)
> -		kvm_x86_ops->flush_log_dirty(kvm);
> -
> -	r = kvm_clear_dirty_log_protect(kvm, log, &flush);
> -
>  	/*
>  	 * All the TLBs can be flushed out of mmu lock, see the comments in
>  	 * kvm_mmu_slot_remove_write_access().
>  	 */
>  	lockdep_assert_held(&kvm->slots_lock);
> -	if (flush)
> -		kvm_flush_remote_tlbs(kvm);
> -
> -	mutex_unlock(&kvm->slots_lock);
> -	return r;
> +	kvm_flush_remote_tlbs(kvm);
>  }
>  
>  int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_event,
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index bfb8194d0366..80dd823a1a8b 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -797,23 +797,19 @@ vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf);
>  
>  int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext);
>  
> -int kvm_get_dirty_log(struct kvm *kvm,
> -			struct kvm_dirty_log *log, int *is_dirty);
> -
> -int kvm_get_dirty_log_protect(struct kvm *kvm,
> -			      struct kvm_dirty_log *log, bool *flush);
> -int kvm_clear_dirty_log_protect(struct kvm *kvm,
> -				struct kvm_clear_dirty_log *log, bool *flush);
> -
>  void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
>  					struct kvm_memory_slot *slot,
>  					gfn_t gfn_offset,
>  					unsigned long mask);
> +void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot);
>  
> -int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
> -				struct kvm_dirty_log *log);
> -int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm,
> -				  struct kvm_clear_dirty_log *log);
> +#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> +void kvm_arch_dirty_log_tlb_flush(struct kvm *kvm, struct kvm_memory_slot *slot);
> +#else /* !CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
> +int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log);
> +int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
> +		      int *is_dirty);
> +#endif
>  
>  int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
>  			bool line_status);
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 86c6aa1cb58e..03f668e812ea 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -1203,55 +1203,14 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>  	return r;
>  }
>  
> -/**
> - * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
> - * @kvm: kvm instance
> - * @log: slot id and address to which we copy the log
> - *
> - * Steps 1-4 below provide general overview of dirty page logging. See
> - * kvm_get_dirty_log_protect() function description for additional details.
> - *
> - * We call kvm_get_dirty_log_protect() to handle steps 1-3, upon return we
> - * always flush the TLB (step 4) even if previous step failed  and the dirty
> - * bitmap may be corrupt. Regardless of previous outcome the KVM logging API
> - * does not preclude user space subsequent dirty log read. Flushing TLB ensures
> - * writes will be marked dirty for next log read.
> - *
> - *   1. Take a snapshot of the bit and clear it if needed.
> - *   2. Write protect the corresponding page.
> - *   3. Copy the snapshot to the userspace.
> - *   4. Flush TLB's if needed.
> - */
> -int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
> +void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
>  {
> -	bool flush = false;
> -	int r;
>  
> -	mutex_lock(&kvm->slots_lock);
> -
> -	r = kvm_get_dirty_log_protect(kvm, log, &flush);
> -
> -	if (flush)
> -		kvm_flush_remote_tlbs(kvm);
> -
> -	mutex_unlock(&kvm->slots_lock);
> -	return r;
>  }
>  
> -int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm, struct kvm_clear_dirty_log *log)
> +void kvm_arch_dirty_log_tlb_flush(struct kvm *kvm, struct kvm_memory_slot *slot)
>  {
> -	bool flush = false;
> -	int r;
> -
> -	mutex_lock(&kvm->slots_lock);
> -
> -	r = kvm_clear_dirty_log_protect(kvm, log, &flush);
> -
> -	if (flush)
> -		kvm_flush_remote_tlbs(kvm);
> -
> -	mutex_unlock(&kvm->slots_lock);
> -	return r;
> +	kvm_flush_remote_tlbs(kvm);
>  }
>  
>  static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index c3e481986099..c0ce5081dd00 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -781,7 +781,7 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
>  
>  /*
>   * Allocation size is twice as large as the actual dirty bitmap size.
> - * See x86's kvm_vm_ioctl_get_dirty_log() why this is needed.
> + * See kvm_vm_ioctl_get_dirty_log() why this is needed.
>   */
>  static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
>  {
> @@ -1123,6 +1123,7 @@ static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
>  	return kvm_set_memory_region(kvm, mem);
>  }
>  
> +#ifndef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
>  int kvm_get_dirty_log(struct kvm *kvm,
>  			struct kvm_dirty_log *log, int *is_dirty)
>  {
> @@ -1156,13 +1157,12 @@ int kvm_get_dirty_log(struct kvm *kvm,
>  }
>  EXPORT_SYMBOL_GPL(kvm_get_dirty_log);
>  
> -#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> +#else /* CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
>  /**
>   * kvm_get_dirty_log_protect - get a snapshot of dirty pages
>   *	and reenable dirty page tracking for the corresponding pages.
>   * @kvm:	pointer to kvm instance
>   * @log:	slot id and address to which we copy the log
> - * @flush:	true if TLB flush is needed by caller
>   *
>   * We need to keep it in mind that VCPU threads can write to the bitmap
>   * concurrently. So, to avoid losing track of dirty pages we keep the
> @@ -1179,8 +1179,7 @@ EXPORT_SYMBOL_GPL(kvm_get_dirty_log);
>   * exiting to userspace will be logged for the next call.
>   *
>   */
> -int kvm_get_dirty_log_protect(struct kvm *kvm,
> -			struct kvm_dirty_log *log, bool *flush)
> +static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
>  {
>  	struct kvm_memslots *slots;
>  	struct kvm_memory_slot *memslot;
> @@ -1188,6 +1187,7 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
>  	unsigned long n;
>  	unsigned long *dirty_bitmap;
>  	unsigned long *dirty_bitmap_buffer;
> +	bool flush;
>  
>  	as_id = log->slot >> 16;
>  	id = (u16)log->slot;
> @@ -1201,8 +1201,10 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
>  	if (!dirty_bitmap)
>  		return -ENOENT;
>  
> +	kvm_arch_sync_dirty_log(kvm, memslot);
> +
>  	n = kvm_dirty_bitmap_bytes(memslot);
> -	*flush = false;
> +	flush = false;
>  	if (kvm->manual_dirty_log_protect) {
>  		/*
>  		 * Unlike kvm_get_dirty_log, we always return false in *flush,
> @@ -1225,7 +1227,7 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
>  			if (!dirty_bitmap[i])
>  				continue;
>  
> -			*flush = true;
> +			flush = true;
>  			mask = xchg(&dirty_bitmap[i], 0);
>  			dirty_bitmap_buffer[i] = mask;
>  
> @@ -1236,21 +1238,55 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
>  		spin_unlock(&kvm->mmu_lock);
>  	}
>  
> +	if (flush)
> +		kvm_arch_dirty_log_tlb_flush(kvm, memslot);
> +
>  	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
>  		return -EFAULT;
>  	return 0;
>  }
> -EXPORT_SYMBOL_GPL(kvm_get_dirty_log_protect);
> +
> +
> +/**
> + * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
> + * @kvm: kvm instance
> + * @log: slot id and address to which we copy the log
> + *
> + * Steps 1-4 below provide general overview of dirty page logging. See
> + * kvm_get_dirty_log_protect() function description for additional details.
> + *
> + * We call kvm_get_dirty_log_protect() to handle steps 1-3, upon return we
> + * always flush the TLB (step 4) even if previous step failed  and the dirty
> + * bitmap may be corrupt. Regardless of previous outcome the KVM logging API
> + * does not preclude user space subsequent dirty log read. Flushing TLB ensures
> + * writes will be marked dirty for next log read.
> + *
> + *   1. Take a snapshot of the bit and clear it if needed.
> + *   2. Write protect the corresponding page.
> + *   3. Copy the snapshot to the userspace.
> + *   4. Flush TLB's if needed.
> + */
> +static int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
> +				      struct kvm_dirty_log *log)
> +{
> +	int r;
> +
> +	mutex_lock(&kvm->slots_lock);
> +
> +	r = kvm_get_dirty_log_protect(kvm, log);
> +
> +	mutex_unlock(&kvm->slots_lock);
> +	return r;
> +}
>  
>  /**
>   * kvm_clear_dirty_log_protect - clear dirty bits in the bitmap
>   *	and reenable dirty page tracking for the corresponding pages.
>   * @kvm:	pointer to kvm instance
>   * @log:	slot id and address from which to fetch the bitmap of dirty pages
> - * @flush:	true if TLB flush is needed by caller
>   */
> -int kvm_clear_dirty_log_protect(struct kvm *kvm,
> -				struct kvm_clear_dirty_log *log, bool *flush)
> +static int kvm_clear_dirty_log_protect(struct kvm *kvm,
> +				       struct kvm_clear_dirty_log *log)
>  {
>  	struct kvm_memslots *slots;
>  	struct kvm_memory_slot *memslot;
> @@ -1259,6 +1295,7 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
>  	unsigned long i, n;
>  	unsigned long *dirty_bitmap;
>  	unsigned long *dirty_bitmap_buffer;
> +	bool flush;
>  
>  	as_id = log->slot >> 16;
>  	id = (u16)log->slot;
> @@ -1282,7 +1319,9 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
>  	    (log->num_pages < memslot->npages - log->first_page && (log->num_pages & 63)))
>  	    return -EINVAL;
>  
> -	*flush = false;
> +	kvm_arch_sync_dirty_log(kvm, memslot);
> +
> +	flush = false;
>  	dirty_bitmap_buffer = kvm_second_dirty_bitmap(memslot);
>  	if (copy_from_user(dirty_bitmap_buffer, log->dirty_bitmap, n))
>  		return -EFAULT;
> @@ -1305,17 +1344,32 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
>  		 * a problem if userspace sets them in log->dirty_bitmap.
>  		*/
>  		if (mask) {
> -			*flush = true;
> +			flush = true;
>  			kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
>  								offset, mask);
>  		}
>  	}
>  	spin_unlock(&kvm->mmu_lock);
>  
> +	if (flush)
> +		kvm_arch_dirty_log_tlb_flush(kvm, memslot);
> +
>  	return 0;
>  }
> -EXPORT_SYMBOL_GPL(kvm_clear_dirty_log_protect);
> -#endif
> +
> +static int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm,
> +					struct kvm_clear_dirty_log *log)
> +{
> +	int r;
> +
> +	mutex_lock(&kvm->slots_lock);
> +
> +	r = kvm_clear_dirty_log_protect(kvm, log);
> +
> +	mutex_unlock(&kvm->slots_lock);
> +	return r;
> +}
> +#endif /* CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
>  
>  bool kvm_largepages_enabled(void)
>  {
> -- 
> 2.22.0
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 00/15] KVM: Dynamically size memslot arrays
  2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
                   ` (16 preceding siblings ...)
  2019-10-22 14:04 ` Paolo Bonzini
@ 2019-10-23  9:39 ` Christoffer Dall
  17 siblings, 0 replies; 37+ messages in thread
From: Christoffer Dall @ 2019-10-23  9:39 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	Marc Zyngier, Paolo Bonzini, Vitaly Kuznetsov, kvmarm,
	linux-arm-kernel, Jim Mattson

On Mon, Oct 21, 2019 at 05:35:22PM -0700, Sean Christopherson wrote:
> The end goal of this series is to dynamically size the memslot array so
> that KVM allocates memory based on the number of memslots in use, as
> opposed to unconditionally allocating memory for the maximum number of
> memslots.  On x86, each memslot consumes 88 bytes, and so with 2 address
> spaces of 512 memslots, each VM consumes ~90k bytes for the memslots.
> E.g. given a VM that uses a total of 30 memslots, dynamic sizing reduces
> the memory footprint from 90k to ~2.6k bytes.
> 
> The changes required to support dynamic sizing are relatively small,
> e.g. are essentially contained in patches 14/15 and 15/15.  Patches 1-11
> clean up the memslot code, which has gotten quite crusty, especially
> __kvm_set_memory_region().  The clean up is likely not strictly necessary
> to switch to dynamic sizing, but I didn't have a remotely reasonable
> level of confidence in the correctness of the dynamic sizing without first
> doing the clean up.
> 
> Testing, especially non-x86 platforms, would be greatly appreciated.  The
> non-x86 changes are for all intents and purposes untested, e.g. I compile
> tested pieces of the code by copying them into x86, but that's it.  In
> theory, the vast majority of the functional changes are arch agnostic, in
> theory...

I've built this for arm/arm64, and I've run my usual set of tests, which
pass fine.  I've also run the selftest framework's tests for dirty logging
and the migration loop test for arm64, and they pass fine.

You can add my (for arm64):

Tested-by: Christoffer Dall <christoffer.dall@arm.com>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 00/15] KVM: Dynamically size memslot arrays
  2019-10-22 13:59 ` [PATCH v2 00/15] KVM: Dynamically size memslot arrays Paolo Bonzini
@ 2019-10-23 18:56   ` Christian Borntraeger
  0 siblings, 0 replies; 37+ messages in thread
From: Christian Borntraeger @ 2019-10-23 18:56 UTC (permalink / raw)
  To: Paolo Bonzini, Sean Christopherson, James Hogan, Paul Mackerras,
	Janosch Frank, Radim Krčmář,
	Marc Zyngier
  Cc: Wanpeng Li, kvm, David Hildenbrand, Marc Zyngier, Joerg Roedel,
	Cornelia Huck, linux-mips, kvm-ppc, linux-kernel,
	linux-arm-kernel, Vitaly Kuznetsov, kvmarm, Jim Mattson



On 22.10.19 15:59, Paolo Bonzini wrote:
> On 22/10/19 02:35, Sean Christopherson wrote:
>> The end goal of this series is to dynamically size the memslot array so
>> that KVM allocates memory based on the number of memslots in use, as
>> opposed to unconditionally allocating memory for the maximum number of
>> memslots.  On x86, each memslot consumes 88 bytes, and so with 2 address
>> spaces of 512 memslots, each VM consumes ~90k bytes for the memslots.
>> E.g. given a VM that uses a total of 30 memslots, dynamic sizing reduces
>> the memory footprint from 90k to ~2.6k bytes.
>>
>> The changes required to support dynamic sizing are relatively small,
>> e.g. are essentially contained in patches 14/15 and 15/15.  Patches 1-11
>> clean up the memslot code, which has gotten quite crusty, especially
>> __kvm_set_memory_region().  The clean up is likely not strictly necessary
>> to switch to dynamic sizing, but I didn't have a remotely reasonable
>> level of confidence in the correctness of the dynamic sizing without first
>> doing the clean up.
>>
>> Testing, especially non-x86 platforms, would be greatly appreciated.  The
>> non-x86 changes are for all intents and purposes untested, e.g. I compile
>> tested pieces of the code by copying them into x86, but that's it.  In
>> theory, the vast majority of the functional changes are arch agnostic, in
>> theory...
>>
>> v2:
>>   - Split "Drop kvm_arch_create_memslot()" into three patches to move
>>     minor functional changes to standalone patches [Janosch].
>>   - Rebase to latest kvm/queue (f0574a1cea5b, "KVM: x86: fix ...")
>>   - Collect an Acked-by and a Reviewed-by
>>
>> Sean Christopherson (15):
>>   KVM: Reinstall old memslots if arch preparation fails
>>   KVM: Don't free new memslot if allocation of said memslot fails
>>   KVM: PPC: Move memslot memory allocation into prepare_memory_region()
>>   KVM: x86: Allocate memslot resources during prepare_memory_region()
>>   KVM: Drop kvm_arch_create_memslot()
>>   KVM: Explicitly free allocated-but-unused dirty bitmap
>>   KVM: Refactor error handling for setting memory region
>>   KVM: Move setting of memslot into helper routine
>>   KVM: Move memslot deletion to helper function
>>   KVM: Simplify kvm_free_memslot() and all its descendents
>>   KVM: Clean up local variable usage in __kvm_set_memory_region()
>>   KVM: Provide common implementation for generic dirty log functions
>>   KVM: Ensure validity of memslot with respect to kvm_get_dirty_log()
>>   KVM: Terminate memslot walks via used_slots
>>   KVM: Dynamically size memslot array based on number of used slots
>>
>>  arch/mips/include/asm/kvm_host.h      |   2 +-
>>  arch/mips/kvm/mips.c                  |  68 +---
>>  arch/powerpc/include/asm/kvm_ppc.h    |  14 +-
>>  arch/powerpc/kvm/book3s.c             |  22 +-
>>  arch/powerpc/kvm/book3s_hv.c          |  36 +-
>>  arch/powerpc/kvm/book3s_pr.c          |  20 +-
>>  arch/powerpc/kvm/booke.c              |  17 +-
>>  arch/powerpc/kvm/powerpc.c            |  13 +-
>>  arch/s390/include/asm/kvm_host.h      |   2 +-
>>  arch/s390/kvm/kvm-s390.c              |  21 +-
>>  arch/x86/include/asm/kvm_page_track.h |   3 +-
>>  arch/x86/kvm/page_track.c             |  15 +-
>>  arch/x86/kvm/x86.c                    | 100 ++---
>>  include/linux/kvm_host.h              |  48 +--
>>  virt/kvm/arm/arm.c                    |  47 +--
>>  virt/kvm/arm/mmu.c                    |  18 +-
>>  virt/kvm/kvm_main.c                   | 546 ++++++++++++++++----------
>>  17 files changed, 467 insertions(+), 525 deletions(-)
>>
> 
> Christian, Marc, Paul, can you help test patches 1-13?

I see no obvious breakage on s390, but I will do some more testing over the next few days.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 12/15] KVM: Provide common implementation for generic dirty log functions
  2019-10-22  0:35 ` [PATCH v2 12/15] KVM: Provide common implementation for generic dirty log functions Sean Christopherson
  2019-10-23  9:29   ` Christoffer Dall
@ 2019-10-24 10:28   ` kbuild test robot
  1 sibling, 0 replies; 37+ messages in thread
From: kbuild test robot @ 2019-10-24 10:28 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Wanpeng Li, kvm, James Hogan, linux-kernel, Paul Mackerras,
	kvmarm, Janosch Frank, Marc Zyngier, Joerg Roedel,
	David Hildenbrand, Christian Borntraeger, kvm-ppc,
	linux-arm-kernel, Jim Mattson, kbuild-all, Cornelia Huck,
	linux-mips, Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov

[-- Attachment #1: Type: text/plain, Size: 2704 bytes --]

Hi Sean,

I love your patch! Yet something to improve:

[auto build test ERROR on kvm/linux-next]
[cannot apply to v5.4-rc4 next-20191024]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Sean-Christopherson/KVM-Dynamically-size-memslot-arrays/20191024-144336
base:   https://git.kernel.org/pub/scm/virt/kvm/kvm.git linux-next
config: mips-malta_kvm_defconfig (attached as .config)
compiler: mipsel-linux-gcc (GCC) 7.4.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.4.0 make.cross ARCH=mips 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   arch/mips/kvm/mips.c: In function 'kvm_arch_dirty_log_tlb_flush':
>> arch/mips/kvm/mips.c:976:48: error: 'memslot' undeclared (first use in this function); did you mean 'memset'?
     kvm_mips_callbacks->flush_shadow_memslot(kvm, memslot);
                                                   ^~~~~~~
                                                   memset
   arch/mips/kvm/mips.c:976:48: note: each undeclared identifier is reported only once for each function it appears in
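
The likely fix, an assumption here rather than a fix posted in this thread,
is simply to rename the parameter so it matches the body, e.g.:

        void kvm_arch_dirty_log_tlb_flush(struct kvm *kvm,
                                          struct kvm_memory_slot *memslot)
        {
                /* Let implementation handle TLB/GVA invalidation */
                kvm_mips_callbacks->flush_shadow_memslot(kvm, memslot);
        }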

vim +976 arch/mips/kvm/mips.c

669e846e6c4e13 arch/mips/kvm/kvm_mips.c Sanjay Lal          2012-11-21  972  
3e2c881a0e3a6e arch/mips/kvm/mips.c     Sean Christopherson 2019-10-21  973  void kvm_arch_dirty_log_tlb_flush(struct kvm *kvm, struct kvm_memory_slot *slot)
2a31b9db153530 arch/mips/kvm/mips.c     Paolo Bonzini       2018-10-23  974  {
2a31b9db153530 arch/mips/kvm/mips.c     Paolo Bonzini       2018-10-23  975  	/* Let implementation handle TLB/GVA invalidation */
2a31b9db153530 arch/mips/kvm/mips.c     Paolo Bonzini       2018-10-23 @976  	kvm_mips_callbacks->flush_shadow_memslot(kvm, memslot);
2a31b9db153530 arch/mips/kvm/mips.c     Paolo Bonzini       2018-10-23  977  }
2a31b9db153530 arch/mips/kvm/mips.c     Paolo Bonzini       2018-10-23  978  

:::::: The code at line 976 was first introduced by commit
:::::: 2a31b9db153530df4aa02dac8c32837bf5f47019 kvm: introduce manual dirty log reprotect

:::::: TO: Paolo Bonzini <pbonzini@redhat.com>
:::::: CC: Paolo Bonzini <pbonzini@redhat.com>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 20625 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 03/15] KVM: PPC: Move memslot memory allocation into prepare_memory_region()
  2019-10-22  0:35 ` [PATCH v2 03/15] KVM: PPC: Move memslot memory allocation into prepare_memory_region() Sean Christopherson
@ 2019-10-24 11:55   ` kbuild test robot
  0 siblings, 0 replies; 37+ messages in thread
From: kbuild test robot @ 2019-10-24 11:55 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Wanpeng Li, kvm, James Hogan, linux-kernel, Paul Mackerras,
	kvmarm, Janosch Frank, Marc Zyngier, Joerg Roedel,
	David Hildenbrand, Christian Borntraeger, kvm-ppc,
	linux-arm-kernel, Jim Mattson, kbuild-all, Cornelia Huck,
	linux-mips, Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov

[-- Attachment #1: Type: text/plain, Size: 2648 bytes --]

Hi Sean,

I love your patch! Yet something to improve:

[auto build test ERROR on kvm/linux-next]
[cannot apply to v5.4-rc4 next-20191024]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Sean-Christopherson/KVM-Dynamically-size-memslot-arrays/20191024-144336
base:   https://git.kernel.org/pub/scm/virt/kvm/kvm.git linux-next
config: powerpc-defconfig (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 7.4.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.4.0 make.cross ARCH=powerpc 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   arch/powerpc/kvm/powerpc.c: In function 'kvm_arch_prepare_memory_region':
>> arch/powerpc/kvm/powerpc.c:701:9: error: too many arguments to function 'kvmppc_core_prepare_memory_region'
     return kvmppc_core_prepare_memory_region(kvm, memslot, mem, change);
            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   In file included from arch/powerpc/kvm/powerpc.c:24:0:
   arch/powerpc/include/asm/kvm_ppc.h:206:12: note: declared here
    extern int kvmppc_core_prepare_memory_region(struct kvm *kvm,
               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--
>> arch/powerpc/kvm/book3s.c:848:5: error: conflicting types for 'kvmppc_core_prepare_memory_region'
    int kvmppc_core_prepare_memory_region(struct kvm *kvm,
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   In file included from arch/powerpc/kvm/book3s.c:30:0:
   arch/powerpc/include/asm/kvm_ppc.h:206:12: note: previous declaration of 'kvmppc_core_prepare_memory_region' was here
    extern int kvmppc_core_prepare_memory_region(struct kvm *kvm,
               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
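
The conflict suggests the declaration in arch/powerpc/include/asm/kvm_ppc.h
was not updated to the new signature; presumably (an assumption here, no fix
is posted in this thread) it needs to match the call site quoted below, e.g.:

        extern int kvmppc_core_prepare_memory_region(struct kvm *kvm,
                                struct kvm_memory_slot *memslot,
                                const struct kvm_userspace_memory_region *mem,
                                enum kvm_mr_change change);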

vim +/kvmppc_core_prepare_memory_region +701 arch/powerpc/kvm/powerpc.c

   695	
   696	int kvm_arch_prepare_memory_region(struct kvm *kvm,
   697					   struct kvm_memory_slot *memslot,
   698					   const struct kvm_userspace_memory_region *mem,
   699					   enum kvm_mr_change change)
   700	{
 > 701		return kvmppc_core_prepare_memory_region(kvm, memslot, mem, change);
   702	}
   703	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 25428 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots
  2019-10-22 15:53           ` Paolo Bonzini
@ 2019-10-24 19:38             ` Sean Christopherson
  2019-10-24 19:42               ` Sean Christopherson
  2019-10-24 20:24               ` Paolo Bonzini
  0 siblings, 2 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-24 19:38 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	linux-arm-kernel, Marc Zyngier, Vitaly Kuznetsov, kvmarm,
	Jim Mattson

On Tue, Oct 22, 2019 at 05:53:27PM +0200, Paolo Bonzini wrote:
> On 22/10/19 17:52, Sean Christopherson wrote:
> > 
> > Anyways, I'm not at all opposed to adding comments, just want to make sure
> > I'm not forgetting something.  If it's ok with you, I'll comment the code
> > and/or functions and reply here to refine them without having to respin
> > the whole series.
> 
> Yes, I agree this is better.

Here's what I ended up with.  I also added kvm_memslot_insert_back() to
help document the purpose of incrementing used_slots, and renamed
kvm_shift_memslots_forward()->kvm_memslot_move_backward() and
kvm_shift_memslots_backward()->kvm_memslot_move_forward() because I had
trouble reconciling comments that focus on the changed memslot with
function names that reflect what happens to the other memslots.



/*
 * Delete a memslot by decrementing the number of used slots and shifting all
 * other entries in the array forward one spot.
 */
static inline void kvm_memslot_delete(struct kvm_memslots *slots,
				      struct kvm_memory_slot *memslot)
{
	struct kvm_memory_slot *mslots = slots->memslots;
	int i;

	if (WARN_ON(slots->id_to_index[memslot->id] == -1))
		return;

	slots->used_slots--;

	for (i = slots->id_to_index[memslot->id]; i < slots->used_slots; i++) {
		mslots[i] = mslots[i + 1];
		slots->id_to_index[mslots[i].id] = i;
	}
	mslots[i] = *memslot;
	slots->id_to_index[memslot->id] = -1;
}

/*
 * "Insert" a new memslot by incrementing the number of used slots.  Returns
 * the new slot's initial index into the memslots array.
 */
static inline int kvm_memslot_insert_back(struct kvm_memslots *slots)
{
	return slots->used_slots++;
}

/*
 * Move a changed memslot backwards in the array by shifting existing slots
 * with a higher GFN toward the front of the array.  Note, the changed memslot
 * itself is not preserved in the array, i.e. not swapped at this time, only
 * its new index into the array is update.  Returns the changed memslot's
 * current index into the memslots array.
 */
static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
					    struct kvm_memory_slot *memslot)
{
	struct kvm_memory_slot *mslots = slots->memslots;
	int i;

	if (WARN_ON_ONCE(slots->id_to_index[memslot->id] == -1) ||
	    WARN_ON_ONCE(!slots->used_slots))
		return -1;

	for (i = slots->id_to_index[memslot->id]; i < slots->used_slots - 1; i++) {
		if (memslot->base_gfn > mslots[i + 1].base_gfn)
			break;

		WARN_ON_ONCE(memslot->base_gfn == mslots[i + 1].base_gfn);

		/* Shift the next memslot forward one and update its index. */
		mslots[i] = mslots[i + 1];
		slots->id_to_index[mslots[i].id] = i;
	}
	return i;
}

/*
 * Move a changed memslot forwards in the array by shifting existing slots with
 * a lower GFN toward the back of the array.  Note, the changed memslot itself
 * is not preserved in the array, i.e. not swapped at this time, only its new
 * index into the array is updated.  Returns the changed memslot's final index
 * into the memslots array.
 */
static inline int kvm_memslot_move_forward(struct kvm_memslots *slots,
					   struct kvm_memory_slot *memslot,
					   int start)
{
	struct kvm_memory_slot *mslots = slots->memslots;
	int i;

	for (i = start; i > 0; i--) {
		if (memslot->base_gfn < mslots[i - 1].base_gfn)
			break;

		WARN_ON_ONCE(memslot->base_gfn == mslots[i - 1].base_gfn);

		/* Shift the next memslot back one and update its index. */
		mslots[i] = mslots[i - 1];
		slots->id_to_index[mslots[i].id] = i;
	}
	return i;
}

/*
 * Re-sort memslots based on their GFN to account for an added, deleted, or
 * moved memslot.  Sorting memslots allows using a binary search during memslot
 * lookup.
 *
 * IMPORTANT: Slots are sorted from highest GFN to lowest GFN!  I.e. the entry
 * at memslots[0] has the highest GFN.
 *
 * The sorting algorithm takes advantage of having initially sorted memslots
 * and knowing the position of the changed memslot.  Sorting is also optimized
 * by not swapping the updated memslot and instead only shifting other memslots
 * and tracking the new index for the update memslot.  Only once its final
 * index is known is the updated memslot copied into its position in the array.
 *
 *  - When deleting a memslot, the deleted memslot simply needs to be moved to
 *    the end of the array.
 *
 *  - When creating a memslot, the algorithm "inserts" the new memslot at the
 *    end of the array and then it forward to its correct location.
 *
 *  - When moving a memslot, the algorithm first moves the updated memslot
 *    backward to handle the scenario where the memslot's GFN was changed to a
 *    lower value.  update_memslots() then falls through and runs the same flow
 *    as creating a memslot to move the memslot forward to handle the scenario
 *    where its GFN was changed to a higher value.
 *
 * Note, slots are sorted from highest->lowest instead of lowest->highest for
 * historical reasons.  Originally, invalid memslots where denoted by having
 * GFN=0, thus sorting from highest->lowest naturally sorted invalid memslots
 * to the end of the array.  The current algorithm uses dedicated logic when
 * deleting a memslot and thus does not rely on invalid memslots having GFN=0.
 */
static void update_memslots(struct kvm_memslots *slots,
			    struct kvm_memory_slot *memslot,
			    enum kvm_mr_change change)
{
	int i;

	if (change == KVM_MR_DELETE) {
		kvm_memslot_delete(slots, memslot);
	} else {
		if (change == KVM_MR_CREATE)
			i = kvm_memslot_insert_back(slots);
		else
			i = kvm_memslot_move_backward(slots, memslot);
		i = kvm_memslot_move_forward(slots, memslot, i);

		/*
		 * Copy the memslot to its new position in memslots and update
		 * its index accordingly.
		 */
		slots->memslots[i] = *memslot;
		slots->id_to_index[memslot->id] = i;
	}
}

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots
  2019-10-24 19:38             ` Sean Christopherson
@ 2019-10-24 19:42               ` Sean Christopherson
  2019-10-24 20:24               ` Paolo Bonzini
  1 sibling, 0 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-24 19:42 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	linux-arm-kernel, Marc Zyngier, Vitaly Kuznetsov, kvmarm,
	Jim Mattson

On Thu, Oct 24, 2019 at 12:38:56PM -0700, Sean Christopherson wrote:
> On Tue, Oct 22, 2019 at 05:53:27PM +0200, Paolo Bonzini wrote:
> > On 22/10/19 17:52, Sean Christopherson wrote:
> > > 
> > > Anyways, I'm not at all opposed to adding comments, just want to make sure
> > > I'm not forgetting something.  If it's ok with you, I'll comment the code
> > > and/or functions and reply here to refine them without having to respin
> > > the whole series.
> > 
> > Yes, I agree this is better.
> 
> Here's what I ended up with.  I also added kvm_memslot_insert_back() to
> help document the purpose of incrementing used_slots, and renamed
> kvm_shift_memslots_forward()->kvm_memslot_move_backward() and
> kvm_shift_memslots_backward()->kvm_memslot_move_forward() because I was
> having trouble reconciling having the comments focus on the changed
> memslot while the names of the functions reflected what happens to the
> other memslots.

Oh, and I need to respin the series to fix build bugs on MIPS and PPC,
I'll wait to do that until I get a thumbs up on this code.
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots
  2019-10-24 19:38             ` Sean Christopherson
  2019-10-24 19:42               ` Sean Christopherson
@ 2019-10-24 20:24               ` Paolo Bonzini
  2019-10-24 20:48                 ` Sean Christopherson
  1 sibling, 1 reply; 37+ messages in thread
From: Paolo Bonzini @ 2019-10-24 20:24 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	linux-arm-kernel, Marc Zyngier, Vitaly Kuznetsov, kvmarm,
	Jim Mattson

On 24/10/19 21:38, Sean Christopherson wrote:
> only
>  * its new index into the array is update.

s/update/tracked/?

  Returns the changed memslot's
>  * current index into the memslots array.
>  */
> static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
> 					    struct kvm_memory_slot *memslot)
> {
> 	struct kvm_memory_slot *mslots = slots->memslots;
> 	int i;
> 
> 	if (WARN_ON_ONCE(slots->id_to_index[memslot->id] == -1) ||
> 	    WARN_ON_ONCE(!slots->used_slots))
> 		return -1;
> 
> 	for (i = slots->id_to_index[memslot->id]; i < slots->used_slots - 1; i++) {
> 		if (memslot->base_gfn > mslots[i + 1].base_gfn)
> 			break;
> 
> 		WARN_ON_ONCE(memslot->base_gfn == mslots[i + 1].base_gfn);
> 
> 		/* Shift the next memslot forward one and update its index. */
> 		mslots[i] = mslots[i + 1];
> 		slots->id_to_index[mslots[i].id] = i;
> 	}
> 	return i;
> }
> 
> /*
>  * Move a changed memslot forwards in the array by shifting existing slots with
>  * a lower GFN toward the back of the array.  Note, the changed memslot itself
>  * is not preserved in the array, i.e. not swapped at this time, only its new
>  * index into the array is updated

Same here?

>  * Note, slots are sorted from highest->lowest instead of lowest->highest for
>  * historical reasons.

Not just that, the largest slot (with all RAM above 4GB) is also often
at the highest address at least on x86.  But we could sort them by size
now, so I agree to call these historical reasons.

The code itself is fine, thanks for the work on documenting it.

Paolo

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots
  2019-10-24 20:24               ` Paolo Bonzini
@ 2019-10-24 20:48                 ` Sean Christopherson
  0 siblings, 0 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-10-24 20:48 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Cornelia Huck, Wanpeng Li, Janosch Frank, kvm, James Hogan,
	Joerg Roedel, David Hildenbrand, linux-mips, kvm-ppc,
	linux-kernel, Paul Mackerras, Christian Borntraeger,
	linux-arm-kernel, Marc Zyngier, Vitaly Kuznetsov, kvmarm,
	Jim Mattson

On Thu, Oct 24, 2019 at 10:24:09PM +0200, Paolo Bonzini wrote:
> On 24/10/19 21:38, Sean Christopherson wrote:
> > only
> >  * its new index into the array is update.
> 
> s/update/tracked/?

Ya, tracked is better.  Waffled between updated and tracked, chose poorly :-)

>   Returns the changed memslot's
> >  * current index into the memslots array.
> >  */
> > static inline int kvm_memslot_move_backward(struct kvm_memslots *slots,
> > 					    struct kvm_memory_slot *memslot)
> > {
> > 	struct kvm_memory_slot *mslots = slots->memslots;
> > 	int i;
> > 
> > 	if (WARN_ON_ONCE(slots->id_to_index[memslot->id] == -1) ||
> > 	    WARN_ON_ONCE(!slots->used_slots))
> > 		return -1;
> > 
> > 	for (i = slots->id_to_index[memslot->id]; i < slots->used_slots - 1; i++) {
> > 		if (memslot->base_gfn > mslots[i + 1].base_gfn)
> > 			break;
> > 
> > 		WARN_ON_ONCE(memslot->base_gfn == mslots[i + 1].base_gfn);
> > 
> > 		/* Shift the next memslot forward one and update its index. */
> > 		mslots[i] = mslots[i + 1];
> > 		slots->id_to_index[mslots[i].id] = i;
> > 	}
> > 	return i;
> > }
> > 
> > /*
> >  * Move a changed memslot forwards in the array by shifting existing slots with
> >  * a lower GFN toward the back of the array.  Note, the changed memslot itself
> >  * is not preserved in the array, i.e. not swapped at this time, only its new
> >  * index into the array is updated
> 
> Same here?
> 
> >  * Note, slots are sorted from highest->lowest instead of lowest->highest for
> >  * historical reasons.
> 
> Not just that, the largest slot (with all RAM above 4GB) is also often
> at the highest address at least on x86.

Ah, increasing the odds of a quick hit on lookup...but only when using a
linear search.  The binary search starts in the middle, so that
optimization is also historical :-)

> But we could sort them by size now, so I agree to call these historical
> reasons.

That wouldn't work with the binary search though.

> The code itself is fine, thanks for the work on documenting it.
> 
> Paolo
> 
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 37+ messages in thread

end of thread, other threads:[~2019-10-24 20:48 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-22  0:35 [PATCH v2 00/15] KVM: Dynamically size memslot arrays Sean Christopherson
2019-10-22  0:35 ` [PATCH v2 01/15] KVM: Reinstall old memslots if arch preparation fails Sean Christopherson
2019-10-23  9:29   ` Christoffer Dall
2019-10-22  0:35 ` [PATCH v2 02/15] KVM: Don't free new memslot if allocation of said memslot fails Sean Christopherson
2019-10-23  9:29   ` Christoffer Dall
2019-10-22  0:35 ` [PATCH v2 03/15] KVM: PPC: Move memslot memory allocation into prepare_memory_region() Sean Christopherson
2019-10-24 11:55   ` kbuild test robot
2019-10-22  0:35 ` [PATCH v2 04/15] KVM: x86: Allocate memslot resources during prepare_memory_region() Sean Christopherson
2019-10-22  0:35 ` [PATCH v2 05/15] KVM: Drop kvm_arch_create_memslot() Sean Christopherson
2019-10-22  0:35 ` [PATCH v2 06/15] KVM: Explicitly free allocated-but-unused dirty bitmap Sean Christopherson
2019-10-22  0:35 ` [PATCH v2 07/15] KVM: Refactor error handling for setting memory region Sean Christopherson
2019-10-22  0:35 ` [PATCH v2 08/15] KVM: Move setting of memslot into helper routine Sean Christopherson
2019-10-22  0:35 ` [PATCH v2 09/15] KVM: Move memslot deletion to helper function Sean Christopherson
2019-10-23  9:29   ` Christoffer Dall
2019-10-22  0:35 ` [PATCH v2 10/15] KVM: Simplify kvm_free_memslot() and all its descendents Sean Christopherson
2019-10-22  0:35 ` [PATCH v2 11/15] KVM: Clean up local variable usage in __kvm_set_memory_region() Sean Christopherson
2019-10-22  0:35 ` [PATCH v2 12/15] KVM: Provide common implementation for generic dirty log functions Sean Christopherson
2019-10-23  9:29   ` Christoffer Dall
2019-10-24 10:28   ` kbuild test robot
2019-10-22  0:35 ` [PATCH v2 13/15] KVM: Ensure validity of memslot with respect to kvm_get_dirty_log() Sean Christopherson
2019-10-22  0:35 ` [PATCH v2 14/15] KVM: Terminate memslot walks via used_slots Sean Christopherson
2019-10-22 14:04   ` Paolo Bonzini
2019-10-22 15:28     ` Sean Christopherson
2019-10-22 15:30       ` Paolo Bonzini
2019-10-22 15:52         ` Sean Christopherson
2019-10-22 15:53           ` Paolo Bonzini
2019-10-24 19:38             ` Sean Christopherson
2019-10-24 19:42               ` Sean Christopherson
2019-10-24 20:24               ` Paolo Bonzini
2019-10-24 20:48                 ` Sean Christopherson
2019-10-22  0:35 ` [PATCH v2 15/15] KVM: Dynamically size memslot array based on number of used slots Sean Christopherson
2019-10-22 14:04   ` Paolo Bonzini
2019-10-22 15:22     ` Sean Christopherson
2019-10-22 13:59 ` [PATCH v2 00/15] KVM: Dynamically size memslot arrays Paolo Bonzini
2019-10-23 18:56   ` Christian Borntraeger
2019-10-22 14:04 ` Paolo Bonzini
2019-10-23  9:39 ` Christoffer Dall

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).