* [RFC PATCH 00/11] KVM: multiple address spaces (for SMM)
@ 2015-05-18 13:48 Paolo Bonzini
  2015-05-18 13:48 ` [PATCH 01/11] KVM: introduce kvm_alloc/free_memslots Paolo Bonzini
                   ` (11 more replies)
  0 siblings, 12 replies; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-18 13:48 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Xiao Guangrong, rkrcmar, bdas

This implements Avi's suggestion of having two separate kvm_memslots for
regular and SMM operation, corresponding to different address spaces.

All in all, the surgery is limited even though there are a few preparatory
patches that touch all architectures.

The amount of code added for the vcpu-specific versions of kvm_read_guest
and kvm_write_guest is smaller than one might expect, and duplication is
limited to a couple of functions.  Even the rmap parts, which scared me a
lot when the first version OOPSed on me :), are actually very easy.

Patches 1-6 are preparatory cleanups that can be applied separately,
while the others will be posted in v2 of the SMM patches.

Patches 7-8 add the new functions (this time in virt/kvm/kvm_main.c).
Architectures can then define a function kvm_arch_vcpu_memslots_id that
returns the active address space id for a given VCPU.  The address space
ID must be passed to KVM_SET_USER_MEMORY_REGION and KVM_GET_DIRTY_LOG,
using the high 16 bits of the slot id.
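
For illustration, a userspace caller would select the target address
space by encoding its id in the slot number, roughly like this (a sketch
only; vm_fd, smram_backing and the choice of id 1 for SMRAM are
assumptions for the example, not uapi defined by this series):

    /* Register slot 5 in address space 1 (e.g. SMRAM). */
    struct kvm_userspace_memory_region region = {
        .slot            = (1 << 16) | 5,   /* high 16 bits: address space id */
        .guest_phys_addr = 0xa0000,
        .memory_size     = 0x20000,
        .userspace_addr  = (unsigned long)smram_backing,
    };

    ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);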

Patch 9 then does VCPU-specific accesses in x86, and patch 10 loops
over the SMM slots as well on memslot iterations.

Patch 11 then introduces the SMRAM address space, which is very simple
after all the legwork.
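
For x86 the kvm_arch_vcpu_memslots_id hook ends up being trivial; this is
only a sketch of what patch 11 does, assuming the is_smm() helper from
the companion SMM series:

    static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
    {
        /* Address space 1 holds the SMRAM view, 0 the normal one. */
        return is_smm(vcpu) ? 1 : 0;
    }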

Thanks for the reviews,

Paolo

Paolo Bonzini (11):
  KVM: introduce kvm_alloc/free_memslots
  KVM: use kvm_memslots whenever possible
  KVM: const-ify uses of struct kvm_userspace_memory_region
  KVM: add memslots argument to kvm_arch_memslots_updated
  KVM: add "new" argument to kvm_arch_commit_memory_region
  KVM: x86: pass kvm_mmu_page to gfn_to_rmap
  KVM: add vcpu-specific functions to read/write/translate GFNs
  KVM: implement multiple address spaces
  KVM: x86: use vcpu-specific functions to read/write/translate GFNs
  KVM: x86: work on all available address spaces
  KVM: x86: add SMM to the MMU role, support SMRAM address space

 Documentation/virtual/kvm/api.txt        |  12 ++
 arch/arm/kvm/mmu.c                       |  10 +-
 arch/mips/include/asm/kvm_host.h         |   2 +-
 arch/mips/kvm/mips.c                     |   9 +-
 arch/powerpc/include/asm/kvm_book3s_64.h |   2 +-
 arch/powerpc/include/asm/kvm_host.h      |   2 +-
 arch/powerpc/include/asm/kvm_ppc.h       |  10 +-
 arch/powerpc/kvm/book3s.c                |   9 +-
 arch/powerpc/kvm/book3s_64_mmu_hv.c      |   2 +-
 arch/powerpc/kvm/book3s_hv.c             |  15 +-
 arch/powerpc/kvm/book3s_pr.c             |  11 +-
 arch/powerpc/kvm/booke.c                 |   7 +-
 arch/powerpc/kvm/powerpc.c               |   7 +-
 arch/s390/include/asm/kvm_host.h         |   2 +-
 arch/s390/kvm/kvm-s390.c                 |   9 +-
 arch/x86/include/asm/kvm_host.h          |  29 +--
 arch/x86/kvm/mmu.c                       | 143 +++++++-------
 arch/x86/kvm/paging_tmpl.h               |  14 +-
 arch/x86/kvm/svm.c                       |  16 +-
 arch/x86/kvm/vmx.c                       |  34 ++--
 arch/x86/kvm/x86.c                       |  99 +++++++---
 include/linux/kvm_host.h                 |  54 +++++-
 include/linux/kvm_types.h                |   1 +
 include/uapi/linux/kvm.h                 |   1 +
 virt/kvm/kvm_main.c                      | 314 ++++++++++++++++++++++---------
 25 files changed, 548 insertions(+), 266 deletions(-)

-- 
1.8.3.1



* [PATCH 01/11] KVM: introduce kvm_alloc/free_memslots
  2015-05-18 13:48 [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Paolo Bonzini
@ 2015-05-18 13:48 ` Paolo Bonzini
  2015-05-18 13:48 ` [PATCH 02/11] KVM: use kvm_memslots whenever possible Paolo Bonzini
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-18 13:48 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Xiao Guangrong, rkrcmar, bdas

kvm_alloc_memslots is extracted from code that was previously scattered
between kvm_init_memslots_id and kvm_create_vm.

kvm_free_memslot and kvm_free_memslots are the new names of
kvm_free_physmem_slot and kvm_free_physmem respectively; kvm_free_memslots
also takes an explicit pointer to the struct kvm_memslots being freed.

This will simplify the transition to multiple address spaces,
each represented by one pointer to struct kvm_memslots.
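
To give an idea of where this is going, struct kvm could later hold one
kvm_memslots pointer per address space and allocate them all in
kvm_create_vm.  This is only a sketch of a possible follow-up;
KVM_ADDRESS_SPACE_NUM is not defined by this patch:

    struct kvm_memslots *memslots[KVM_ADDRESS_SPACE_NUM];

    for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
        kvm->memslots[i] = kvm_alloc_memslots();
        if (!kvm->memslots[i])
            goto out_err_no_srcu;
    }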

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 virt/kvm/kvm_main.c | 104 +++++++++++++++++++++++++++-------------------------
 1 file changed, 55 insertions(+), 49 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 0fcc5d28f3a9..b2be0f7c3946 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -438,13 +438,60 @@ static int kvm_init_mmu_notifier(struct kvm *kvm)
 
 #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */
 
-static void kvm_init_memslots_id(struct kvm *kvm)
+static struct kvm_memslots *kvm_alloc_memslots(void)
 {
 	int i;
-	struct kvm_memslots *slots = kvm->memslots;
+	struct kvm_memslots *slots;
 
+	slots = kvm_kvzalloc(sizeof(struct kvm_memslots));
+	if (!slots)
+		return NULL;
+
+	/*
+	 * Init kvm generation close to the maximum to easily test the
+	 * code of handling generation number wrap-around.
+	 */
+	slots->generation = -150;
 	for (i = 0; i < KVM_MEM_SLOTS_NUM; i++)
 		slots->id_to_index[i] = slots->memslots[i].id = i;
+
+	return slots;
+}
+
+static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
+{
+	if (!memslot->dirty_bitmap)
+		return;
+
+	kvfree(memslot->dirty_bitmap);
+	memslot->dirty_bitmap = NULL;
+}
+
+/*
+ * Free any memory in @free but not in @dont.
+ */
+static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
+			      struct kvm_memory_slot *dont)
+{
+	if (!dont || free->dirty_bitmap != dont->dirty_bitmap)
+		kvm_destroy_dirty_bitmap(free);
+
+	kvm_arch_free_memslot(kvm, free, dont);
+
+	free->npages = 0;
+}
+
+static void kvm_free_memslots(struct kvm *kvm, struct kvm_memslots *slots)
+{
+	struct kvm_memory_slot *memslot;
+
+	if (!slots)
+		return;
+
+	kvm_for_each_memslot(memslot, slots)
+		kvm_free_memslot(kvm, memslot, NULL);
+
+	kvfree(slots);
 }
 
 static struct kvm *kvm_create_vm(unsigned long type)
@@ -470,17 +517,10 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
 
 	r = -ENOMEM;
-	kvm->memslots = kvm_kvzalloc(sizeof(struct kvm_memslots));
+	kvm->memslots = kvm_alloc_memslots();
 	if (!kvm->memslots)
 		goto out_err_no_srcu;
 
-	/*
-	 * Init kvm generation close to the maximum to easily test the
-	 * code of handling generation number wrap-around.
-	 */
-	kvm->memslots->generation = -150;
-
-	kvm_init_memslots_id(kvm);
 	if (init_srcu_struct(&kvm->srcu))
 		goto out_err_no_srcu;
 	if (init_srcu_struct(&kvm->irq_srcu))
@@ -521,7 +561,7 @@ out_err_no_srcu:
 out_err_no_disable:
 	for (i = 0; i < KVM_NR_BUSES; i++)
 		kfree(kvm->buses[i]);
-	kvfree(kvm->memslots);
+	kvm_free_memslots(kvm, kvm->memslots);
 	kvm_arch_free_vm(kvm);
 	return ERR_PTR(r);
 }
@@ -538,40 +578,6 @@ void *kvm_kvzalloc(unsigned long size)
 		return kzalloc(size, GFP_KERNEL);
 }
 
-static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
-{
-	if (!memslot->dirty_bitmap)
-		return;
-
-	kvfree(memslot->dirty_bitmap);
-	memslot->dirty_bitmap = NULL;
-}
-
-/*
- * Free any memory in @free but not in @dont.
- */
-static void kvm_free_physmem_slot(struct kvm *kvm, struct kvm_memory_slot *free,
-				  struct kvm_memory_slot *dont)
-{
-	if (!dont || free->dirty_bitmap != dont->dirty_bitmap)
-		kvm_destroy_dirty_bitmap(free);
-
-	kvm_arch_free_memslot(kvm, free, dont);
-
-	free->npages = 0;
-}
-
-static void kvm_free_physmem(struct kvm *kvm)
-{
-	struct kvm_memslots *slots = kvm->memslots;
-	struct kvm_memory_slot *memslot;
-
-	kvm_for_each_memslot(memslot, slots)
-		kvm_free_physmem_slot(kvm, memslot, NULL);
-
-	kvfree(kvm->memslots);
-}
-
 static void kvm_destroy_devices(struct kvm *kvm)
 {
 	struct list_head *node, *tmp;
@@ -605,7 +611,7 @@ static void kvm_destroy_vm(struct kvm *kvm)
 #endif
 	kvm_arch_destroy_vm(kvm);
 	kvm_destroy_devices(kvm);
-	kvm_free_physmem(kvm);
+	kvm_free_memslots(kvm, kvm->memslots);
 	cleanup_srcu_struct(&kvm->irq_srcu);
 	cleanup_srcu_struct(&kvm->srcu);
 	kvm_arch_free_vm(kvm);
@@ -896,7 +902,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if (r)
 		goto out_slots;
 
-	/* actual memory is freed via old in kvm_free_physmem_slot below */
+	/* actual memory is freed via old in kvm_free_memslot below */
 	if (change == KVM_MR_DELETE) {
 		new.dirty_bitmap = NULL;
 		memset(&new.arch, 0, sizeof(new.arch));
@@ -907,7 +913,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 
 	kvm_arch_commit_memory_region(kvm, mem, &old, change);
 
-	kvm_free_physmem_slot(kvm, &old, &new);
+	kvm_free_memslot(kvm, &old, &new);
 	kvfree(old_memslots);
 
 	/*
@@ -929,7 +935,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 out_slots:
 	kvfree(slots);
 out_free:
-	kvm_free_physmem_slot(kvm, &new, &old);
+	kvm_free_memslot(kvm, &new, &old);
 out:
 	return r;
 }
-- 
1.8.3.1




* [PATCH 02/11] KVM: use kvm_memslots whenever possible
  2015-05-18 13:48 [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Paolo Bonzini
  2015-05-18 13:48 ` [PATCH 01/11] KVM: introduce kvm_alloc/free_memslots Paolo Bonzini
@ 2015-05-18 13:48 ` Paolo Bonzini
  2015-05-18 13:48 ` [PATCH 03/11] KVM: const-ify uses of struct kvm_userspace_memory_region Paolo Bonzini
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-18 13:48 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Xiao Guangrong, rkrcmar, bdas

kvm_memslots provides lockdep checking.  Use it consistently instead of
explicit dereferencing of kvm->memslots.
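
For reference, the accessor that this patch standardizes on looks roughly
like this in include/linux/kvm_host.h (paraphrased; the exact lockdep
expression may differ):

    static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
    {
        return rcu_dereference_check(kvm->memslots,
                srcu_read_lock_held(&kvm->srcu)
                || lockdep_is_held(&kvm->slots_lock));
    }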

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/arm/kvm/mmu.c                  |  3 ++-
 arch/mips/kvm/mips.c                |  4 +++-
 arch/powerpc/kvm/book3s_64_mmu_hv.c |  2 +-
 arch/powerpc/kvm/book3s_hv.c        |  8 ++++++--
 arch/powerpc/kvm/book3s_pr.c        |  4 +++-
 arch/s390/kvm/kvm-s390.c            |  4 +++-
 arch/x86/kvm/x86.c                  |  4 +++-
 virt/kvm/kvm_main.c                 | 16 ++++++++++------
 8 files changed, 31 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 1d5accbd3dcf..6f0f8f3ac7df 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1155,7 +1155,8 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
  */
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 {
-	struct kvm_memory_slot *memslot = id_to_memslot(kvm->memslots, slot);
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	struct kvm_memory_slot *memslot = id_to_memslot(slots, slot);
 	phys_addr_t start = memslot->base_gfn << PAGE_SHIFT;
 	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
 
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index a8e660a44474..bc5ddd973b44 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -968,6 +968,7 @@ out:
 /* Get (and clear) the dirty memory log for a memory slot. */
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 {
+	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
 	unsigned long ga, ga_end;
 	int is_dirty = 0;
@@ -982,7 +983,8 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 
 	/* If nothing is dirty, don't bother messing with page tables. */
 	if (is_dirty) {
-		memslot = id_to_memslot(kvm->memslots, log->slot);
+		slots = kvm_memslots(kvm);
+		memslot = id_to_memslot(slots, log->slot);
 
 		ga = memslot->base_gfn << PAGE_SHIFT;
 		ga_end = ga + (memslot->npages << PAGE_SHIFT);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 1a4acf8bf4f4..dab68b7af3f2 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -650,7 +650,7 @@ static void kvmppc_rmap_reset(struct kvm *kvm)
 	int srcu_idx;
 
 	srcu_idx = srcu_read_lock(&kvm->srcu);
-	slots = kvm->memslots;
+	slots = kvm_memslots(kvm);
 	kvm_for_each_memslot(memslot, slots) {
 		/*
 		 * This assumes it is acceptable to lose reference and
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 48d3c5d2ecc9..7bc6137ece74 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2320,6 +2320,7 @@ static int kvm_vm_ioctl_get_smmu_info_hv(struct kvm *kvm,
 static int kvm_vm_ioctl_get_dirty_log_hv(struct kvm *kvm,
 					 struct kvm_dirty_log *log)
 {
+	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
 	int r;
 	unsigned long n;
@@ -2330,7 +2331,8 @@ static int kvm_vm_ioctl_get_dirty_log_hv(struct kvm *kvm,
 	if (log->slot >= KVM_USER_MEM_SLOTS)
 		goto out;
 
-	memslot = id_to_memslot(kvm->memslots, log->slot);
+	slots = kvm_memslots(kvm);
+	memslot = id_to_memslot(slots, log->slot);
 	r = -ENOENT;
 	if (!memslot->dirty_bitmap)
 		goto out;
@@ -2383,6 +2385,7 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
 				const struct kvm_memory_slot *old)
 {
 	unsigned long npages = mem->memory_size >> PAGE_SHIFT;
+	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
 
 	if (npages && old->npages) {
@@ -2392,7 +2395,8 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
 		 * since the rmap array starts out as all zeroes,
 		 * i.e. no pages are dirty.
 		 */
-		memslot = id_to_memslot(kvm->memslots, mem->slot);
+		slots = kvm_memslots(kvm);
+		memslot = id_to_memslot(slots, mem->slot);
 		kvmppc_hv_get_dirty_log(kvm, memslot, NULL);
 	}
 }
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index f57383941d03..c01dfc798c66 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1530,6 +1530,7 @@ out:
 static int kvm_vm_ioctl_get_dirty_log_pr(struct kvm *kvm,
 					 struct kvm_dirty_log *log)
 {
+	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
 	struct kvm_vcpu *vcpu;
 	ulong ga, ga_end;
@@ -1545,7 +1546,8 @@ static int kvm_vm_ioctl_get_dirty_log_pr(struct kvm *kvm,
 
 	/* If nothing is dirty, don't bother messing with page tables. */
 	if (is_dirty) {
-		memslot = id_to_memslot(kvm->memslots, log->slot);
+		slots = kvm_memslots(kvm);
+		memslot = id_to_memslot(slots, log->slot);
 
 		ga = memslot->base_gfn << PAGE_SHIFT;
 		ga_end = ga + (memslot->npages << PAGE_SHIFT);
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index d461f8a15c07..a05107e9b2bf 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -236,6 +236,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
 {
 	int r;
 	unsigned long n;
+	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
 	int is_dirty = 0;
 
@@ -245,7 +246,8 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
 	if (log->slot >= KVM_USER_MEM_SLOTS)
 		goto out;
 
-	memslot = id_to_memslot(kvm->memslots, log->slot);
+	slots = kvm_memslots(kvm);
+	memslot = id_to_memslot(slots, log->slot);
 	r = -ENOENT;
 	if (!memslot->dirty_bitmap)
 		goto out;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4ea3113c85e3..ec3e5d40e414 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8121,6 +8121,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				const struct kvm_memory_slot *old,
 				enum kvm_mr_change change)
 {
+	struct kvm_memslots *slots;
 	struct kvm_memory_slot *new;
 	int nr_mmu_pages = 0;
 
@@ -8142,7 +8143,8 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
 
 	/* It's OK to get 'new' slot here as it has already been installed */
-	new = id_to_memslot(kvm->memslots, mem->slot);
+	slots = kvm_memslots(kvm);
+	new = id_to_memslot(slots, mem->slot);
 
 	/*
 	 * Dirty logging tracks sptes in 4k granularity, meaning that large
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b2be0f7c3946..1d2254f22d02 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -732,7 +732,7 @@ static int check_memory_region_flags(struct kvm_userspace_memory_region *mem)
 static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 		struct kvm_memslots *slots)
 {
-	struct kvm_memslots *old_memslots = kvm->memslots;
+	struct kvm_memslots *old_memslots = kvm_memslots(kvm);
 
 	/*
 	 * Set the low bit in the generation, which disables SPTE caching
@@ -797,7 +797,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
 		goto out;
 
-	slot = id_to_memslot(kvm->memslots, mem->slot);
+	slot = id_to_memslot(kvm_memslots(kvm), mem->slot);
 	base_gfn = mem->guest_phys_addr >> PAGE_SHIFT;
 	npages = mem->memory_size >> PAGE_SHIFT;
 
@@ -840,7 +840,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
 		/* Check for overlaps */
 		r = -EEXIST;
-		kvm_for_each_memslot(slot, kvm->memslots) {
+		kvm_for_each_memslot(slot, kvm_memslots(kvm)) {
 			if ((slot->id >= KVM_USER_MEM_SLOTS) ||
 			    (slot->id == mem->slot))
 				continue;
@@ -871,7 +871,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	slots = kvm_kvzalloc(sizeof(struct kvm_memslots));
 	if (!slots)
 		goto out_free;
-	memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots));
+	memcpy(slots, kvm_memslots(kvm), sizeof(struct kvm_memslots));
 
 	if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) {
 		slot = id_to_memslot(slots, mem->slot);
@@ -964,6 +964,7 @@ static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
 int kvm_get_dirty_log(struct kvm *kvm,
 			struct kvm_dirty_log *log, int *is_dirty)
 {
+	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
 	int r, i;
 	unsigned long n;
@@ -973,7 +974,8 @@ int kvm_get_dirty_log(struct kvm *kvm,
 	if (log->slot >= KVM_USER_MEM_SLOTS)
 		goto out;
 
-	memslot = id_to_memslot(kvm->memslots, log->slot);
+	slots = kvm_memslots(kvm);
+	memslot = id_to_memslot(slots, log->slot);
 	r = -ENOENT;
 	if (!memslot->dirty_bitmap)
 		goto out;
@@ -1022,6 +1024,7 @@ EXPORT_SYMBOL_GPL(kvm_get_dirty_log);
 int kvm_get_dirty_log_protect(struct kvm *kvm,
 			struct kvm_dirty_log *log, bool *is_dirty)
 {
+	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
 	int r, i;
 	unsigned long n;
@@ -1032,7 +1035,8 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 	if (log->slot >= KVM_USER_MEM_SLOTS)
 		goto out;
 
-	memslot = id_to_memslot(kvm->memslots, log->slot);
+	slots = kvm_memslots(kvm);
+	memslot = id_to_memslot(slots, log->slot);
 
 	dirty_bitmap = memslot->dirty_bitmap;
 	r = -ENOENT;
-- 
1.8.3.1




* [PATCH 03/11] KVM: const-ify uses of struct kvm_userspace_memory_region
  2015-05-18 13:48 [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Paolo Bonzini
  2015-05-18 13:48 ` [PATCH 01/11] KVM: introduce kvm_alloc/free_memslots Paolo Bonzini
  2015-05-18 13:48 ` [PATCH 02/11] KVM: use kvm_memslots whenever possible Paolo Bonzini
@ 2015-05-18 13:48 ` Paolo Bonzini
  2015-05-18 13:48 ` [PATCH 04/11] KVM: add memslots argument to kvm_arch_memslots_updated Paolo Bonzini
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-18 13:48 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Xiao Guangrong, rkrcmar, bdas

Architecture-specific helpers are not supposed to muck with
struct kvm_userspace_memory_region contents.  Add const to
enforce this.
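
As an illustration (not part of the patch), an architecture helper that
tried to modify the region now fails to build; this mirrors the
KVM_MEM_LOG_DIRTY_PAGES tweak that the patch moves out of
__kvm_set_memory_region and into the ioctl handler:

    int kvm_arch_prepare_memory_region(struct kvm *kvm,
                                       struct kvm_memory_slot *memslot,
                                       const struct kvm_userspace_memory_region *mem,
                                       enum kvm_mr_change change)
    {
        mem->flags &= ~KVM_MEM_LOG_DIRTY_PAGES; /* error: read-only object */
        return 0;
    }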

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/arm/kvm/mmu.c                 |  4 ++--
 arch/mips/kvm/mips.c               |  4 ++--
 arch/powerpc/include/asm/kvm_ppc.h |  4 ++--
 arch/powerpc/kvm/book3s.c          |  4 ++--
 arch/powerpc/kvm/book3s_hv.c       |  4 ++--
 arch/powerpc/kvm/book3s_pr.c       |  4 ++--
 arch/powerpc/kvm/booke.c           |  4 ++--
 arch/powerpc/kvm/powerpc.c         |  4 ++--
 arch/s390/kvm/kvm-s390.c           |  4 ++--
 arch/x86/kvm/x86.c                 |  4 ++--
 include/linux/kvm_host.h           |  8 ++++----
 virt/kvm/kvm_main.c                | 13 +++++++------
 12 files changed, 31 insertions(+), 30 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 6f0f8f3ac7df..b5c023a37aec 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1719,7 +1719,7 @@ out:
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-				   struct kvm_userspace_memory_region *mem,
+				   const struct kvm_userspace_memory_region *mem,
 				   const struct kvm_memory_slot *old,
 				   enum kvm_mr_change change)
 {
@@ -1734,7 +1734,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot,
-				   struct kvm_userspace_memory_region *mem,
+				   const struct kvm_userspace_memory_region *mem,
 				   enum kvm_mr_change change)
 {
 	hva_t hva = mem->userspace_addr;
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index bc5ddd973b44..5963e2e8a6d7 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -198,14 +198,14 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot,
-				   struct kvm_userspace_memory_region *mem,
+				   const struct kvm_userspace_memory_region *mem,
 				   enum kvm_mr_change change)
 {
 	return 0;
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-				   struct kvm_userspace_memory_region *mem,
+				   const struct kvm_userspace_memory_region *mem,
 				   const struct kvm_memory_slot *old,
 				   enum kvm_mr_change change)
 {
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index b8475daad884..ebc6bedfc08c 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -182,9 +182,9 @@ extern int kvmppc_core_create_memslot(struct kvm *kvm,
 				      unsigned long npages);
 extern int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 				struct kvm_memory_slot *memslot,
-				struct kvm_userspace_memory_region *mem);
+				const struct kvm_userspace_memory_region *mem);
 extern void kvmppc_core_commit_memory_region(struct kvm *kvm,
-				struct kvm_userspace_memory_region *mem,
+				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old);
 extern int kvm_vm_ioctl_get_smmu_info(struct kvm *kvm,
 				      struct kvm_ppc_smmu_info *info);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 453a8a47a467..60aa0726dccc 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -757,13 +757,13 @@ void kvmppc_core_flush_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
 
 int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 				struct kvm_memory_slot *memslot,
-				struct kvm_userspace_memory_region *mem)
+				const struct kvm_userspace_memory_region *mem)
 {
 	return kvm->arch.kvm_ops->prepare_memory_region(kvm, memslot, mem);
 }
 
 void kvmppc_core_commit_memory_region(struct kvm *kvm,
-				struct kvm_userspace_memory_region *mem,
+				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old)
 {
 	kvm->arch.kvm_ops->commit_memory_region(kvm, mem, old);
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 7bc6137ece74..e20d438edb51 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2375,13 +2375,13 @@ static int kvmppc_core_create_memslot_hv(struct kvm_memory_slot *slot,
 
 static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
 					struct kvm_memory_slot *memslot,
-					struct kvm_userspace_memory_region *mem)
+					const struct kvm_userspace_memory_region *mem)
 {
 	return 0;
 }
 
 static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
-				struct kvm_userspace_memory_region *mem,
+				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old)
 {
 	unsigned long npages = mem->memory_size >> PAGE_SHIFT;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index c01dfc798c66..0873e766df1b 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1573,13 +1573,13 @@ static void kvmppc_core_flush_memslot_pr(struct kvm *kvm,
 
 static int kvmppc_core_prepare_memory_region_pr(struct kvm *kvm,
 					struct kvm_memory_slot *memslot,
-					struct kvm_userspace_memory_region *mem)
+					const struct kvm_userspace_memory_region *mem)
 {
 	return 0;
 }
 
 static void kvmppc_core_commit_memory_region_pr(struct kvm *kvm,
-				struct kvm_userspace_memory_region *mem,
+				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old)
 {
 	return;
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 3872ab31c80a..518e3a8b351f 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1784,13 +1784,13 @@ int kvmppc_core_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 
 int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 				      struct kvm_memory_slot *memslot,
-				      struct kvm_userspace_memory_region *mem)
+				      const struct kvm_userspace_memory_region *mem)
 {
 	return 0;
 }
 
 void kvmppc_core_commit_memory_region(struct kvm *kvm,
-				struct kvm_userspace_memory_region *mem,
+				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old)
 {
 }
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 8cd1f80fdc70..5985bb2a332b 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -595,14 +595,14 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot,
-				   struct kvm_userspace_memory_region *mem,
+				   const struct kvm_userspace_memory_region *mem,
 				   enum kvm_mr_change change)
 {
 	return kvmppc_core_prepare_memory_region(kvm, memslot, mem);
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-				   struct kvm_userspace_memory_region *mem,
+				   const struct kvm_userspace_memory_region *mem,
 				   const struct kvm_memory_slot *old,
 				   enum kvm_mr_change change)
 {
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index a05107e9b2bf..994f9c37f25f 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2582,7 +2582,7 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 /* Section: memory related */
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot,
-				   struct kvm_userspace_memory_region *mem,
+				   const struct kvm_userspace_memory_region *mem,
 				   enum kvm_mr_change change)
 {
 	/* A few sanity checks. We can have memory slots which have to be
@@ -2600,7 +2600,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-				struct kvm_userspace_memory_region *mem,
+				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
 				enum kvm_mr_change change)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ec3e5d40e414..278041b55010 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8039,7 +8039,7 @@ void kvm_arch_memslots_updated(struct kvm *kvm)
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				struct kvm_memory_slot *memslot,
-				struct kvm_userspace_memory_region *mem,
+				const struct kvm_userspace_memory_region *mem,
 				enum kvm_mr_change change)
 {
 	/*
@@ -8117,7 +8117,7 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 }
 
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-				struct kvm_userspace_memory_region *mem,
+				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
 				enum kvm_mr_change change)
 {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 19d09a08885b..fcdb072f900b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -502,9 +502,9 @@ enum kvm_mr_change {
 };
 
 int kvm_set_memory_region(struct kvm *kvm,
-			  struct kvm_userspace_memory_region *mem);
+			  const struct kvm_userspace_memory_region *mem);
 int __kvm_set_memory_region(struct kvm *kvm,
-			    struct kvm_userspace_memory_region *mem);
+			    const struct kvm_userspace_memory_region *mem);
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 			   struct kvm_memory_slot *dont);
 int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
@@ -512,10 +512,10 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 void kvm_arch_memslots_updated(struct kvm *kvm);
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				struct kvm_memory_slot *memslot,
-				struct kvm_userspace_memory_region *mem,
+				const struct kvm_userspace_memory_region *mem,
 				enum kvm_mr_change change);
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-				struct kvm_userspace_memory_region *mem,
+				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
 				enum kvm_mr_change change);
 bool kvm_largepages_enabled(void);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1d2254f22d02..c798f55edec3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -715,7 +715,7 @@ static void update_memslots(struct kvm_memslots *slots,
 	slots->id_to_index[mslots[i].id] = i;
 }
 
-static int check_memory_region_flags(struct kvm_userspace_memory_region *mem)
+static int check_memory_region_flags(const struct kvm_userspace_memory_region *mem)
 {
 	u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
 
@@ -765,7 +765,7 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
  * Must be called holding kvm->slots_lock for write.
  */
 int __kvm_set_memory_region(struct kvm *kvm,
-			    struct kvm_userspace_memory_region *mem)
+			    const struct kvm_userspace_memory_region *mem)
 {
 	int r;
 	gfn_t base_gfn;
@@ -804,9 +804,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if (npages > KVM_MEM_MAX_NR_PAGES)
 		goto out;
 
-	if (!npages)
-		mem->flags &= ~KVM_MEM_LOG_DIRTY_PAGES;
-
 	new = old = *slot;
 
 	new.id = mem->slot;
@@ -942,7 +939,7 @@ out:
 EXPORT_SYMBOL_GPL(__kvm_set_memory_region);
 
 int kvm_set_memory_region(struct kvm *kvm,
-			  struct kvm_userspace_memory_region *mem)
+			  const struct kvm_userspace_memory_region *mem)
 {
 	int r;
 
@@ -958,6 +955,10 @@ static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
 {
 	if (mem->slot >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
+
+	if (!mem->memory_size)
+		mem->flags &= ~KVM_MEM_LOG_DIRTY_PAGES;
+
 	return kvm_set_memory_region(kvm, mem);
 }
 
-- 
1.8.3.1




* [PATCH 04/11] KVM: add memslots argument to kvm_arch_memslots_updated
  2015-05-18 13:48 [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Paolo Bonzini
                   ` (2 preceding siblings ...)
  2015-05-18 13:48 ` [PATCH 03/11] KVM: const-ify uses of struct kvm_userspace_memory_region Paolo Bonzini
@ 2015-05-18 13:48 ` Paolo Bonzini
  2015-05-18 13:48 ` [PATCH 05/11] KVM: add "new" argument to kvm_arch_commit_memory_region Paolo Bonzini
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-18 13:48 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Xiao Guangrong, rkrcmar, bdas

Prepare for the case of multiple address spaces.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/arm/kvm/mmu.c                  | 2 +-
 arch/mips/include/asm/kvm_host.h    | 2 +-
 arch/powerpc/include/asm/kvm_host.h | 2 +-
 arch/s390/include/asm/kvm_host.h    | 2 +-
 arch/x86/kvm/x86.c                  | 2 +-
 include/linux/kvm_host.h            | 2 +-
 include/linux/kvm_types.h           | 1 +
 virt/kvm/kvm_main.c                 | 2 +-
 8 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index b5c023a37aec..e9ac084d21ea 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1839,7 +1839,7 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 	return 0;
 }
 
-void kvm_arch_memslots_updated(struct kvm *kvm)
+void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots)
 {
 }
 
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 4c25823563fe..e8c8d9d0c45f 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -839,7 +839,7 @@ static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_free_memslot(struct kvm *kvm,
 		struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {}
-static inline void kvm_arch_memslots_updated(struct kvm *kvm) {}
+static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
 static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
 static inline void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 		struct kvm_memory_slot *slot) {}
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index a193a13cf08b..d91f65b28e32 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -698,7 +698,7 @@ struct kvm_vcpu_arch {
 static inline void kvm_arch_hardware_disable(void) {}
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
-static inline void kvm_arch_memslots_updated(struct kvm *kvm) {}
+static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
 static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_exit(void) {}
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 444c412307cb..3024acbe1f9d 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -636,7 +636,7 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_free_memslot(struct kvm *kvm,
 		struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {}
-static inline void kvm_arch_memslots_updated(struct kvm *kvm) {}
+static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
 static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
 static inline void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 		struct kvm_memory_slot *slot) {}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 278041b55010..5903f1cbc868 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8028,7 +8028,7 @@ out_free:
 	return -ENOMEM;
 }
 
-void kvm_arch_memslots_updated(struct kvm *kvm)
+void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots)
 {
 	/*
 	 * memslots->generation has been incremented.
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index fcdb072f900b..8dd27576e8b8 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -509,7 +509,7 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 			   struct kvm_memory_slot *dont);
 int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 			    unsigned long npages);
-void kvm_arch_memslots_updated(struct kvm *kvm);
+void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots);
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				struct kvm_memory_slot *memslot,
 				const struct kvm_userspace_memory_region *mem,
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 931da7e917cf..1b47a185c2f0 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -28,6 +28,7 @@ struct kvm_run;
 struct kvm_userspace_memory_region;
 struct kvm_vcpu;
 struct kvm_vcpu_init;
+struct kvm_memslots;
 
 enum kvm_mr_change;
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c798f55edec3..ca6e7fcaceb5 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -751,7 +751,7 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 	 */
 	slots->generation++;
 
-	kvm_arch_memslots_updated(kvm);
+	kvm_arch_memslots_updated(kvm, slots);
 
 	return old_memslots;
 }
-- 
1.8.3.1




* [PATCH 05/11] KVM: add "new" argument to kvm_arch_commit_memory_region
  2015-05-18 13:48 [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Paolo Bonzini
                   ` (3 preceding siblings ...)
  2015-05-18 13:48 ` [PATCH 04/11] KVM: add memslots argument to kvm_arch_memslots_updated Paolo Bonzini
@ 2015-05-18 13:48 ` Paolo Bonzini
  2015-05-18 13:48 ` [PATCH 06/11] KVM: x86: pass kvm_mmu_page to gfn_to_rmap Paolo Bonzini
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-18 13:48 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Xiao Guangrong, rkrcmar, bdas

This lets the function access the new memory slot without going through
kvm_memslots and id_to_memslot.  It will simplify the code when more
than one address space will be supported.

Unfortunately, the "const"ness of the new argument must be cast
away in two places.  Fixing KVM to accept const struct kvm_memory_slot
pointers would require modifications in pretty much all architectures,
and is left for later.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/arm/kvm/mmu.c                 |  1 +
 arch/mips/kvm/mips.c               |  1 +
 arch/powerpc/include/asm/kvm_ppc.h |  6 ++++--
 arch/powerpc/kvm/book3s.c          |  5 +++--
 arch/powerpc/kvm/book3s_hv.c       |  3 ++-
 arch/powerpc/kvm/book3s_pr.c       |  3 ++-
 arch/powerpc/kvm/booke.c           |  3 ++-
 arch/powerpc/kvm/powerpc.c         |  3 ++-
 arch/s390/kvm/kvm-s390.c           |  1 +
 arch/x86/include/asm/kvm_host.h    |  2 +-
 arch/x86/kvm/mmu.c                 |  6 ++++--
 arch/x86/kvm/x86.c                 | 13 +++++--------
 include/linux/kvm_host.h           |  1 +
 virt/kvm/kvm_main.c                |  2 +-
 14 files changed, 30 insertions(+), 20 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index e9ac084d21ea..7f473e6d3bf5 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1721,6 +1721,7 @@ out:
 void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_userspace_memory_region *mem,
 				   const struct kvm_memory_slot *old,
+				   const struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
 	/*
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 5963e2e8a6d7..cd4c129ce743 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -207,6 +207,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_userspace_memory_region *mem,
 				   const struct kvm_memory_slot *old,
+				   const struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
 	unsigned long npages = 0;
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index ebc6bedfc08c..bee0d2344594 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -185,7 +185,8 @@ extern int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem);
 extern void kvmppc_core_commit_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
-				const struct kvm_memory_slot *old);
+				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new);
 extern int kvm_vm_ioctl_get_smmu_info(struct kvm *kvm,
 				      struct kvm_ppc_smmu_info *info);
 extern void kvmppc_core_flush_memslot(struct kvm *kvm,
@@ -246,7 +247,8 @@ struct kvmppc_ops {
 				     struct kvm_userspace_memory_region *mem);
 	void (*commit_memory_region)(struct kvm *kvm,
 				     struct kvm_userspace_memory_region *mem,
-				     const struct kvm_memory_slot *old);
+				     const struct kvm_memory_slot *old,
+				     const struct kvm_memory_slot *new);
 	int (*unmap_hva)(struct kvm *kvm, unsigned long hva);
 	int (*unmap_hva_range)(struct kvm *kvm, unsigned long start,
 			   unsigned long end);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 60aa0726dccc..b6fe22a2e0b9 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -764,9 +764,10 @@ int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 
 void kvmppc_core_commit_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
-				const struct kvm_memory_slot *old)
+				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new)
 {
-	kvm->arch.kvm_ops->commit_memory_region(kvm, mem, old);
+	kvm->arch.kvm_ops->commit_memory_region(kvm, mem, old, new);
 }
 
 int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index e20d438edb51..c3679b7e4343 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2382,7 +2382,8 @@ static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
 
 static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
-				const struct kvm_memory_slot *old)
+				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new)
 {
 	unsigned long npages = mem->memory_size >> PAGE_SHIFT;
 	struct kvm_memslots *slots;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 0873e766df1b..64891b081ad5 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1580,7 +1580,8 @@ static int kvmppc_core_prepare_memory_region_pr(struct kvm *kvm,
 
 static void kvmppc_core_commit_memory_region_pr(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
-				const struct kvm_memory_slot *old)
+				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new)
 {
 	return;
 }
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 518e3a8b351f..cc5842657161 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1791,7 +1791,8 @@ int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 
 void kvmppc_core_commit_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
-				const struct kvm_memory_slot *old)
+				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new)
 {
 }
 
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 5985bb2a332b..e5dde32fe71f 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -604,9 +604,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_userspace_memory_region *mem,
 				   const struct kvm_memory_slot *old,
+				   const struct kvm_memory_slot *new,
 				   enum kvm_mr_change change)
 {
-	kvmppc_core_commit_memory_region(kvm, mem, old);
+	kvmppc_core_commit_memory_region(kvm, mem, old, new);
 }
 
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 994f9c37f25f..8ad4b9a5667f 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2602,6 +2602,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 void kvm_arch_commit_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
 {
 	int rc;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b7e287d2a171..b872bad8855e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -874,7 +874,7 @@ void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 				      struct kvm_memory_slot *memslot);
 void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
-					struct kvm_memory_slot *memslot);
+				   const struct kvm_memory_slot *memslot);
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot);
 void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 49c34e632b91..1bf2ae9ca521 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4621,10 +4621,12 @@ restart:
 }
 
 void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
-			struct kvm_memory_slot *memslot)
+				   const struct kvm_memory_slot *memslot)
 {
+	/* FIXME: const-ify all uses of struct kvm_memory_slot.  */
 	spin_lock(&kvm->mmu_lock);
-	slot_handle_leaf(kvm, memslot, kvm_mmu_zap_collapsible_spte, true);
+	slot_handle_leaf(kvm, (struct kvm_memory_slot *)memslot,
+			 kvm_mmu_zap_collapsible_spte, true);
 	spin_unlock(&kvm->mmu_lock);
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5903f1cbc868..753cf20c0ec5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8119,13 +8119,12 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 void kvm_arch_commit_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
 {
-	struct kvm_memslots *slots;
-	struct kvm_memory_slot *new;
 	int nr_mmu_pages = 0;
 
-	if ((mem->slot >= KVM_USER_MEM_SLOTS) && (change == KVM_MR_DELETE)) {
+	if (change == KVM_MR_DELETE && old->id >= KVM_USER_MEM_SLOTS) {
 		int ret;
 
 		ret = vm_munmap(old->userspace_addr,
@@ -8142,10 +8141,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	if (nr_mmu_pages)
 		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
 
-	/* It's OK to get 'new' slot here as it has already been installed */
-	slots = kvm_memslots(kvm);
-	new = id_to_memslot(slots, mem->slot);
-
 	/*
 	 * Dirty logging tracks sptes in 4k granularity, meaning that large
 	 * sptes have to be split.  If live migration is successful, the guest
@@ -8170,9 +8165,11 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	 * been zapped so no dirty logging staff is needed for old slot. For
 	 * KVM_MR_FLAGS_ONLY, the old slot is essentially the same one as the
 	 * new and it's also covered when dealing with the new slot.
+	 *
+	 * FIXME: const-ify all uses of struct kvm_memory_slot.
 	 */
 	if (change != KVM_MR_DELETE)
-		kvm_mmu_slot_apply_flags(kvm, new);
+		kvm_mmu_slot_apply_flags(kvm, (struct kvm_memory_slot *) new);
 }
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 8dd27576e8b8..f35fd89c7919 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -517,6 +517,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 void kvm_arch_commit_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change);
 bool kvm_largepages_enabled(void);
 void kvm_disable_largepages(void);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index ca6e7fcaceb5..2818fca43732 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -908,7 +908,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	update_memslots(slots, &new);
 	old_memslots = install_new_memslots(kvm, slots);
 
-	kvm_arch_commit_memory_region(kvm, mem, &old, change);
+	kvm_arch_commit_memory_region(kvm, mem, &old, &new, change);
 
 	kvm_free_memslot(kvm, &old, &new);
 	kvfree(old_memslots);
-- 
1.8.3.1




* [PATCH 06/11] KVM: x86: pass kvm_mmu_page to gfn_to_rmap
  2015-05-18 13:48 [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Paolo Bonzini
                   ` (4 preceding siblings ...)
  2015-05-18 13:48 ` [PATCH 05/11] KVM: add "new" argument to kvm_arch_commit_memory_region Paolo Bonzini
@ 2015-05-18 13:48 ` Paolo Bonzini
  2015-05-20  8:30   ` Xiao Guangrong
  2015-05-18 13:48 ` [PATCH 07/11] KVM: add vcpu-specific functions to read/write/translate GFNs Paolo Bonzini
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-18 13:48 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Xiao Guangrong, rkrcmar, bdas

The kvm_mmu_page is always available at the call sites, and we can use
its role to look up the right memslots array.
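
Once SMM becomes part of the MMU role (patch 11), gfn_to_rmap can use the
role to pick the memslots array.  A sketch of where this is heading (the
__kvm_memslots helper and the role.smm bit are assumptions about the
later patches, not code from this one):

    static unsigned long *gfn_to_rmap(struct kvm *kvm, gfn_t gfn,
                                      struct kvm_mmu_page *sp)
    {
        struct kvm_memslots *slots = __kvm_memslots(kvm, sp->role.smm);
        struct kvm_memory_slot *slot = __gfn_to_memslot(slots, gfn);

        return __gfn_to_rmap(gfn, sp->role.level, slot);
    }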

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 1bf2ae9ca521..817ae3ece05c 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1039,12 +1039,12 @@ static unsigned long *__gfn_to_rmap(gfn_t gfn, int level,
 /*
  * Take gfn and return the reverse mapping to it.
  */
-static unsigned long *gfn_to_rmap(struct kvm *kvm, gfn_t gfn, int level)
+static unsigned long *gfn_to_rmap(struct kvm *kvm, gfn_t gfn, struct kvm_mmu_page *sp)
 {
 	struct kvm_memory_slot *slot;
 
 	slot = gfn_to_memslot(kvm, gfn);
-	return __gfn_to_rmap(gfn, level, slot);
+	return __gfn_to_rmap(gfn, sp->role.level, slot);
 }
 
 static bool rmap_can_add(struct kvm_vcpu *vcpu)
@@ -1062,7 +1062,7 @@ static int rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 
 	sp = page_header(__pa(spte));
 	kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn);
-	rmapp = gfn_to_rmap(vcpu->kvm, gfn, sp->role.level);
+	rmapp = gfn_to_rmap(vcpu->kvm, gfn, sp);
 	return pte_list_add(vcpu, spte, rmapp);
 }
 
@@ -1074,7 +1074,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 
 	sp = page_header(__pa(spte));
 	gfn = kvm_mmu_page_get_gfn(sp, spte - sp->spt);
-	rmapp = gfn_to_rmap(kvm, gfn, sp->role.level);
+	rmapp = gfn_to_rmap(kvm, gfn, sp);
 	pte_list_remove(spte, rmapp);
 }
 
@@ -1608,7 +1608,7 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 
 	sp = page_header(__pa(spte));
 
-	rmapp = gfn_to_rmap(vcpu->kvm, gfn, sp->role.level);
+	rmapp = gfn_to_rmap(vcpu->kvm, gfn, sp);
 
 	kvm_unmap_rmapp(vcpu->kvm, rmapp, NULL, gfn, sp->role.level, 0);
 	kvm_flush_remote_tlbs(vcpu->kvm);
-- 
1.8.3.1




* [PATCH 07/11] KVM: add vcpu-specific functions to read/write/translate GFNs
  2015-05-18 13:48 [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Paolo Bonzini
                   ` (5 preceding siblings ...)
  2015-05-18 13:48 ` [PATCH 06/11] KVM: x86: pass kvm_mmu_page to gfn_to_rmap Paolo Bonzini
@ 2015-05-18 13:48 ` Paolo Bonzini
  2015-05-18 13:48 ` [PATCH 08/11] KVM: implement multiple address spaces Paolo Bonzini
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-18 13:48 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Xiao Guangrong, rkrcmar, bdas

We need to hide SMRAM from guests not running in SMM.  Therefore, all uses
of kvm_read_guest* and kvm_write_guest* must be changed to use different
address spaces depending on whether the VCPU is in system management mode.
We need to introduce a new family of functions for this.

For now, the functions are the same as the existing per-VM ones, except
for the type of the first argument, but later they will be changed to
use one of many "struct kvm_memslots" stored in struct kvm.

Whenever possible, slot-based functions are introduced, with two wrappers
for generic and vcpu-based actions.  kvm_read_guest and kvm_write_guest
are copied into kvm_vcpu_read_guest and kvm_vcpu_write_guest.
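
Callers that run in VCPU context can then be converted mechanically, for
example (this is the kind of change patch 9 makes in x86 code):

    /* before */
    r = kvm_read_guest(vcpu->kvm, gpa, data, len);

    /* after: uses the memslots of the VCPU's current address space */
    r = kvm_vcpu_read_guest(vcpu, gpa, data, len);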

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/linux/kvm_host.h |  19 +++++++
 virt/kvm/kvm_main.c      | 129 ++++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 135 insertions(+), 13 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f35fd89c7919..427a1034a70e 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -471,6 +471,11 @@ static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
 			|| lockdep_is_held(&kvm->slots_lock));
 }
 
+static inline struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu)
+{
+	return kvm_memslots(vcpu->kvm);
+}
+
 static inline struct kvm_memory_slot *
 id_to_memslot(struct kvm_memslots *slots, int id)
 {
@@ -577,6 +582,20 @@ unsigned long kvm_host_page_size(struct kvm *kvm, gfn_t gfn);
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
 void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot, gfn_t gfn);
 
+struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
+unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn);
+int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data, int offset,
+			     int len);
+int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa, void *data,
+			       unsigned long len);
+int kvm_vcpu_read_guest(struct kvm_vcpu *vcpu, gpa_t gpa, void *data,
+			unsigned long len);
+int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, const void *data,
+			      int offset, int len);
+int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
+			 unsigned long len);
+
 void kvm_vcpu_block(struct kvm_vcpu *vcpu);
 void kvm_vcpu_kick(struct kvm_vcpu *vcpu);
 int kvm_vcpu_yield_to(struct kvm_vcpu *target);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2818fca43732..2cbda443880b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1100,6 +1100,11 @@ struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(gfn_to_memslot);
 
+struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn)
+{
+	return __gfn_to_memslot(kvm_vcpu_memslots(vcpu), gfn);
+}
+
 int kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
 {
 	struct kvm_memory_slot *memslot = gfn_to_memslot(kvm, gfn);
@@ -1175,6 +1180,12 @@ unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(gfn_to_hva);
 
+unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn)
+{
+	return gfn_to_hva_many(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, NULL);
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_hva);
+
 /*
  * If writable is set to false, the hva returned by this function is only
  * allowed to be read.
@@ -1529,13 +1540,13 @@ static int next_segment(unsigned long len, int offset)
 		return len;
 }
 
-int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
-			int len)
+static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
+				 void *data, int offset, int len)
 {
 	int r;
 	unsigned long addr;
 
-	addr = gfn_to_hva_prot(kvm, gfn, NULL);
+	addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
 	if (kvm_is_error_hva(addr))
 		return -EFAULT;
 	r = __copy_from_user(data, (void __user *)addr + offset, len);
@@ -1543,8 +1554,25 @@ int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 		return -EFAULT;
 	return 0;
 }
+
+int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
+			int len)
+{
+	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
+
+	return __kvm_read_guest_page(slot, gfn, data, offset, len);
+}
 EXPORT_SYMBOL_GPL(kvm_read_guest_page);
 
+int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data,
+			     int offset, int len)
+{
+	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+
+	return __kvm_read_guest_page(slot, gfn, data, offset, len);
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_page);
+
 int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len)
 {
 	gfn_t gfn = gpa >> PAGE_SHIFT;
@@ -1565,15 +1593,33 @@ int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len)
 }
 EXPORT_SYMBOL_GPL(kvm_read_guest);
 
-int kvm_read_guest_atomic(struct kvm *kvm, gpa_t gpa, void *data,
-			  unsigned long len)
+int kvm_vcpu_read_guest(struct kvm_vcpu *vcpu, gpa_t gpa, void *data, unsigned long len)
 {
-	int r;
-	unsigned long addr;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
+	int seg;
 	int offset = offset_in_page(gpa);
+	int ret;
 
-	addr = gfn_to_hva_prot(kvm, gfn, NULL);
+	while ((seg = next_segment(len, offset)) != 0) {
+		ret = kvm_vcpu_read_guest_page(vcpu, gfn, data, offset, seg);
+		if (ret < 0)
+			return ret;
+		offset = 0;
+		len -= seg;
+		data += seg;
+		++gfn;
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest);
+
+static int __kvm_read_guest_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
+			           void *data, int offset, unsigned long len)
+{
+	int r;
+	unsigned long addr;
+
+	addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
 	if (kvm_is_error_hva(addr))
 		return -EFAULT;
 	pagefault_disable();
@@ -1583,16 +1629,35 @@ int kvm_read_guest_atomic(struct kvm *kvm, gpa_t gpa, void *data,
 		return -EFAULT;
 	return 0;
 }
-EXPORT_SYMBOL(kvm_read_guest_atomic);
 
-int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn, const void *data,
-			 int offset, int len)
+int kvm_read_guest_atomic(struct kvm *kvm, gpa_t gpa, void *data,
+			  unsigned long len)
+{
+	gfn_t gfn = gpa >> PAGE_SHIFT;
+	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
+	int offset = offset_in_page(gpa);
+
+	return __kvm_read_guest_atomic(slot, gfn, data, offset, len);
+}
+EXPORT_SYMBOL_GPL(kvm_read_guest_atomic);
+
+int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa,
+			       void *data, unsigned long len)
+{
+	gfn_t gfn = gpa >> PAGE_SHIFT;
+	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+	int offset = offset_in_page(gpa);
+
+	return __kvm_read_guest_atomic(slot, gfn, data, offset, len);
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic);
+
+static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn,
+			          const void *data, int offset, int len)
 {
 	int r;
-	struct kvm_memory_slot *memslot;
 	unsigned long addr;
 
-	memslot = gfn_to_memslot(kvm, gfn);
 	addr = gfn_to_hva_memslot(memslot, gfn);
 	if (kvm_is_error_hva(addr))
 		return -EFAULT;
@@ -1602,8 +1667,25 @@ int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn, const void *data,
 	mark_page_dirty_in_slot(memslot, gfn);
 	return 0;
 }
+
+int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn,
+			 const void *data, int offset, int len)
+{
+	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
+
+	return __kvm_write_guest_page(slot, gfn, data, offset, len);
+}
 EXPORT_SYMBOL_GPL(kvm_write_guest_page);
 
+int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+			      const void *data, int offset, int len)
+{
+	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+
+	return __kvm_write_guest_page(slot, gfn, data, offset, len);
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest_page);
+
 int kvm_write_guest(struct kvm *kvm, gpa_t gpa, const void *data,
 		    unsigned long len)
 {
@@ -1625,6 +1707,27 @@ int kvm_write_guest(struct kvm *kvm, gpa_t gpa, const void *data,
 }
 EXPORT_SYMBOL_GPL(kvm_write_guest);
 
+int kvm_vcpu_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
+		         unsigned long len)
+{
+	gfn_t gfn = gpa >> PAGE_SHIFT;
+	int seg;
+	int offset = offset_in_page(gpa);
+	int ret;
+
+	while ((seg = next_segment(len, offset)) != 0) {
+		ret = kvm_vcpu_write_guest_page(vcpu, gfn, data, offset, seg);
+		if (ret < 0)
+			return ret;
+		offset = 0;
+		len -= seg;
+		data += seg;
+		++gfn;
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest);
+
 int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 			      gpa_t gpa, unsigned long len)
 {
-- 
1.8.3.1



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 08/11] KVM: implement multiple address spaces
  2015-05-18 13:48 [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Paolo Bonzini
                   ` (6 preceding siblings ...)
  2015-05-18 13:48 ` [PATCH 07/11] KVM: add vcpu-specific functions to read/write/translate GFNs Paolo Bonzini
@ 2015-05-18 13:48 ` Paolo Bonzini
  2015-05-19 13:32   ` Radim Krčmář
  2015-05-18 13:48 ` [PATCH 09/11] KVM: x86: use vcpu-specific functions to read/write/translate GFNs Paolo Bonzini
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-18 13:48 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Xiao Guangrong, rkrcmar, bdas

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Documentation/virtual/kvm/api.txt        | 12 ++++++
 arch/powerpc/include/asm/kvm_book3s_64.h |  2 +-
 include/linux/kvm_host.h                 | 26 ++++++++++--
 include/uapi/linux/kvm.h                 |  1 +
 virt/kvm/kvm_main.c                      | 70 ++++++++++++++++++++------------
 5 files changed, 79 insertions(+), 32 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 51523b70b6b2..91ab5f2354aa 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -254,6 +254,11 @@ since the last call to this ioctl.  Bit 0 is the first page in the
 memory slot.  Ensure the entire structure is cleared to avoid padding
 issues.
 
+If KVM_CAP_MULTI_ADDRESS_SPACE is available, bits 16-31 of the slot field
+specify the address space for which you want to return the dirty bitmap.
+They must be less than the value that KVM_CHECK_EXTENSION returns for
+the KVM_CAP_MULTI_ADDRESS_SPACE capability.
+
 
 4.9 KVM_SET_MEMORY_ALIAS
 
@@ -924,6 +929,13 @@ slot.  When changing an existing slot, it may be moved in the guest
 physical memory space, or its flags may be modified.  It may not be
 resized.  Slots may not overlap in guest physical address space.
 
+If KVM_CAP_MULTI_ADDRESS_SPACE is available, bits 16-31 of "slot"
+specify the address space which is being modified.  They must be
+less than the value that KVM_CHECK_EXTENSION returns for the
+KVM_CAP_MULTI_ADDRESS_SPACE capability.  Slots in separate address spaces
+are unrelated; the restriction on overlapping slots only applies within
+each address space.
+
 Memory for the region is taken starting at the address denoted by the
 field userspace_addr, which must point at user addressable memory for
 the entire memory slot size.  Any object may back this memory, including
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index 3536d12eb798..2aa79c864e91 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -430,7 +430,7 @@ static inline void note_hpte_modification(struct kvm *kvm,
  */
 static inline struct kvm_memslots *kvm_memslots_raw(struct kvm *kvm)
 {
-	return rcu_dereference_raw_notrace(kvm->memslots);
+	return rcu_dereference_raw_notrace(kvm->memslots[0]);
 }
 
 extern void kvmppc_mmu_debugfs_init(struct kvm *kvm);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 427a1034a70e..5c0ee82f6dbd 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -44,6 +44,10 @@
 /* Two fragments for cross MMIO pages. */
 #define KVM_MAX_MMIO_FRAGMENTS	2
 
+#ifndef KVM_ADDRESS_SPACE_NUM
+#define KVM_ADDRESS_SPACE_NUM	1
+#endif
+
 /*
  * For the normal pfn, the highest 12 bits should be zero,
  * so we can mask bit 62 ~ bit 52  to indicate the error pfn,
@@ -331,6 +335,13 @@ struct kvm_kernel_irq_routing_entry {
 #define KVM_MEM_SLOTS_NUM (KVM_USER_MEM_SLOTS + KVM_PRIVATE_MEM_SLOTS)
 #endif
 
+#ifndef __KVM_VCPU_MULTIPLE_ADDRESS_SPACE
+static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+#endif
+
 /*
  * Note:
  * memslots are not sorted by id anymore, please use id_to_memslot()
@@ -349,7 +360,7 @@ struct kvm {
 	spinlock_t mmu_lock;
 	struct mutex slots_lock;
 	struct mm_struct *mm; /* userspace tied to this vm */
-	struct kvm_memslots *memslots;
+	struct kvm_memslots *memslots[KVM_ADDRESS_SPACE_NUM];
 	struct srcu_struct srcu;
 	struct srcu_struct irq_srcu;
 #ifdef CONFIG_KVM_APIC_ARCHITECTURE
@@ -464,16 +475,23 @@ void kvm_exit(void);
 void kvm_get_kvm(struct kvm *kvm);
 void kvm_put_kvm(struct kvm *kvm);
 
-static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
+static inline struct kvm_memslots *__kvm_memslots(struct kvm *kvm, int as_id)
 {
-	return rcu_dereference_check(kvm->memslots,
+	return rcu_dereference_check(kvm->memslots[as_id],
 			srcu_read_lock_held(&kvm->srcu)
 			|| lockdep_is_held(&kvm->slots_lock));
 }
 
+static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
+{
+	return __kvm_memslots(kvm, 0);
+}
+
 static inline struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu)
 {
-	return kvm_memslots(vcpu->kvm);
+	int as_id = kvm_arch_vcpu_memslots_id(vcpu);
+
+	return __kvm_memslots(vcpu->kvm, as_id);
 }
 
 static inline struct kvm_memory_slot *
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index eace8babd227..5ff1038437e3 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -816,6 +816,7 @@ struct kvm_ppc_smmu_info {
 #define KVM_CAP_PPC_HWRNG 115
 #define KVM_CAP_DISABLE_QUIRKS 116
 #define KVM_CAP_X86_SMM 117
+#define KVM_CAP_MULTI_ADDRESS_SPACE 118
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2cbda443880b..6838818e2452 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -517,9 +517,11 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
 
 	r = -ENOMEM;
-	kvm->memslots = kvm_alloc_memslots();
-	if (!kvm->memslots)
-		goto out_err_no_srcu;
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		kvm->memslots[i] = kvm_alloc_memslots();
+		if (!kvm->memslots[i])
+			goto out_err_no_srcu;
+	}
 
 	if (init_srcu_struct(&kvm->srcu))
 		goto out_err_no_srcu;
@@ -561,7 +563,8 @@ out_err_no_srcu:
 out_err_no_disable:
 	for (i = 0; i < KVM_NR_BUSES; i++)
 		kfree(kvm->buses[i]);
-	kvm_free_memslots(kvm, kvm->memslots);
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
+		kvm_free_memslots(kvm, kvm->memslots[i]);
 	kvm_arch_free_vm(kvm);
 	return ERR_PTR(r);
 }
@@ -611,7 +614,8 @@ static void kvm_destroy_vm(struct kvm *kvm)
 #endif
 	kvm_arch_destroy_vm(kvm);
 	kvm_destroy_devices(kvm);
-	kvm_free_memslots(kvm, kvm->memslots);
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
+		kvm_free_memslots(kvm, kvm->memslots[i]);
 	cleanup_srcu_struct(&kvm->irq_srcu);
 	cleanup_srcu_struct(&kvm->srcu);
 	kvm_arch_free_vm(kvm);
@@ -730,9 +734,9 @@ static int check_memory_region_flags(const struct kvm_userspace_memory_region *m
 }
 
 static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
-		struct kvm_memslots *slots)
+		int as_id, struct kvm_memslots *slots)
 {
-	struct kvm_memslots *old_memslots = kvm_memslots(kvm);
+	struct kvm_memslots *old_memslots = __kvm_memslots(kvm, as_id);
 
 	/*
 	 * Set the low bit in the generation, which disables SPTE caching
@@ -741,7 +745,7 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 	WARN_ON(old_memslots->generation & 1);
 	slots->generation = old_memslots->generation + 1;
 
-	rcu_assign_pointer(kvm->memslots, slots);
+	rcu_assign_pointer(kvm->memslots[as_id], slots);
 	synchronize_srcu_expedited(&kvm->srcu);
 
 	/*
@@ -773,6 +777,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	struct kvm_memory_slot *slot;
 	struct kvm_memory_slot old, new;
 	struct kvm_memslots *slots = NULL, *old_memslots;
+	int as_id, id;
 	enum kvm_mr_change change;
 
 	r = check_memory_region_flags(mem);
@@ -780,24 +785,27 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		goto out;
 
 	r = -EINVAL;
+	as_id = mem->slot >> 16;
+	id = mem->slot & 65535;
+
 	/* General sanity checks */
 	if (mem->memory_size & (PAGE_SIZE - 1))
 		goto out;
 	if (mem->guest_phys_addr & (PAGE_SIZE - 1))
 		goto out;
 	/* We can read the guest memory with __xxx_user() later on. */
-	if ((mem->slot < KVM_USER_MEM_SLOTS) &&
+	if ((id < KVM_USER_MEM_SLOTS) &&
 	    ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
 	     !access_ok(VERIFY_WRITE,
 			(void __user *)(unsigned long)mem->userspace_addr,
 			mem->memory_size)))
 		goto out;
-	if (mem->slot >= KVM_MEM_SLOTS_NUM)
+	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
 		goto out;
 	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
 		goto out;
 
-	slot = id_to_memslot(kvm_memslots(kvm), mem->slot);
+	slot = id_to_memslot(__kvm_memslots(kvm, as_id), id);
 	base_gfn = mem->guest_phys_addr >> PAGE_SHIFT;
 	npages = mem->memory_size >> PAGE_SHIFT;
 
@@ -806,7 +814,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 
 	new = old = *slot;
 
-	new.id = mem->slot;
+	new.id = id;
 	new.base_gfn = base_gfn;
 	new.npages = npages;
 	new.flags = mem->flags;
@@ -837,9 +845,9 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
 		/* Check for overlaps */
 		r = -EEXIST;
-		kvm_for_each_memslot(slot, kvm_memslots(kvm)) {
+		kvm_for_each_memslot(slot, __kvm_memslots(kvm, as_id)) {
 			if ((slot->id >= KVM_USER_MEM_SLOTS) ||
-			    (slot->id == mem->slot))
+			    (slot->id == id))
 				continue;
 			if (!((base_gfn + npages <= slot->base_gfn) ||
 			      (base_gfn >= slot->base_gfn + slot->npages)))
@@ -868,13 +876,13 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	slots = kvm_kvzalloc(sizeof(struct kvm_memslots));
 	if (!slots)
 		goto out_free;
-	memcpy(slots, kvm_memslots(kvm), sizeof(struct kvm_memslots));
+	memcpy(slots, __kvm_memslots(kvm, as_id), sizeof(struct kvm_memslots));
 
 	if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) {
-		slot = id_to_memslot(slots, mem->slot);
+		slot = id_to_memslot(slots, id);
 		slot->flags |= KVM_MEMSLOT_INVALID;
 
-		old_memslots = install_new_memslots(kvm, slots);
+		old_memslots = install_new_memslots(kvm, as_id, slots);
 
 		/* slot was deleted or moved, clear iommu mapping */
 		kvm_iommu_unmap_pages(kvm, &old);
@@ -906,7 +914,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	}
 
 	update_memslots(slots, &new);
-	old_memslots = install_new_memslots(kvm, slots);
+	old_memslots = install_new_memslots(kvm, as_id, slots);
 
 	kvm_arch_commit_memory_region(kvm, mem, &old, &new, change);
 
@@ -953,7 +961,7 @@ EXPORT_SYMBOL_GPL(kvm_set_memory_region);
 static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
 					  struct kvm_userspace_memory_region *mem)
 {
-	if (mem->slot >= KVM_USER_MEM_SLOTS)
+	if ((mem->slot & 65535) >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
 
 	if (!mem->memory_size)
@@ -967,16 +975,18 @@ int kvm_get_dirty_log(struct kvm *kvm,
 {
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
-	int r, i;
+	int r, i, as_id, id;
 	unsigned long n;
 	unsigned long any = 0;
 
 	r = -EINVAL;
-	if (log->slot >= KVM_USER_MEM_SLOTS)
+	as_id = log->slot >> 16;
+	id = log->slot & 65535;
+	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
 		goto out;
 
-	slots = kvm_memslots(kvm);
-	memslot = id_to_memslot(slots, log->slot);
+	slots = __kvm_memslots(kvm, as_id);
+	memslot = id_to_memslot(slots, id);
 	r = -ENOENT;
 	if (!memslot->dirty_bitmap)
 		goto out;
@@ -1027,17 +1037,19 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 {
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
-	int r, i;
+	int r, i, as_id, id;
 	unsigned long n;
 	unsigned long *dirty_bitmap;
 	unsigned long *dirty_bitmap_buffer;
 
 	r = -EINVAL;
-	if (log->slot >= KVM_USER_MEM_SLOTS)
+	as_id = log->slot >> 16;
+	id = log->slot & 65535;
+	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
 		goto out;
 
-	slots = kvm_memslots(kvm);
-	memslot = id_to_memslot(slots, log->slot);
+	slots = __kvm_memslots(kvm, as_id);
+	memslot = id_to_memslot(slots, id);
 
 	dirty_bitmap = memslot->dirty_bitmap;
 	r = -ENOENT;
@@ -2592,6 +2604,10 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 	case KVM_CAP_IRQ_ROUTING:
 		return KVM_MAX_IRQ_ROUTES;
 #endif
+#if KVM_ADDRESS_SPACE_NUM > 1
+	case KVM_CAP_MULTI_ADDRESS_SPACE:
+		return KVM_ADDRESS_SPACE_NUM;
+#endif
 	default:
 		break;
 	}
-- 
1.8.3.1



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 09/11] KVM: x86: use vcpu-specific functions to read/write/translate GFNs
  2015-05-18 13:48 [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Paolo Bonzini
                   ` (7 preceding siblings ...)
  2015-05-18 13:48 ` [PATCH 08/11] KVM: implement multiple address spaces Paolo Bonzini
@ 2015-05-18 13:48 ` Paolo Bonzini
  2015-05-18 13:48 ` [PATCH 10/11] KVM: x86: work on all available address spaces Paolo Bonzini
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-18 13:48 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Xiao Guangrong, rkrcmar, bdas

We need to hide SMRAM from guests not running in SMM.  Therefore,
all uses of kvm_read_guest* and kvm_write_guest* must be changed to
check whether the VCPU is in system management mode and use a
different set of memslots.  Switch from kvm_* to the newly-introduced
kvm_vcpu_*, which call into kvm_vcpu_memslots.

For now, the code is just copied from virt/kvm/kvm_main.c, except
for calls to other read/write functions.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu.c              | 36 ++++++++++++++++++------------------
 arch/x86/kvm/paging_tmpl.h      | 14 +++++++-------
 arch/x86/kvm/svm.c              | 10 +++++-----
 arch/x86/kvm/vmx.c              | 22 +++++++++++-----------
 arch/x86/kvm/x86.c              | 26 +++++++++++++-------------
 6 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b872bad8855e..71232ccc8fe4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -885,7 +885,7 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 				   struct kvm_memory_slot *slot,
 				   gfn_t gfn_offset, unsigned long mask);
 void kvm_mmu_zap_all(struct kvm *kvm);
-void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm);
+void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots);
 unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
 
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 817ae3ece05c..7eff9d249f39 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -223,15 +223,15 @@ static unsigned int get_mmio_spte_generation(u64 spte)
 	return gen;
 }
 
-static unsigned int kvm_current_mmio_generation(struct kvm *kvm)
+static unsigned int kvm_current_mmio_generation(struct kvm_vcpu *vcpu)
 {
-	return kvm_memslots(kvm)->generation & MMIO_GEN_MASK;
+	return kvm_vcpu_memslots(vcpu)->generation & MMIO_GEN_MASK;
 }
 
-static void mark_mmio_spte(struct kvm *kvm, u64 *sptep, u64 gfn,
+static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
 			   unsigned access)
 {
-	unsigned int gen = kvm_current_mmio_generation(kvm);
+	unsigned int gen = kvm_current_mmio_generation(vcpu);
 	u64 mask = generation_mmio_spte_mask(gen);
 
 	access &= ACC_WRITE_MASK | ACC_USER_MASK;
@@ -258,22 +258,22 @@ static unsigned get_mmio_spte_access(u64 spte)
 	return (spte & ~mask) & ~PAGE_MASK;
 }
 
-static bool set_mmio_spte(struct kvm *kvm, u64 *sptep, gfn_t gfn,
+static bool set_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 			  pfn_t pfn, unsigned access)
 {
 	if (unlikely(is_noslot_pfn(pfn))) {
-		mark_mmio_spte(kvm, sptep, gfn, access);
+		mark_mmio_spte(vcpu, sptep, gfn, access);
 		return true;
 	}
 
 	return false;
 }
 
-static bool check_mmio_spte(struct kvm *kvm, u64 spte)
+static bool check_mmio_spte(struct kvm_vcpu *vcpu, u64 spte)
 {
 	unsigned int kvm_gen, spte_gen;
 
-	kvm_gen = kvm_current_mmio_generation(kvm);
+	kvm_gen = kvm_current_mmio_generation(vcpu);
 	spte_gen = get_mmio_spte_generation(spte);
 
 	trace_check_mmio_spte(spte, kvm_gen, spte_gen);
@@ -872,7 +872,7 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn,
 {
 	struct kvm_memory_slot *slot;
 
-	slot = gfn_to_memslot(vcpu->kvm, gfn);
+	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	if (!slot || slot->flags & KVM_MEMSLOT_INVALID ||
 	      (no_dirty_log && slot->dirty_bitmap))
 		slot = NULL;
@@ -2577,7 +2577,7 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	u64 spte;
 	int ret = 0;
 
-	if (set_mmio_spte(vcpu->kvm, sptep, gfn, pfn, pte_access))
+	if (set_mmio_spte(vcpu, sptep, gfn, pfn, pte_access))
 		return 0;
 
 	spte = PT_PRESENT_MASK;
@@ -3424,7 +3424,7 @@ int handle_mmio_page_fault_common(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 		gfn_t gfn = get_mmio_spte_gfn(spte);
 		unsigned access = get_mmio_spte_access(spte);
 
-		if (!check_mmio_spte(vcpu->kvm, spte))
+		if (!check_mmio_spte(vcpu, spte))
 			return RET_MMIO_PF_INVALID;
 
 		if (direct)
@@ -3496,7 +3496,7 @@ static int kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn)
 	arch.direct_map = vcpu->arch.mmu.direct_map;
 	arch.cr3 = vcpu->arch.mmu.get_cr3(vcpu);
 
-	return kvm_setup_async_pf(vcpu, gva, gfn_to_hva(vcpu->kvm, gfn), &arch);
+	return kvm_setup_async_pf(vcpu, gva, kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch);
 }
 
 static bool can_do_async_pf(struct kvm_vcpu *vcpu)
@@ -3514,7 +3514,7 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 	struct kvm_memory_slot *slot;
 	bool async;
 
-	slot = gfn_to_memslot(vcpu->kvm, gfn);
+	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	async = false;
 	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable);
 	if (!async)
@@ -3627,7 +3627,7 @@ static void inject_page_fault(struct kvm_vcpu *vcpu,
 	vcpu->arch.mmu.inject_page_fault(vcpu, fault);
 }
 
-static bool sync_mmio_spte(struct kvm *kvm, u64 *sptep, gfn_t gfn,
+static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 			   unsigned access, int *nr_present)
 {
 	if (unlikely(is_mmio_spte(*sptep))) {
@@ -3637,7 +3637,7 @@ static bool sync_mmio_spte(struct kvm *kvm, u64 *sptep, gfn_t gfn,
 		}
 
 		(*nr_present)++;
-		mark_mmio_spte(kvm, sptep, gfn, access);
+		mark_mmio_spte(vcpu, sptep, gfn, access);
 		return true;
 	}
 
@@ -4147,7 +4147,7 @@ static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
 		/* Handle a 32-bit guest writing two halves of a 64-bit gpte */
 		*gpa &= ~(gpa_t)7;
 		*bytes = 8;
-		r = kvm_read_guest(vcpu->kvm, *gpa, &gentry, 8);
+		r = kvm_vcpu_read_guest(vcpu, *gpa, &gentry, 8);
 		if (r)
 			gentry = 0;
 		new = (const u8 *)&gentry;
@@ -4773,13 +4773,13 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
 	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
 }
 
-void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm)
+void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots)
 {
 	/*
 	 * The very rare case: if the generation-number is round,
 	 * zap all shadow pages.
 	 */
-	if (unlikely(kvm_current_mmio_generation(kvm) == 0)) {
+	if (unlikely((slots->generation & MMIO_GEN_MASK) == 0)) {
 		printk_ratelimited(KERN_DEBUG "kvm: zapping shadow pages for mmio generation wraparound\n");
 		kvm_mmu_invalidate_zap_all_pages(kvm);
 	}
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 6e6d115fe9b5..8a6c582db4b1 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -511,11 +511,11 @@ static bool FNAME(gpte_changed)(struct kvm_vcpu *vcpu,
 		base_gpa = pte_gpa & ~mask;
 		index = (pte_gpa - base_gpa) / sizeof(pt_element_t);
 
-		r = kvm_read_guest_atomic(vcpu->kvm, base_gpa,
+		r = kvm_vcpu_read_guest_atomic(vcpu, base_gpa,
 				gw->prefetch_ptes, sizeof(gw->prefetch_ptes));
 		curr_pte = gw->prefetch_ptes[index];
 	} else
-		r = kvm_read_guest_atomic(vcpu->kvm, pte_gpa,
+		r = kvm_vcpu_read_guest_atomic(vcpu, pte_gpa,
 				  &curr_pte, sizeof(curr_pte));
 
 	return r || curr_pte != gw->ptes[level - 1];
@@ -869,8 +869,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 			if (!rmap_can_add(vcpu))
 				break;
 
-			if (kvm_read_guest_atomic(vcpu->kvm, pte_gpa, &gpte,
-						  sizeof(pt_element_t)))
+			if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte,
+						       sizeof(pt_element_t)))
 				break;
 
 			FNAME(update_pte)(vcpu, sp, sptep, &gpte);
@@ -956,8 +956,8 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 
 		pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);
 
-		if (kvm_read_guest_atomic(vcpu->kvm, pte_gpa, &gpte,
-					  sizeof(pt_element_t)))
+		if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte,
+					       sizeof(pt_element_t)))
 			return -EINVAL;
 
 		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
@@ -970,7 +970,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 		pte_access &= FNAME(gpte_access)(vcpu, gpte);
 		FNAME(protect_clean_gpte)(&pte_access, gpte);
 
-		if (sync_mmio_spte(vcpu->kvm, &sp->spt[i], gfn, pte_access,
+		if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access,
 		      &nr_present))
 			continue;
 
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 6afffb82513c..ce22872cfad6 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1955,8 +1955,8 @@ static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index)
 	u64 pdpte;
 	int ret;
 
-	ret = kvm_read_guest_page(vcpu->kvm, gpa_to_gfn(cr3), &pdpte,
-				  offset_in_page(cr3) + index * 8, 8);
+	ret = kvm_vcpu_read_guest_page(vcpu, gpa_to_gfn(cr3), &pdpte,
+				       offset_in_page(cr3) + index * 8, 8);
 	if (ret)
 		return 0;
 	return pdpte;
@@ -2153,7 +2153,7 @@ static int nested_svm_intercept_ioio(struct vcpu_svm *svm)
 	mask = (0xf >> (4 - size)) << start_bit;
 	val = 0;
 
-	if (kvm_read_guest(svm->vcpu.kvm, gpa, &val, iopm_len))
+	if (kvm_vcpu_read_guest(&svm->vcpu, gpa, &val, iopm_len))
 		return NESTED_EXIT_DONE;
 
 	return (val & mask) ? NESTED_EXIT_DONE : NESTED_EXIT_HOST;
@@ -2178,7 +2178,7 @@ static int nested_svm_exit_handled_msr(struct vcpu_svm *svm)
 	/* Offset is in 32 bit units but need in 8 bit units */
 	offset *= 4;
 
-	if (kvm_read_guest(svm->vcpu.kvm, svm->nested.vmcb_msrpm + offset, &value, 4))
+	if (kvm_vcpu_read_guest(&svm->vcpu, svm->nested.vmcb_msrpm + offset, &value, 4))
 		return NESTED_EXIT_DONE;
 
 	return (value & mask) ? NESTED_EXIT_DONE : NESTED_EXIT_HOST;
@@ -2449,7 +2449,7 @@ static bool nested_svm_vmrun_msrpm(struct vcpu_svm *svm)
 		p      = msrpm_offsets[i];
 		offset = svm->nested.vmcb_msrpm + (p * 4);
 
-		if (kvm_read_guest(svm->vcpu.kvm, offset, &value, 4))
+		if (kvm_vcpu_read_guest(&svm->vcpu, offset, &value, 4))
 			return false;
 
 		svm->nested.msrpm[p] = svm->msrpm[p] | value;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9904aed78c72..c9eab7440c73 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7323,7 +7323,7 @@ static bool nested_vmx_exit_handled_io(struct kvm_vcpu *vcpu,
 		bitmap += (port & 0x7fff) / 8;
 
 		if (last_bitmap != bitmap)
-			if (kvm_read_guest(vcpu->kvm, bitmap, &b, 1))
+			if (kvm_vcpu_read_guest(vcpu, bitmap, &b, 1))
 				return true;
 		if (b & (1 << (port & 7)))
 			return true;
@@ -7367,7 +7367,7 @@ static bool nested_vmx_exit_handled_msr(struct kvm_vcpu *vcpu,
 	/* Then read the msr_index'th bit from this bitmap: */
 	if (msr_index < 1024*8) {
 		unsigned char b;
-		if (kvm_read_guest(vcpu->kvm, bitmap + msr_index/8, &b, 1))
+		if (kvm_vcpu_read_guest(vcpu, bitmap + msr_index/8, &b, 1))
 			return true;
 		return 1 & (b >> (msr_index & 7));
 	} else
@@ -9109,8 +9109,8 @@ static u32 nested_vmx_load_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 
 	msr.host_initiated = false;
 	for (i = 0; i < count; i++) {
-		if (kvm_read_guest(vcpu->kvm, gpa + i * sizeof(e),
-				   &e, sizeof(e))) {
+		if (kvm_vcpu_read_guest(vcpu, gpa + i * sizeof(e),
+					&e, sizeof(e))) {
 			pr_warn_ratelimited(
 				"%s cannot read MSR entry (%u, 0x%08llx)\n",
 				__func__, i, gpa + i * sizeof(e));
@@ -9143,9 +9143,9 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 
 	for (i = 0; i < count; i++) {
 		struct msr_data msr_info;
-		if (kvm_read_guest(vcpu->kvm,
-				   gpa + i * sizeof(e),
-				   &e, 2 * sizeof(u32))) {
+		if (kvm_vcpu_read_guest(vcpu,
+					gpa + i * sizeof(e),
+					&e, 2 * sizeof(u32))) {
 			pr_warn_ratelimited(
 				"%s cannot read MSR entry (%u, 0x%08llx)\n",
 				__func__, i, gpa + i * sizeof(e));
@@ -9165,10 +9165,10 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 				__func__, i, e.index);
 			return -EINVAL;
 		}
-		if (kvm_write_guest(vcpu->kvm,
-				    gpa + i * sizeof(e) +
-					offsetof(struct vmx_msr_entry, value),
-				    &msr_info.data, sizeof(msr_info.data))) {
+		if (kvm_vcpu_write_guest(vcpu,
+					 gpa + i * sizeof(e) +
+					     offsetof(struct vmx_msr_entry, value),
+					 &msr_info.data, sizeof(msr_info.data))) {
 			pr_warn_ratelimited(
 				"%s cannot write MSR (%u, 0x%x, 0x%llx)\n",
 				__func__, i, e.index, msr_info.data);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 753cf20c0ec5..0b6192fc348a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -475,7 +475,7 @@ EXPORT_SYMBOL_GPL(kvm_require_dr);
 
 /*
  * This function will be used to read from the physical memory of the currently
- * running guest. The difference to kvm_read_guest_page is that this function
+ * running guest. The difference to kvm_vcpu_read_guest_page is that this function
  * can read from guest physical or from the guest's guest physical memory.
  */
 int kvm_read_guest_page_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
@@ -493,7 +493,7 @@ int kvm_read_guest_page_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 
 	real_gfn = gpa_to_gfn(real_gfn);
 
-	return kvm_read_guest_page(vcpu->kvm, real_gfn, data, offset, len);
+	return kvm_vcpu_read_guest_page(vcpu, real_gfn, data, offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_read_guest_page_mmu);
 
@@ -2022,7 +2022,7 @@ static int xen_hvm_config(struct kvm_vcpu *vcpu, u64 data)
 		r = PTR_ERR(page);
 		goto out;
 	}
-	if (kvm_write_guest(kvm, page_addr, page, PAGE_SIZE))
+	if (kvm_vcpu_write_guest(vcpu, page_addr, page, PAGE_SIZE))
 		goto out_free;
 	r = 0;
 out_free:
@@ -2122,7 +2122,7 @@ static int set_msr_hyperv(struct kvm_vcpu *vcpu, u32 msr, u64 data)
 			break;
 		}
 		gfn = data >> HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_SHIFT;
-		addr = gfn_to_hva(vcpu->kvm, gfn);
+		addr = kvm_vcpu_gfn_to_hva(vcpu, gfn);
 		if (kvm_is_error_hva(addr))
 			return 1;
 		if (__clear_user((void __user *)addr, PAGE_SIZE))
@@ -4409,8 +4409,8 @@ static int kvm_read_guest_virt_helper(gva_t addr, void *val, unsigned int bytes,
 
 		if (gpa == UNMAPPED_GVA)
 			return X86EMUL_PROPAGATE_FAULT;
-		ret = kvm_read_guest_page(vcpu->kvm, gpa >> PAGE_SHIFT, data,
-					  offset, toread);
+		ret = kvm_vcpu_read_guest_page(vcpu, gpa >> PAGE_SHIFT, data,
+					       offset, toread);
 		if (ret < 0) {
 			r = X86EMUL_IO_NEEDED;
 			goto out;
@@ -4443,8 +4443,8 @@ static int kvm_fetch_guest_virt(struct x86_emulate_ctxt *ctxt,
 	offset = addr & (PAGE_SIZE-1);
 	if (WARN_ON(offset + bytes > PAGE_SIZE))
 		bytes = (unsigned)PAGE_SIZE - offset;
-	ret = kvm_read_guest_page(vcpu->kvm, gpa >> PAGE_SHIFT, val,
-				  offset, bytes);
+	ret = kvm_vcpu_read_guest_page(vcpu, gpa >> PAGE_SHIFT, val,
+				       offset, bytes);
 	if (unlikely(ret < 0))
 		return X86EMUL_IO_NEEDED;
 
@@ -4490,7 +4490,7 @@ int kvm_write_guest_virt_system(struct x86_emulate_ctxt *ctxt,
 
 		if (gpa == UNMAPPED_GVA)
 			return X86EMUL_PROPAGATE_FAULT;
-		ret = kvm_write_guest(vcpu->kvm, gpa, data, towrite);
+		ret = kvm_vcpu_write_guest(vcpu, gpa, data, towrite);
 		if (ret < 0) {
 			r = X86EMUL_IO_NEEDED;
 			goto out;
@@ -4543,7 +4543,7 @@ int emulator_write_phys(struct kvm_vcpu *vcpu, gpa_t gpa,
 {
 	int ret;
 
-	ret = kvm_write_guest(vcpu->kvm, gpa, val, bytes);
+	ret = kvm_vcpu_write_guest(vcpu, gpa, val, bytes);
 	if (ret < 0)
 		return 0;
 	kvm_mmu_pte_write(vcpu, gpa, val, bytes);
@@ -4577,7 +4577,7 @@ static int read_prepare(struct kvm_vcpu *vcpu, void *val, int bytes)
 static int read_emulate(struct kvm_vcpu *vcpu, gpa_t gpa,
 			void *val, int bytes)
 {
-	return !kvm_read_guest(vcpu->kvm, gpa, val, bytes);
+	return !kvm_vcpu_read_guest(vcpu, gpa, val, bytes);
 }
 
 static int write_emulate(struct kvm_vcpu *vcpu, gpa_t gpa,
@@ -6544,7 +6544,7 @@ static void process_smi(struct kvm_vcpu *vcpu)
 	else
 		process_smi_save_state_32(vcpu, buf);
 
-	r = kvm_write_guest(vcpu->kvm, vcpu->arch.smbase + 0xfe00, buf, sizeof(buf));
+	r = kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, buf, sizeof(buf));
 	if (r < 0)
 		return;
 
@@ -8034,7 +8034,7 @@ void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots)
 	 * memslots->generation has been incremented.
 	 * mmio generation may have reached its maximum value.
 	 */
-	kvm_mmu_invalidate_mmio_sptes(kvm);
+	kvm_mmu_invalidate_mmio_sptes(kvm, slots);
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-- 
1.8.3.1



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 10/11] KVM: x86: work on all available address spaces
  2015-05-18 13:48 [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Paolo Bonzini
                   ` (8 preceding siblings ...)
  2015-05-18 13:48 ` [PATCH 09/11] KVM: x86: use vcpu-specific functions to read/write/translate GFNs Paolo Bonzini
@ 2015-05-18 13:48 ` Paolo Bonzini
  2015-05-18 13:48 ` [PATCH 11/11] KVM: x86: add SMM to the MMU role, support SMRAM address space Paolo Bonzini
  2015-05-20  8:31 ` [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Christian Borntraeger
  11 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-18 13:48 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Xiao Guangrong, rkrcmar, bdas

This patch has no semantic change, but it prepares for the introduction
of a second address space for system management mode.

A new function x86_set_memory_region (and the "slots_lock taken"
counterpart __x86_set_memory_region) is introduced in order to
operate on all address spaces when adding or deleting private
memory slots.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  5 +++
 arch/x86/kvm/mmu.c              | 84 ++++++++++++++++++++++-------------------
 arch/x86/kvm/vmx.c              |  6 +--
 arch/x86/kvm/x86.c              | 40 ++++++++++++++++++--
 4 files changed, 91 insertions(+), 44 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 71232ccc8fe4..d7957733de61 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1187,4 +1187,9 @@ int kvm_pmu_read_pmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
 void kvm_handle_pmu_event(struct kvm_vcpu *vcpu);
 void kvm_deliver_pmi(struct kvm_vcpu *vcpu);
 
+int __x86_set_memory_region(struct kvm *kvm,
+			    const struct kvm_userspace_memory_region *mem);
+int x86_set_memory_region(struct kvm *kvm,
+			  const struct kvm_userspace_memory_region *mem);
+
 #endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7eff9d249f39..b7e94a662f4e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1499,30 +1499,33 @@ static int kvm_handle_hva_range(struct kvm *kvm,
 	struct kvm_memory_slot *memslot;
 	struct slot_rmap_walk_iterator iterator;
 	int ret = 0;
+	int i;
 
-	slots = kvm_memslots(kvm);
-
-	kvm_for_each_memslot(memslot, slots) {
-		unsigned long hva_start, hva_end;
-		gfn_t gfn_start, gfn_end;
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot(memslot, slots) {
+			unsigned long hva_start, hva_end;
+			gfn_t gfn_start, gfn_end;
 
-		hva_start = max(start, memslot->userspace_addr);
-		hva_end = min(end, memslot->userspace_addr +
-					(memslot->npages << PAGE_SHIFT));
-		if (hva_start >= hva_end)
-			continue;
-		/*
-		 * {gfn(page) | page intersects with [hva_start, hva_end)} =
-		 * {gfn_start, gfn_start+1, ..., gfn_end-1}.
-		 */
-		gfn_start = hva_to_gfn_memslot(hva_start, memslot);
-		gfn_end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, memslot);
-
-		for_each_slot_rmap_range(memslot, PT_PAGE_TABLE_LEVEL,
-				PT_MAX_HUGEPAGE_LEVEL, gfn_start, gfn_end - 1,
-				&iterator)
-			ret |= handler(kvm, iterator.rmap, memslot,
-				       iterator.gfn, iterator.level, data);
+			hva_start = max(start, memslot->userspace_addr);
+			hva_end = min(end, memslot->userspace_addr +
+				      (memslot->npages << PAGE_SHIFT));
+			if (hva_start >= hva_end)
+				continue;
+			/*
+			 * {gfn(page) | page intersects with [hva_start, hva_end)} =
+			 * {gfn_start, gfn_start+1, ..., gfn_end-1}.
+			 */
+			gfn_start = hva_to_gfn_memslot(hva_start, memslot);
+			gfn_end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, memslot);
+
+			for_each_slot_rmap_range(memslot, PT_PAGE_TABLE_LEVEL,
+						 PT_MAX_HUGEPAGE_LEVEL,
+						 gfn_start, gfn_end - 1,
+						 &iterator)
+				ret |= handler(kvm, iterator.rmap, memslot,
+					       iterator.gfn, iterator.level, data);
+		}
 	}
 
 	return ret;
@@ -4530,21 +4533,23 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 {
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
-
-	slots = kvm_memslots(kvm);
+	int i;
 
 	spin_lock(&kvm->mmu_lock);
-	kvm_for_each_memslot(memslot, slots) {
-		gfn_t start, end;
-
-		start = max(gfn_start, memslot->base_gfn);
-		end = min(gfn_end, memslot->base_gfn + memslot->npages);
-		if (start >= end)
-			continue;
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot(memslot, slots) {
+			gfn_t start, end;
+
+			start = max(gfn_start, memslot->base_gfn);
+			end = min(gfn_end, memslot->base_gfn + memslot->npages);
+			if (start >= end)
+				continue;
 
-		slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
-				PT_PAGE_TABLE_LEVEL, PT_MAX_HUGEPAGE_LEVEL,
-				start, end - 1, true);
+			slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
+						PT_PAGE_TABLE_LEVEL, PT_MAX_HUGEPAGE_LEVEL,
+						start, end - 1, true);
+		}
 	}
 
 	spin_unlock(&kvm->mmu_lock);
@@ -4901,15 +4906,18 @@ unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
 	unsigned int  nr_pages = 0;
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
+	int i;
 
-	slots = kvm_memslots(kvm);
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
 
-	kvm_for_each_memslot(memslot, slots)
-		nr_pages += memslot->npages;
+		kvm_for_each_memslot(memslot, slots)
+			nr_pages += memslot->npages;
+	}
 
 	nr_mmu_pages = nr_pages * KVM_PERMILLE_MMU_PAGES / 1000;
 	nr_mmu_pages = max(nr_mmu_pages,
-			(unsigned int) KVM_MIN_ALLOC_MMU_PAGES);
+			   (unsigned int) KVM_MIN_ALLOC_MMU_PAGES);
 
 	return nr_mmu_pages;
 }
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index c9eab7440c73..23340cec42af 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4115,7 +4115,7 @@ static int alloc_apic_access_page(struct kvm *kvm)
 	kvm_userspace_mem.flags = 0;
 	kvm_userspace_mem.guest_phys_addr = APIC_DEFAULT_PHYS_BASE;
 	kvm_userspace_mem.memory_size = PAGE_SIZE;
-	r = __kvm_set_memory_region(kvm, &kvm_userspace_mem);
+	r = __x86_set_memory_region(kvm, &kvm_userspace_mem);
 	if (r)
 		goto out;
 
@@ -4150,7 +4150,7 @@ static int alloc_identity_pagetable(struct kvm *kvm)
 	kvm_userspace_mem.guest_phys_addr =
 		kvm->arch.ept_identity_map_addr;
 	kvm_userspace_mem.memory_size = PAGE_SIZE;
-	r = __kvm_set_memory_region(kvm, &kvm_userspace_mem);
+	r = __x86_set_memory_region(kvm, &kvm_userspace_mem);
 
 	return r;
 }
@@ -4956,7 +4956,7 @@ static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
 		.flags = 0,
 	};
 
-	ret = kvm_set_memory_region(kvm, &tss_mem);
+	ret = x86_set_memory_region(kvm, &tss_mem);
 	if (ret)
 		return ret;
 	kvm->arch.tss_addr = addr;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0b6192fc348a..bb637e893e13 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7922,6 +7922,40 @@ void kvm_arch_sync_events(struct kvm *kvm)
 	kvm_free_pit(kvm);
 }
 
+int __x86_set_memory_region(struct kvm *kvm,
+			    const struct kvm_userspace_memory_region *mem)
+{
+	int i, r;
+
+	/* Called with kvm->slots_lock held.  */
+	BUG_ON(mem->slot >= KVM_MEM_SLOTS_NUM);
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		struct kvm_userspace_memory_region m = *mem;
+
+		m.slot |= i << 16;
+		r = __kvm_set_memory_region(kvm, &m);
+		if (r < 0)
+			return r;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(__x86_set_memory_region);
+
+int x86_set_memory_region(struct kvm *kvm,
+			  const struct kvm_userspace_memory_region *mem)
+{
+	int r;
+
+	mutex_lock(&kvm->slots_lock);
+	r = __x86_set_memory_region(kvm, mem);
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+EXPORT_SYMBOL_GPL(x86_set_memory_region);
+
 void kvm_arch_destroy_vm(struct kvm *kvm)
 {
 	if (current->mm == kvm->mm) {
@@ -7933,13 +7967,13 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 		struct kvm_userspace_memory_region mem;
 		memset(&mem, 0, sizeof(mem));
 		mem.slot = APIC_ACCESS_PAGE_PRIVATE_MEMSLOT;
-		kvm_set_memory_region(kvm, &mem);
+		x86_set_memory_region(kvm, &mem);
 
 		mem.slot = IDENTITY_PAGETABLE_PRIVATE_MEMSLOT;
-		kvm_set_memory_region(kvm, &mem);
+		x86_set_memory_region(kvm, &mem);
 
 		mem.slot = TSS_PRIVATE_MEMSLOT;
-		kvm_set_memory_region(kvm, &mem);
+		x86_set_memory_region(kvm, &mem);
 	}
 	kvm_iommu_unmap_guest(kvm);
 	kfree(kvm->arch.vpic);
-- 
1.8.3.1



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 11/11] KVM: x86: add SMM to the MMU role, support SMRAM address space
  2015-05-18 13:48 [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Paolo Bonzini
                   ` (9 preceding siblings ...)
  2015-05-18 13:48 ` [PATCH 10/11] KVM: x86: work on all available address spaces Paolo Bonzini
@ 2015-05-18 13:48 ` Paolo Bonzini
  2015-05-20  8:34   ` Xiao Guangrong
  2015-05-20  8:31 ` [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Christian Borntraeger
  11 siblings, 1 reply; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-18 13:48 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Xiao Guangrong, rkrcmar, bdas

This is now very simple to do.  The only interesting part is a simple
trick to find the right memslot in gfn_to_rmap, retrieving the address
space from the spte role word.

The comment on top of union kvm_mmu_page_role has been stale forever,
so remove it.  Speaking of stale code, remove pad_for_nice_hex_output
too: it was splitting the "access" bitfield across two bytes and thus
had effectively turned into pad_for_ugly_hex_output.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 19 ++++++++-----------
 arch/x86/kvm/mmu.c              |  7 ++++++-
 arch/x86/kvm/x86.c              |  1 +
 3 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d7957733de61..5dd9d960595c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -184,23 +184,12 @@ struct kvm_mmu_memory_cache {
 	void *objects[KVM_NR_MEM_OBJS];
 };
 
-/*
- * kvm_mmu_page_role, below, is defined as:
- *
- *   bits 0:3 - total guest paging levels (2-4, or zero for real mode)
- *   bits 4:7 - page table level for this shadow (1-4)
- *   bits 8:9 - page table quadrant for 2-level guests
- *   bit   16 - direct mapping of virtual to physical mapping at gfn
- *              used for real mode and two-dimensional paging
- *   bits 17:19 - common access permissions for all ptes in this shadow page
- */
 union kvm_mmu_page_role {
 	unsigned word;
 	struct {
 		unsigned level:4;
 		unsigned cr4_pae:1;
 		unsigned quadrant:2;
-		unsigned pad_for_nice_hex_output:6;
 		unsigned direct:1;
 		unsigned access:3;
 		unsigned invalid:1;
@@ -208,6 +198,7 @@ union kvm_mmu_page_role {
 		unsigned cr0_wp:1;
 		unsigned smep_andnot_wp:1;
 		unsigned smap_andnot_wp:1;
+		unsigned smm:1;
 	};
 };
 
@@ -1118,6 +1109,12 @@ enum {
 #define HF_SMM_MASK		(1 << 6)
 #define HF_SMM_INSIDE_NMI_MASK	(1 << 7)
 
+#define __KVM_VCPU_MULTIPLE_ADDRESS_SPACE
+#define KVM_ADDRESS_SPACE_NUM 2
+
+#define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
+#define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
+
 /*
  * Hardware virtualization extension instructions may fault if a
  * reboot turns off virtualization while processes are running.
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b7e94a662f4e..92436bc4f3e0 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1041,9 +1041,11 @@ static unsigned long *__gfn_to_rmap(gfn_t gfn, int level,
  */
 static unsigned long *gfn_to_rmap(struct kvm *kvm, gfn_t gfn, struct kvm_mmu_page *sp)
 {
+	struct kvm_memslots *slots;
 	struct kvm_memory_slot *slot;
 
-	slot = gfn_to_memslot(kvm, gfn);
+	slots = kvm_memslots_for_spte_role(kvm, sp->role);
+	slot = __gfn_to_memslot(slots, gfn);
 	return __gfn_to_rmap(gfn, sp->role.level, slot);
 }
 
@@ -3918,6 +3920,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	struct kvm_mmu *context = &vcpu->arch.mmu;
 
 	context->base_role.word = 0;
+	context->base_role.smm = is_smm(vcpu);
 	context->page_fault = tdp_page_fault;
 	context->sync_page = nonpaging_sync_page;
 	context->invlpg = nonpaging_invlpg;
@@ -3979,6 +3982,7 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
 		= smep && !is_write_protection(vcpu);
 	context->base_role.smap_andnot_wp
 		= smap && !is_write_protection(vcpu);
+	context->base_role.smm = is_smm(vcpu);
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
 
@@ -4261,6 +4265,7 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 		.nxe = 1,
 		.smep_andnot_wp = 1,
 		.smap_andnot_wp = 1,
+		.smm = 1,
 	};
 
 	/*
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index bb637e893e13..8a05770b9fa1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5473,6 +5473,7 @@ void kvm_set_hflags(struct kvm_vcpu *vcpu, unsigned emul_flags)
 	}
 
 	vcpu->arch.hflags = emul_flags;
+	kvm_mmu_reset_context(vcpu);
 }
 
 static int kvm_vcpu_check_hw_bp(unsigned long addr, u32 type, u32 dr7,
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH 08/11] KVM: implement multiple address spaces
  2015-05-18 13:48 ` [PATCH 08/11] KVM: implement multiple address spaces Paolo Bonzini
@ 2015-05-19 13:32   ` Radim Krčmář
  2015-05-19 16:19     ` Paolo Bonzini
  0 siblings, 1 reply; 24+ messages in thread
From: Radim Krčmář @ 2015-05-19 13:32 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Xiao Guangrong, bdas

2015-05-18 15:48+0200, Paolo Bonzini:
> +If KVM_CAP_MULTI_ADDRESS_SPACE is available, bits 16-31 of "slot"
> +specifies the address space which is being modified.  They must be
> +less than the value that KVM_CHECK_EXTENSION returns for the
> +KVM_CAP_MULTI_ADDRESS_SPACE capability.

> +                                         Slots in separate address spaces
> +are unrelated;

I'd prefer to decouple address spaces and slots.  KVM_GET_DIRTY_LOG
could stay the same if we said that a slot can be in multiple address
spaces.  (Well, we could do that even now, by searching for slots that
differ only in address space id and pointing them to same dirty bitmap.
This even makes sense, which is a sign of lacking design :)

The main drawback is that forcing decoupling on existing IOCTLs would be
really awkward ... we'd need to have new API for address spaces;
there are two basic operations on an address space:
  add and remove slot (needs: slot id, address space id)
which would give roughly the same functionality as this patch.
(To maximize compatibility, there could be a default address space and a
 slot flag that doesn't automatically add it there.)
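
A purely hypothetical sketch of what I mean -- neither the structure nor
the ioctl numbers exist, they are only there to illustrate the two
operations above:

	struct kvm_address_space_slot {
		__u32 as_id;	/* address space id */
		__u32 slot;	/* id of an already-created memslot */
	};

	#define KVM_ADDRESS_SPACE_ADD_SLOT \
		_IOW(KVMIO, 0xf0, struct kvm_address_space_slot)
	#define KVM_ADDRESS_SPACE_DEL_SLOT \
		_IOW(KVMIO, 0xf1, struct kvm_address_space_slot)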

On top of this, we could allow hierarchical address spaces, so very similar
address spaces (like SMM) would be easier to set up.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 08/11] KVM: implement multiple address spaces
  2015-05-19 13:32   ` Radim Krčmář
@ 2015-05-19 16:19     ` Paolo Bonzini
  2015-05-19 18:28       ` Radim Krčmář
  0 siblings, 1 reply; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-19 16:19 UTC (permalink / raw)
  To: Radim Krčmář; +Cc: linux-kernel, kvm, Xiao Guangrong, bdas



On 19/05/2015 15:32, Radim Krčmář wrote:
> I'd prefer to decouple address spaces and slots.  KVM_GET_DIRTY_LOG
> could stay the same if we said that a slot can be in multiple address
> spaces.

Actually, my original plan was a third one.  I wanted to have a fixed
number of address spaces, where CPUs would use only the first ones (0
for non-x86, 0+1 for x86).  Then I would let userspace pick a "dirty log
address space" which, in the case of QEMU, would simply use ram_addr_t
as the address.

However, that doesn't work because when marking a page dirty you need
either a (slots, gfn) or a (memslot, relative address in memslot) pair.
 Given the gfn that the VCPU has faulted on, it would be very expensive
to find the corresponding gfn in the "dirty log address space".  On the
contrary, it's very easy if the VCPU can simply query its current memslots.

In your proposal, there is the problem of where to do the overlap check.
 You cannot do it in the slots because it messes up userspace seriously.
 QEMU for example wants to use as few slots as possible and merges
consecutive slots; but what is mergeable in one address space need not
be mergeable in another.  And if you do it in the address spaces, you
have not solved the problem of multiple dirty bitmaps pointing to the
same userspace address.

BTW, a few more calls have to be converted to kvm_vcpu_ equivalents.
I've now gone through all occurrences of "gfn_to_" and found that we
have more invocations of gfn_to_page_many_atomic, gfn_to_pfn_atomic,
gfn_to_pfn, and gfn_to_page to convert.  Also, mark_page_dirty must be
changed to kvm_vcpu_mark_page_dirty.
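
(For mark_page_dirty the conversion is presumably just a thin wrapper --
untested sketch, reusing the existing mark_page_dirty_in_slot helper:

	void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
	{
		struct kvm_memory_slot *memslot =
			kvm_vcpu_gfn_to_memslot(vcpu, gfn);

		mark_page_dirty_in_slot(memslot, gfn);
	}

and similarly for the gfn_to_page/gfn_to_pfn family.)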

> (Well, we could do that even now, by searching for slots that
> differ only in address space id and pointing them to the same dirty bitmap.
> This even makes sense, which is a sign of lacking design :)

I thought about this, but ultimately it sounded like premature
optimization (and also might not even work due to the aforementioned
problem with merging of adjacent slots).

A possible optimization is to set a flag when no bits are set in the
dirty bitmap, and skip the iteration.  This is the common case for the
SMM memory space for example.  But it can be done later, and is an
independent work.

Keeping slots separate for different address spaces also makes the most
sense in QEMU, because each address space has a separate MemoryListener
that only knows about one address space.  There is one log_sync callback
per address space and no code has to know about all address spaces.

> The main drawback is that forcing decoupling on existing IOCTLs would be
> really awkward ... we'd need to have new API for address spaces;
> there are two basic operations on an address space:
>   add and remove slot (needs: slot id, address space id)
> which would give roughly the same functionality as this patch.
> (To maximize compatibility, there could be a default address space and a
>  slot flag that doesn't automatically add it there.)
> On top of this, we could allow hierarchical address spaces, so very similar
> address spaces (like SMM) would be easier to set up.

That would actually be really messy. :)

The regular and SMM address spaces are not hierarchical.  As soon as you
put a PCI resource underneath SMRAM---which is exactly what happens for
legacy VRAM at 0xa0000---they can be completely different.  Note that
QEMU can map legacy VRAM as a KVM memslot when using the VGA 320x200x256
color mode (this mapping is not correct from the VGA point of view, but
it cannot be changed in QEMU without breaking migration).

What I do dislike in the API is the 16-bit split of the slots field; but
the alternative of defining KVM_SET_USER_MEMORY_REGION2 and
KVM_GET_DIRTY_LOG2 ioctls is just as sad.  If you prefer it that way, I
can certainly do that.
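
For userspace the split just means ORing the address space into the high
bits of the slot field; roughly (vm_fd, hva and size are placeholders):

	struct kvm_userspace_memory_region mem = {
		.slot = (as_id << 16) | slot_id,   /* as_id 1 = SMM on x86 */
		.flags = KVM_MEM_LOG_DIRTY_PAGES,
		.guest_phys_addr = gpa,
		.memory_size = size,
		.userspace_addr = (unsigned long)hva,
	};

	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mem);

with the same encoding in the slot field passed to KVM_GET_DIRTY_LOG.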

Is it okay for you if I separate the cleanup patches, and then repost
the actual SMM series after the API discussions have settled?  The
cleanups should be easily reviewable, and they make some sense on their
own.  I'm currently autotesting them.

Paolo

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 08/11] KVM: implement multiple address spaces
  2015-05-19 16:19     ` Paolo Bonzini
@ 2015-05-19 18:28       ` Radim Krčmář
  2015-05-20  7:07         ` Paolo Bonzini
  0 siblings, 1 reply; 24+ messages in thread
From: Radim Krčmář @ 2015-05-19 18:28 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Xiao Guangrong, bdas

2015-05-19 18:19+0200, Paolo Bonzini:
> On 19/05/2015 15:32, Radim Krčmář wrote:
> > I'd prefer to decouple address spaces and slots.  KVM_GET_DIRTY_LOG
> > could stay the same if we said that a slot can be in multiple address
> > spaces.
> 
> Actually, my original plan was a third one.  I wanted to have a fixed
> number of address spaces, where CPUs would use only the first ones (0
> for non-x86, 0+1 for x86).  Then I would let userspace pick a "dirty log
> address space" which, in the case of QEMU, would simply use ram_addr_t
> as the address.
> 
> However, that doesn't work because when marking a page dirty you need
> either a (slots, gfn) or a (memslot, relative address in memslot) pair.
>  Given the gfn that the VCPU has faulted on, it would be very expensive
> to find the corresponding gfn in the "dirty log address space".

If I understand your aim correctly, we could go with just one dirty map
in this way:
 - have a dirty bitmap that covers userspace memory of all slots that
   have enabled dirty log
 - store an offset to this bitmap in those slots

It would be a framework that allows userspace to query dirty pages based
on userspace address space.  (Changing slots would be costly, but I
don't think it's done very often.)
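
Something like this, with completely made-up names (dirty_off and
hva_dirty_bitmap do not exist, they are only there to illustrate the idea):

	static void mark_page_dirty_in_slot(struct kvm *kvm,
					    struct kvm_memory_slot *memslot,
					    gfn_t gfn)
	{
		if (memslot && (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES))
			/* one shared bitmap, indexed by userspace address */
			set_bit_le(memslot->dirty_off +
					(gfn - memslot->base_gfn),
				   kvm->hva_dirty_bitmap);
	}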

>                                                                  On the
> contrary, it's very easy if the VCPU can simply query its current memslots.

Yeah, userspace drew the short stick.

> In your proposal, there is the problem of where to do the overlap check.
>  You cannot do it in the slots because it messes up userspace seriously.
> 
>  QEMU for example wants to use as few slots as possible and merges
> consecutive slots; but what is mergeable in one address space need not
> be mergeable in another.  And if you do it in the address spaces, you
> have not solved the problem of multiple dirty bitmaps pointing to the
> same userspace address.

The worst case is the same (two totally different mappings), but some
cases could get by with fewer slots.  With hierarchy (more below), we
could even allow overlapping over a large slot.

I didn't want to solve the "multiple bitmaps for same userspace" because
KVM_GET_DIRTY_LOG doesn't care about userspace memory space dirtiness
(though it's probably what we use it for), but only about the guest
address space;  KVM_GET_DIRTY_LOG just says which page was
modified in a slot :/
(If we wanted dirty bitmap for userspace memory, it would be better to
 use the solution at the top.)

Fewer slots just means fewer worries ...

> > (Well, we could do that even now, by searching for slots that
> > differ only in address space id and pointing them to the same dirty bitmap.
> > This even makes sense, which is a sign of lacking design :)
> 
> I thought about this, but ultimately it sounded like premature
> optimization (and also might not even work due to the aforementioned
> problem with merging of adjacent slots).
> 
> A possible optimization is to set a flag when no bits are set in the
> dirty bitmap, and skip the iteration.  This is the common case for the
> SMM memory space for example.  But it can be done later, and is an
> independent work.

True.  (I'm not a huge fan of optimizations through exceptions.)

> Keeping slots separate for different address spaces also makes the most
> sense in QEMU, because each address space has a separate MemoryListener
> that only knows about one address space.  There is one log_sync callback
> per address space and no code has to know about all address spaces.

Ah, I would have thought this is done per slot, since there was no
concept of address space in KVM before this patch :)

> > The main drawback is that forcing decoupling on existing IOCTLs would be
> > really awkward ... we'd need to have new API for address spaces;
> > there are two basic operations on an address space:
> >   add and remove slot (needs: slot id, address space id)
> > which would give roughly the same functionality as this patch.
> > (To maximize compatibility, there could be a default address space and a
> >  slot flag that doesn't automatically add it there.)
> > On top of this, we could allow hierarchical address spaces, so very similar
> > address spaces (like SMM) would be easier to set up.
> 
> That would actually be really messy. :)
> 
> The regular and SMM address spaces are not hierarchical.  As soon as you
> put a PCI resource underneath SMRAM---which is exactly what happens for
> legacy VRAM at 0xa0000---they can be completely different.  Note that
> QEMU can map legacy VRAM as a KVM memslot when using the VGA 320x200x256
> color mode (this mapping is not correct from the VGA point of view, but
> it cannot be changed in QEMU without breaking migration).

How is a PCI resource under SMRAM accessed?
I thought that outside SMM, PCI resource under SMRAM is working
normally, but it will be overshadowed, and made inaccessible, in SMM.

I'm not sure if we mean the same hierarchy.  I meant hierarchy in the
sense that one address space is considered before the other.
(Maybe layers would be a better word.)
SMM address space could have just one slot and be above regular, we'd
then decide how to handle overlapping.

> What I do dislike in the API is the 16-bit split of the slots field; but
> the alternative of defining KVM_SET_USER_MEMORY_REGION2 and
> KVM_GET_DIRTY_LOG2 ioctls is just as sad.  If you prefer it that way, I
> can certainly do that.

No, what we have now is much better.

I'd like to know why a taken route is better than the one I'd prefer,
which probably makes me look like I have a problem with everything,
but your solution is neat.
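
Just to spell out the split for anyone reading along: userspace encodes
the address space id in the upper half of the slot field, roughly like
this (as_id, slot_id, gpa, size, hva and vm_fd are placeholders):

    struct kvm_userspace_memory_region mem = {
            /* address space id in the high 16 bits, slot id in the low 16 */
            .slot            = (as_id << 16) | slot_id,
            .flags           = KVM_MEM_LOG_DIRTY_PAGES,
            .guest_phys_addr = gpa,
            .memory_size     = size,
            .userspace_addr  = (__u64)(unsigned long)hva,
    };

    ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mem);

The same encoding goes into the slot field of struct kvm_dirty_log for
KVM_GET_DIRTY_LOG.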

> Is it okay for you if I separate the cleanup patches, and then repost
> the actual SMM series after the API discussions have settled?  The
> cleanups should be easily reviewable, and they make some sense on their
> own.  I'm currently autotesting them.

Yes, it would be much cleaner.  (I wasn't able to apply these patches,
so I just skimmed them; a proper review will take time.)

Thanks.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 08/11] KVM: implement multiple address spaces
  2015-05-19 18:28       ` Radim Krčmář
@ 2015-05-20  7:07         ` Paolo Bonzini
  2015-05-20 15:46           ` Radim Krčmář
  0 siblings, 1 reply; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-20  7:07 UTC (permalink / raw)
  To: Radim Krčmář; +Cc: linux-kernel, kvm, Xiao Guangrong, bdas



On 19/05/2015 20:28, Radim Krčmář wrote:
> If I understand your aim correctly, we could go with just one dirty map
> in this way:
>  - have a dirty bitmap that covers userspace memory of all slots that
>    have enabled dirty log
>  - store an offset to this bitmap in those slots
> 
> It would be a framework that allows userspace to query dirty pages based
> on userspace address space.  (Changing slots would be costly, but I
> don't think it's done very often.)

Yes, that would work.  However, it would be a significant change to the
KVM dirty bitmap model.  Right now it is GPA-indexed, and it would
become HVA-indexed.

The difference arises if the same memory is aliased at different parts
of the address space.  Aliasing can happen if the 0xA0000 VRAM is a
window inside a larger framebuffer, for example; or it can happen across
separate address spaces.  You cannot rule out that *some* userspace is
relying on the properties of a GPA-indexed dirty bitmap.  Certainly not
QEMU, but we try to be userspace-agnostic.

>>                                                                  On the
>> contrary, it's very easy if the VCPU can simply query its current memslots.
> 
> Yeah, userspace drew the short stick.

Userspace is the slow path, so it makes sense that it did. :)

> I didn't want to solve the "multiple bitmaps for same userspace" because
> KVM_GET_DIRTY_LOG doesn't care about userspace memory space dirtiness
> (though it's probably what we use it for), but only about the guest
> address space;  KVM_GET_DIRTY_LOG just says which page was
> modified in a slot :/
> (If we wanted dirty bitmap for userspace memory, it would be better to
>  use the solution at the top.)

Userspace implements the dirty bitmap for userspace memory as the OR of the
KVM dirty bitmap with its own dirty bitmap.  There's no particular
advantage to a GPA-indexed dirty bitmap; as you noticed, GPA and HVA
indexing can be equally fast because GPA->HVA mapping is just an add.
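
(The "add" is essentially what __gfn_to_hva_memslot() does:

    static inline unsigned long
    __gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
    {
            return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE;
    }

i.e. a single offset computation per access.)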

It's just that the dirty bitmap has always been GPA-indexed, and it's
too late to change it.

>> A possible optimization is to set a flag when no bits are set in the
>> dirty bitmap, and skip the iteration.  This is the common case for the
>> SMM memory space for example.  But it can be done later, and is an
>> independent work.
> 
> True.  (I'm not a huge fan of optimizations through exceptions.)

It depends on how often the exception kicks in... :)

>> Keeping slots separate for different address spaces also makes the most
>> sense in QEMU, because each address space has a separate MemoryListener
>> that only knows about one address space.  There is one log_sync callback
>> per address space and no code has to know about all address spaces.
> 
> Ah, I would have thought this is done per slot, when there was no
> concept of address space in KVM before this patch :)

Right; the callback is _called_ per slot, but _registered_ per address
space.  Remember that QEMU does have address spaces.

>> The regular and SMM address spaces are not hierarchical.  As soon as you
>> put a PCI resource underneath SMRAM---which is exactly what happens for
>> legacy VRAM at 0xa0000---they can be completely different.  Note that
>> QEMU can map legacy VRAM as a KVM memslot when using the VGA 320x200x256
>> color mode (this mapping is not correct from the VGA point of view, but
>> it cannot be changed in QEMU without breaking migration).
> 
> How is a PCI resource under SMRAM accessed?
> I thought that outside SMM, PCI resource under SMRAM is working
> normally, but it will be overshadowed, and made inaccessible, in SMM.

Yes, it is.  (There is some chipset magic to make instruction fetches
retrieve SMRAM and data fetches retrieve PCI resources.  I guess you
could use execute-only EPT permissions, but needless to say, we don't care).

> I'm not sure if we mean the same hierarchy.  I meant hierarchy in the
> sense that one address space is considered before the other.
> (Maybe layers would be a better word.)
> SMM address space could have just one slot and be above regular, we'd
> then decide how to handle overlapping.

Ah, now I understand.  That would be doable.

But as they say, "All programming is an exercise in caching."  In this
case, the caching is done by userspace.

QEMU implements the SMM address space exactly by overlaying SMRAM over
normal memory:

   /* Outer container */
   memory_region_init(&smm_as_root, OBJECT(kvm_state),
                      "mem-container-smm", ~0ull);
   memory_region_set_enabled(&smm_as_root, true);

   /* Normal memory with priority 0 */
   memory_region_init_alias(&smm_as_mem, OBJECT(kvm_state), "mem-smm",
                            get_system_memory(), 0, ~0ull);
   memory_region_add_subregion_overlap(&smm_as_root, 0, &smm_as_mem, 0);
   memory_region_set_enabled(&smm_as_mem, true);

   /* SMRAM with priority 10 */
   memory_region_add_subregion_overlap(&smm_as_root, 0, smram, 10);
   memory_region_set_enabled(smram, true);

The caching consists simply in resolving the overlaps beforehand, thus
giving KVM the complete address space.

Since slots do not change often, the simpler code is not worth the
potentially more expensive KVM_SET_USER_MEMORY_REGION (it _is_ more
expensive, if only because it has to be called twice per slot change).

>> What I do dislike in the API is the 16-bit split of the slots field; but
>> the alternative of defining KVM_SET_USER_MEMORY_REGION2 and
>> KVM_GET_DIRTY_LOG2 ioctls is just as sad.  If you prefer it that way, I
>> can certainly do that.
> 
> No, what we have now is much better. I'd like to know why a taken
> route is better than the one I'd prefer, which probably makes me look
> like I have a problem with everything, but your solution is neat.

No problem, it's good because otherwise I take too much advantage of,
well, being able to commit to kvm.git. :)

(Also, it's not entirely my solution.  My solution was v1 and the
userspace patches showed that it was a mess and a dead end...  Compared
to v1, this one requires much more care in the MMU.  The same gfn can
have two completely different mappings and you have to use the right one
when e.g. accessing the rmap.  But that's pretty much the only downside).

Paolo

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 06/11] KVM: x86: pass kvm_mmu_page to gfn_to_rmap
  2015-05-18 13:48 ` [PATCH 06/11] KVM: x86: pass kvm_mmu_page to gfn_to_rmap Paolo Bonzini
@ 2015-05-20  8:30   ` Xiao Guangrong
  2015-05-20  9:07     ` Paolo Bonzini
  0 siblings, 1 reply; 24+ messages in thread
From: Xiao Guangrong @ 2015-05-20  8:30 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: rkrcmar, bdas



On 05/18/2015 09:48 PM, Paolo Bonzini wrote:
> This is always available, and we can use the role to look up the right
> memslots array.
>

How about passing the role instead of the sp, so that it can be used
when no sp is available?

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 00/11] KVM: multiple address spaces (for SMM)
  2015-05-18 13:48 [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Paolo Bonzini
                   ` (10 preceding siblings ...)
  2015-05-18 13:48 ` [PATCH 11/11] KVM: x86: add SMM to the MMU role, support SMRAM address space Paolo Bonzini
@ 2015-05-20  8:31 ` Christian Borntraeger
  2015-05-20  8:58   ` Paolo Bonzini
  11 siblings, 1 reply; 24+ messages in thread
From: Christian Borntraeger @ 2015-05-20  8:31 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm
  Cc: Xiao Guangrong, rkrcmar, bdas, linux-s390

Am 18.05.2015 um 15:48 schrieb Paolo Bonzini:
> This implements Avi's suggestion of having two separate kvm_memslots for
> regular and SMM operation, corresponding to different address spaces.
> 
> All in all, the surgery is limited even though there are a few preparatory
> patches that touch all architectures.
> 
> The amount of added code for the vcpu-specific versions of kvm_read_guest
> and kvm_write_guest is smaller, and duplication is limited to a couple of
> functions.  Even the rmap parts, which scared me a lot when the first
> version OOPSed on me, :) are actually very easy.
> 
> Patches 1-6 are preparatory cleanups that can be applied separately,
> while the others will be posted in v2 of the SMM patches.
> 
> Patches 7-8 add the new functions (this time in virt/kvm/kvm_main.c).
> Architectures can then define a function kvm_arch_vcpu_memslots_id that
> returns the active address space id for a given VCPU.  The address space
> ID must be passed to KVM_SET_USER_MEMORY_REGION and KVM_GET_DIRTY_LOG,
> using the high 16 bits of the slot id.
> 
> Patch 9 then does VCPU-specific accesses in x86, and patch 10 loops
> over the SMM slots as well on memslot iterations.
> 
> Patch 11 then introduces the SMRAM address space, which is very simple
> after all the legwork.
> 
> Thanks for the reviews,


So in essence this allows having each vcpu in a separate address space if
necessary. These address spaces might overlap or be identical and allow
permutations by having different memslots for each address space.
And we can have several address spaces per vcpu. Correct?

This might be useful for kvm on s390 (e.g. we did the ucontrol thing that
also has one guest address space per vcpu). I need to have a look at that.

Christian


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 11/11] KVM: x86: add SMM to the MMU role, support SMRAM address space
  2015-05-18 13:48 ` [PATCH 11/11] KVM: x86: add SMM to the MMU role, support SMRAM address space Paolo Bonzini
@ 2015-05-20  8:34   ` Xiao Guangrong
  2015-05-20  8:57     ` Paolo Bonzini
  0 siblings, 1 reply; 24+ messages in thread
From: Xiao Guangrong @ 2015-05-20  8:34 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: rkrcmar, bdas



On 05/18/2015 09:48 PM, Paolo Bonzini wrote:

> @@ -5473,6 +5473,7 @@ void kvm_set_hflags(struct kvm_vcpu *vcpu, unsigned emul_flags)
>   	}
>
>   	vcpu->arch.hflags = emul_flags;
> +	kvm_mmu_reset_context(vcpu);

reset root table only if SMM flag is changed?

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 11/11] KVM: x86: add SMM to the MMU role, support SMRAM address space
  2015-05-20  8:34   ` Xiao Guangrong
@ 2015-05-20  8:57     ` Paolo Bonzini
  0 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-20  8:57 UTC (permalink / raw)
  To: Xiao Guangrong, linux-kernel, kvm; +Cc: rkrcmar, bdas



On 20/05/2015 10:34, Xiao Guangrong wrote:
> 
> 
>> @@ -5473,6 +5473,7 @@ void kvm_set_hflags(struct kvm_vcpu *vcpu,
>> unsigned emul_flags)
>>       }
>>
>>       vcpu->arch.hflags = emul_flags;
>> +    kvm_mmu_reset_context(vcpu);
> 
> reset root table only if SMM flag is changed?

Sure.
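
Something along these lines, I suppose (untested sketch, assuming the
HF_SMM_MASK hflags bit from the series):

    bool smm_changed = (vcpu->arch.hflags ^ emul_flags) & HF_SMM_MASK;

    vcpu->arch.hflags = emul_flags;
    if (smm_changed)
            kvm_mmu_reset_context(vcpu);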

Paolo

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC PATCH 00/11] KVM: multiple address spaces (for SMM)
  2015-05-20  8:31 ` [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Christian Borntraeger
@ 2015-05-20  8:58   ` Paolo Bonzini
  0 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-20  8:58 UTC (permalink / raw)
  To: Christian Borntraeger, linux-kernel, kvm
  Cc: Xiao Guangrong, rkrcmar, bdas, linux-s390



On 20/05/2015 10:31, Christian Borntraeger wrote:
> 
> So in essence this allows having each vcpu in a separate address space if
> necessary. These address spaces might overlap or be identical and allow
> permutations by having different memslots for each address space.
> And we can have several address spaces per vcpu. Correct?

Yes, exactly.

I've also posted QEMU patches that may give more insight on how this is
used.

Paolo

> This might be useful for kvm on s390 (e.g. we did the ucontrol thing that
> also has one guest address space per vcpu). I need to have a look at that.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 06/11] KVM: x86: pass kvm_mmu_page to gfn_to_rmap
  2015-05-20  8:30   ` Xiao Guangrong
@ 2015-05-20  9:07     ` Paolo Bonzini
  0 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-20  9:07 UTC (permalink / raw)
  To: Xiao Guangrong, linux-kernel, kvm; +Cc: rkrcmar, bdas



On 20/05/2015 10:30, Xiao Guangrong wrote:
>> This is always available, and we can use the role to look up the right
>> memslots array.
> 
> How about passing the role instead of the sp, so that it can be used
> when no sp is available?

I can surely do that instead, but I'm not sure why one would have a role
and not an sp.

If you do not have an sp, you probably should instead use
kvm_vcpu_gfn_to_memslot and __gfn_to_rmap.  See for example
rmap_write_protect.
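
For the sp-based case, the lookup boils down to using the role to pick
the right memslots array; roughly (helper names may differ from what is
in the branch):

    static unsigned long *gfn_to_rmap(struct kvm *kvm, gfn_t gfn,
                                      struct kvm_mmu_page *sp)
    {
            struct kvm_memslots *slots;
            struct kvm_memory_slot *slot;

            /* sp->role tells us whether this is an SMM mapping or not */
            slots = kvm_memslots_for_spte_role(kvm, sp->role);
            slot = __gfn_to_memslot(slots, gfn);
            return __gfn_to_rmap(gfn, sp->role.level, slot);
    }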

I pushed the work to an "smm" branch of kvm.git, so that you can see the
result.

Paolo

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 08/11] KVM: implement multiple address spaces
  2015-05-20  7:07         ` Paolo Bonzini
@ 2015-05-20 15:46           ` Radim Krčmář
  2015-05-20 18:17             ` Paolo Bonzini
  0 siblings, 1 reply; 24+ messages in thread
From: Radim Krčmář @ 2015-05-20 15:46 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: linux-kernel, kvm, Xiao Guangrong, bdas

2015-05-20 09:07+0200, Paolo Bonzini:
> On 19/05/2015 20:28, Radim Krčmář wrote:
>>> The regular and SMM address spaces are not hierarchical.  As soon as you
>>> put a PCI resource underneath SMRAM---which is exactly what happens for
>>> legacy VRAM at 0xa0000---they can be completely different.  Note that
>>> QEMU can map legacy VRAM as a KVM memslot when using the VGA 320x200x256
>>> color mode (this mapping is not correct from the VGA point of view, but
>>> it cannot be changed in QEMU without breaking migration).
>>
>> How is a PCI resource under SMRAM accessed?
>> I thought that outside SMM, PCI resource under SMRAM is working
>> normally, but it will be overshadowed, and made inaccessible, in SMM.
>
> Yes, it is.  (There is some chipset magic to make instruction fetches
> retrieve SMRAM and data fetches retrieve PCI resources.  I guess you
> could use execute-only EPT permissions, but needless to say, we don't care).

Interesting, so that part of SMRAM is going to be useless for SMM data?
(Even worse, SMM will read and write to the PCI resource?)

>> I'm not sure if we mean the same hierarchy.  I meant hierarchy in the
>> sense that one address space is considered before the other.
>> (Maybe layers would be a better word.)
>> SMM address space could have just one slot and be above regular, we'd
>> then decide how to handle overlapping.
>
> Ah, now I understand.  That would be doable.
>
> But as they say, "All programming is an exercise in caching."  In this
> case, the caching is done by userspace.

(It's not caching if we wanted a different result ;])

> QEMU implements the SMM address space exactly by overlaying SMRAM over
> normal memory:
| [...]
> The caching consists simply in resolving the overlaps beforehand, thus
> giving KVM the complete address space.
>
> Since slots do not change often, the simpler code is not worth the
> potentially more expensive KVM_SET_USER_MEMORY_REGION (it _is_ more
> expensive, if only because it has to be called twice per slot change).

I am a bit worried about the explosion that would happen if we wanted,
for example, per-VCPU address spaces;  SMM would double their amount.

My main issue (orthogonal to layering) is that we don't allow a way to
let userspace tell us that some slots in different address spaces are the
same slot.  We're losing information that could be useful in the future
(I can only think of fewer slot queries for the dirty log now).

What I like about your solution is that it fits existing code really
well, is easily modified if needs change, and that it already exists.
All my ideas would require more code in kernel, which really doesn't
seem to be worth the benefits it would bring to the SMM use case ...

I'm ok with this approach,

Thanks.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 08/11] KVM: implement multiple address spaces
  2015-05-20 15:46           ` Radim Krčmář
@ 2015-05-20 18:17             ` Paolo Bonzini
  0 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2015-05-20 18:17 UTC (permalink / raw)
  To: Radim Krčmář; +Cc: linux-kernel, kvm, Xiao Guangrong, bdas



On 20/05/2015 17:46, Radim Krčmář wrote:
> I am a bit worried about the explosion that would happen if we wanted,
> for example, per-VCPU address spaces

Those would be very expensive.  If we were to implement relocatable APIC
base, we would have to do it in a different way than with memslots.

> My main issue (orthogonal to layering) is that we don't allow a way to
> let userspace tell us that some slots in different address spaces are the
> same slot.  We're losing information that could be useful in the future
> (I can only think of fewer slot queries for the dirty log now).

You're right.  On the other hand, I think the ship has sailed the moment
the dirty log was GPA-indexed.

> What I like about your solution is that it fits existing code really
> well, is easily modified if needs change, and that it already exists.

Yes, it does fit existing code really well.

Thanks for the discussion!

Paolo

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2015-05-20 18:17 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-18 13:48 [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Paolo Bonzini
2015-05-18 13:48 ` [PATCH 01/11] KVM: introduce kvm_alloc/free_memslots Paolo Bonzini
2015-05-18 13:48 ` [PATCH 02/11] KVM: use kvm_memslots whenever possible Paolo Bonzini
2015-05-18 13:48 ` [PATCH 03/11] KVM: const-ify uses of struct kvm_userspace_memory_region Paolo Bonzini
2015-05-18 13:48 ` [PATCH 04/11] KVM: add memslots argument to kvm_arch_memslots_updated Paolo Bonzini
2015-05-18 13:48 ` [PATCH 05/11] KVM: add "new" argument to kvm_arch_commit_memory_region Paolo Bonzini
2015-05-18 13:48 ` [PATCH 06/11] KVM: x86: pass kvm_mmu_page to gfn_to_rmap Paolo Bonzini
2015-05-20  8:30   ` Xiao Guangrong
2015-05-20  9:07     ` Paolo Bonzini
2015-05-18 13:48 ` [PATCH 07/11] KVM: add vcpu-specific functions to read/write/translate GFNs Paolo Bonzini
2015-05-18 13:48 ` [PATCH 08/11] KVM: implement multiple address spaces Paolo Bonzini
2015-05-19 13:32   ` Radim Krčmář
2015-05-19 16:19     ` Paolo Bonzini
2015-05-19 18:28       ` Radim Krčmář
2015-05-20  7:07         ` Paolo Bonzini
2015-05-20 15:46           ` Radim Krčmář
2015-05-20 18:17             ` Paolo Bonzini
2015-05-18 13:48 ` [PATCH 09/11] KVM: x86: use vcpu-specific functions to read/write/translate GFNs Paolo Bonzini
2015-05-18 13:48 ` [PATCH 10/11] KVM: x86: work on all available address spaces Paolo Bonzini
2015-05-18 13:48 ` [PATCH 11/11] KVM: x86: add SMM to the MMU role, support SMRAM address space Paolo Bonzini
2015-05-20  8:34   ` Xiao Guangrong
2015-05-20  8:57     ` Paolo Bonzini
2015-05-20  8:31 ` [RFC PATCH 00/11] KVM: multiple address spaces (for SMM) Christian Borntraeger
2015-05-20  8:58   ` Paolo Bonzini
