* [PATCH v1 0/4] Migrate non-migrated pages of a SVM.
From: Ram Pai @ 2020-05-31  2:27 UTC
  To: kvm-ppc, linuxppc-dev
  Cc: ldufour, linuxram, cclaudio, bharata, aneesh.kumar, sukadev,
	bauerman, david

This patch series migrates the non-migrated pages of an SVM.
This is required when the UV calls H_SVM_INIT_DONE, and
when a memory slot is hotplugged to a Secure VM.

Laurent Dufour (1):
  KVM: PPC: Book3S HV: migrate hot plugged memory

Ram Pai (3):
  KVM: PPC: Book3S HV: Fix function definition in book3s_hv_uvmem.c
  KVM: PPC: Book3S HV: track shared GFNs of secure VMs
  KVM: PPC: Book3S HV: migrate remaining normal-GFNs to secure-GFNs in
    H_SVM_INIT_DONE

 Documentation/powerpc/ultravisor.rst        |   2 +
 arch/powerpc/include/asm/kvm_book3s_uvmem.h |  10 +-
 arch/powerpc/kvm/book3s_64_mmu_radix.c      |   2 +-
 arch/powerpc/kvm/book3s_hv.c                |  13 +-
 arch/powerpc/kvm/book3s_hv_uvmem.c          | 348 +++++++++++++++++++++-------
 5 files changed, 284 insertions(+), 91 deletions(-)

-- 
1.8.3.1


* [PATCH v1 1/4] KVM: PPC: Book3S HV: Fix function definition in book3s_hv_uvmem.c
From: Ram Pai @ 2020-05-31  2:27 UTC
  To: kvm-ppc, linuxppc-dev
  Cc: ldufour, linuxram, cclaudio, bharata, aneesh.kumar, sukadev,
	bauerman, david

Without this fix, git gets confused. With the return type on a line of
its own, git's hunk-header heuristic attributes changes in this file to
the wrong function; the hunk headers below show the symptom, crediting
every hunk to kvmppc_uvmem_get_page().  Weird, but true.

Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Bharata B Rao <bharata@linux.ibm.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Claudio Carvalho <cclaudio@linux.ibm.com>
Cc: kvm-ppc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
---
 arch/powerpc/kvm/book3s_hv_uvmem.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 79b1202..ea4a1f1 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -368,8 +368,7 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
  * Alloc a PFN from private device memory pool and copy page from normal
  * memory to secure memory using UV_PAGE_IN uvcall.
  */
-static int
-kvmppc_svm_page_in(struct vm_area_struct *vma, unsigned long start,
+static int kvmppc_svm_page_in(struct vm_area_struct *vma, unsigned long start,
 		   unsigned long end, unsigned long gpa, struct kvm *kvm,
 		   unsigned long page_shift, bool *downgrade)
 {
@@ -436,8 +435,8 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
  * In the former case, uses dev_pagemap_ops.migrate_to_ram handler
  * to unmap the device page from QEMU's page tables.
  */
-static unsigned long
-kvmppc_share_page(struct kvm *kvm, unsigned long gpa, unsigned long page_shift)
+static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
+		unsigned long page_shift)
 {
 
 	int ret = H_PARAMETER;
@@ -486,9 +485,9 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
  * H_PAGE_IN_SHARED flag makes the page shared which means that the same
  * memory in is visible from both UV and HV.
  */
-unsigned long
-kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
-		     unsigned long flags, unsigned long page_shift)
+unsigned long kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
+		unsigned long flags,
+		unsigned long page_shift)
 {
 	bool downgrade = false;
 	unsigned long start, end;
@@ -545,10 +544,10 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
  * Provision a new page on HV side and copy over the contents
  * from secure memory using UV_PAGE_OUT uvcall.
  */
-static int
-kvmppc_svm_page_out(struct vm_area_struct *vma, unsigned long start,
-		    unsigned long end, unsigned long page_shift,
-		    struct kvm *kvm, unsigned long gpa)
+static int kvmppc_svm_page_out(struct vm_area_struct *vma,
+		unsigned long start,
+		unsigned long end, unsigned long page_shift,
+		struct kvm *kvm, unsigned long gpa)
 {
 	unsigned long src_pfn, dst_pfn = 0;
 	struct migrate_vma mig;
-- 
1.8.3.1


* [PATCH v1 2/4] KVM: PPC: Book3S HV: track shared GFNs of secure VMs
From: Ram Pai @ 2020-05-31  2:27 UTC
  To: kvm-ppc, linuxppc-dev
  Cc: ldufour, linuxram, cclaudio, bharata, aneesh.kumar, sukadev,
	bauerman, david

During the life of an SVM, its GFNs can transition from secure to
shared state and vice-versa. Since the kernel does not track GFNs that
are shared, it is not possible to disambiguate a shared GFN from a GFN
whose PFN has not yet been migrated to a device PFN.

The ability to identify a shared GFN is needed to skip migrating its PFN
to a device PFN. This functionality is leveraged in a subsequent patch.

Add the ability to identify the state of a GFN.
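
For reference, a minimal self-contained sketch of the encoding this
patch introduces (the macros match the patch; the classifier helper is
purely illustrative and is not part of the patch):

	#define KVMPPC_UVMEM_PFN	(1UL << 63)
	#define KVMPPC_UVMEM_SHARED	(1UL << 62)

	enum gfn_state { GFN_NORMAL, GFN_SHARED, GFN_SECURE };

	/* Classify one pfns[] entry of a kvmppc_uvmem_slot. */
	static enum gfn_state gfn_state(unsigned long entry)
	{
		if (entry & KVMPPC_UVMEM_PFN)	/* device PFN in low bits */
			return GFN_SECURE;
		if (entry & KVMPPC_UVMEM_SHARED)	/* no PFN value stored */
			return GFN_SHARED;
		return GFN_NORMAL;
	}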

Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Bharata B Rao <bharata@linux.ibm.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Claudio Carvalho <cclaudio@linux.ibm.com>
Cc: kvm-ppc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
---
 arch/powerpc/include/asm/kvm_book3s_uvmem.h |   6 +-
 arch/powerpc/kvm/book3s_64_mmu_radix.c      |   2 +-
 arch/powerpc/kvm/book3s_hv.c                |   2 +-
 arch/powerpc/kvm/book3s_hv_uvmem.c          | 115 ++++++++++++++++++++++++++--
 4 files changed, 113 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_uvmem.h b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
index 5a9834e..f0c5708 100644
--- a/arch/powerpc/include/asm/kvm_book3s_uvmem.h
+++ b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
@@ -21,7 +21,8 @@ unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
 int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn);
 unsigned long kvmppc_h_svm_init_abort(struct kvm *kvm);
 void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
-			     struct kvm *kvm, bool skip_page_out);
+			     struct kvm *kvm, bool skip_page_out,
+			     bool purge_gfn);
 #else
 static inline int kvmppc_uvmem_init(void)
 {
@@ -75,6 +76,7 @@ static inline int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
 
 static inline void
 kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
-			struct kvm *kvm, bool skip_page_out) { }
+			struct kvm *kvm, bool skip_page_out,
+			bool purge_gfn) { }
 #endif /* CONFIG_PPC_UV */
 #endif /* __ASM_KVM_BOOK3S_UVMEM_H__ */
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 803940d..3448459 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -1100,7 +1100,7 @@ void kvmppc_radix_flush_memslot(struct kvm *kvm,
 	unsigned int shift;
 
 	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)
-		kvmppc_uvmem_drop_pages(memslot, kvm, true);
+		kvmppc_uvmem_drop_pages(memslot, kvm, true, false);
 
 	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE)
 		return;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 103d13e..4c62bfe 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -5467,7 +5467,7 @@ static int kvmhv_svm_off(struct kvm *kvm)
 			continue;
 
 		kvm_for_each_memslot(memslot, slots) {
-			kvmppc_uvmem_drop_pages(memslot, kvm, true);
+			kvmppc_uvmem_drop_pages(memslot, kvm, true, true);
 			uv_unregister_mem_slot(kvm->arch.lpid, memslot->id);
 		}
 	}
diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index ea4a1f1..2ef1e03 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -99,14 +99,56 @@
 static DEFINE_SPINLOCK(kvmppc_uvmem_bitmap_lock);
 
 #define KVMPPC_UVMEM_PFN	(1UL << 63)
+#define KVMPPC_UVMEM_SHARED	(1UL << 62)
+#define KVMPPC_UVMEM_FLAG_MASK	(KVMPPC_UVMEM_PFN | KVMPPC_UVMEM_SHARED)
+#define KVMPPC_UVMEM_PFN_MASK	(~KVMPPC_UVMEM_FLAG_MASK)
 
 struct kvmppc_uvmem_slot {
 	struct list_head list;
 	unsigned long nr_pfns;
 	unsigned long base_pfn;
+	/*
+	 * pfns array has an entry for each GFN of the memory slot.
+	 *
+	 * The GFN can be in one of the following states.
+	 *
+	 * (a) Secure - The GFN is secure. Only Ultravisor can access it.
+	 * (b) Shared - The GFN is shared. Both Hypervisor and Ultravisor
+	 *		can access it.
+	 * (c) Normal - The GFN is normal. Only the Hypervisor can access it.
+	 *
+	 * A secure GFN is associated with a device PFN. Its pfn[] entry has
+	 * the KVMPPC_UVMEM_PFN flag set, the KVMPPC_UVMEM_SHARED flag unset,
+	 * and stores the value of the device PFN.
+	 *
+	 * A shared GFN is associated with a memory PFN. Its pfn[] entry has
+	 * the KVMPPC_UVMEM_SHARED flag set, the KVMPPC_UVMEM_PFN flag unset,
+	 * and no PFN value stored.
+	 *
+	 * A normal GFN is not associated with a memory PFN. Its pfn[] entry
+	 * has both the KVMPPC_UVMEM_SHARED and KVMPPC_UVMEM_PFN flags unset,
+	 * and no PFN value stored.
+	 *
+	 * Any other combination of values in pfn[] leads to undefined
+	 * behavior.
+	 *
+	 * Life cycle of a GFN:
+	 *
+	 * -------------------------------------------------------------
+	 * | Current |   Share   |  Unshare  | SVM abort/ | slot flush |
+	 * | state   |           |           | terminate  |            |
+	 * |         |           |           |            |            |
+	 * -------------------------------------------------------------
+	 * |         |           |           |            |            |
+	 * | Secure  |  Shared   |  Secure   |  Normal    |  Secure    |
+	 * |         |           |           |            |            |
+	 * | Shared  |  Shared   |  Secure   |  Normal    |  Shared    |
+	 * |         |           |           |            |            |
+	 * | Normal  |  Shared   |  Secure   |  Normal    |  Normal    |
+	 * -------------------------------------------------------------
+	 */
 	unsigned long *pfns;
 };
-
 struct kvmppc_uvmem_page_pvt {
 	struct kvm *kvm;
 	unsigned long gpa;
@@ -175,7 +217,12 @@ static void kvmppc_uvmem_pfn_remove(unsigned long gfn, struct kvm *kvm)
 
 	list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) {
 		if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) {
-			p->pfns[gfn - p->base_pfn] = 0;
+			/*
+			 * Reset everything, but keep the KVMPPC_UVMEM_SHARED
+			 * flag intact.  A gfn continues to be shared or
+			 * unshared, with or without an associated device pfn.
+			 */
+			p->pfns[gfn - p->base_pfn] &= KVMPPC_UVMEM_SHARED;
 			return;
 		}
 	}
@@ -193,7 +240,7 @@ static bool kvmppc_gfn_is_uvmem_pfn(unsigned long gfn, struct kvm *kvm,
 			if (p->pfns[index] & KVMPPC_UVMEM_PFN) {
 				if (uvmem_pfn)
 					*uvmem_pfn = p->pfns[index] &
-						     ~KVMPPC_UVMEM_PFN;
+						     KVMPPC_UVMEM_PFN_MASK;
 				return true;
 			} else
 				return false;
@@ -202,6 +249,38 @@ static bool kvmppc_gfn_is_uvmem_pfn(unsigned long gfn, struct kvm *kvm,
 	return false;
 }
 
+static void kvmppc_gfn_uvmem_shared(unsigned long gfn, struct kvm *kvm,
+		bool set)
+{
+	struct kvmppc_uvmem_slot *p;
+
+	list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) {
+		if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) {
+			unsigned long index = gfn - p->base_pfn;
+
+			if (set)
+				p->pfns[index] |= KVMPPC_UVMEM_SHARED;
+			else
+				p->pfns[index] &= ~KVMPPC_UVMEM_SHARED;
+			return;
+		}
+	}
+}
+
+bool kvmppc_gfn_is_uvmem_shared(unsigned long gfn, struct kvm *kvm)
+{
+	struct kvmppc_uvmem_slot *p;
+
+	list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) {
+		if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) {
+			unsigned long index = gfn - p->base_pfn;
+
+			return (p->pfns[index] & KVMPPC_UVMEM_SHARED);
+		}
+	}
+	return false;
+}
+
 unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
 {
 	struct kvm_memslots *slots;
@@ -256,9 +335,13 @@ unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
  * is HV side fault on these pages. Next we *get* these pages, forcing
  * fault on them, do fault time migration to replace the device PTEs in
  * QEMU page table with normal PTEs from newly allocated pages.
+ *
+ * If @purge_gfn is set, clean up any information related to each of
+ * the GFNs associated with this memory slot.
  */
 void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
-			     struct kvm *kvm, bool skip_page_out)
+			     struct kvm *kvm, bool skip_page_out,
+			     bool purge_gfn)
 {
 	int i;
 	struct kvmppc_uvmem_page_pvt *pvt;
@@ -269,11 +352,22 @@ void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
 		struct page *uvmem_page;
 
 		mutex_lock(&kvm->arch.uvmem_lock);
+
+		if (purge_gfn) {
+			/*
+			 * Clean up the shared status of the GFN here.
+			 * Any device PFN associated with the GFN shall
+			 * be cleaned up later, in kvmppc_uvmem_page_free()
+			 * when the device PFN is actually disassociated
+			 * from the GFN.
+			 */
+			kvmppc_gfn_uvmem_shared(gfn, kvm, false);
+		}
+
 		if (!kvmppc_gfn_is_uvmem_pfn(gfn, kvm, &uvmem_pfn)) {
 			mutex_unlock(&kvm->arch.uvmem_lock);
 			continue;
 		}
-
 		uvmem_page = pfn_to_page(uvmem_pfn);
 		pvt = uvmem_page->zone_device_data;
 		pvt->skip_page_out = skip_page_out;
@@ -304,7 +398,7 @@ unsigned long kvmppc_h_svm_init_abort(struct kvm *kvm)
 	srcu_idx = srcu_read_lock(&kvm->srcu);
 
 	kvm_for_each_memslot(memslot, kvm_memslots(kvm))
-		kvmppc_uvmem_drop_pages(memslot, kvm, false);
+		kvmppc_uvmem_drop_pages(memslot, kvm, false, true);
 
 	srcu_read_unlock(&kvm->srcu, srcu_idx);
 
@@ -470,8 +564,11 @@ static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
 		goto retry;
 	}
 
-	if (!uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0, page_shift))
+	if (!uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0,
+				page_shift)) {
+		kvmppc_gfn_uvmem_shared(gfn, kvm, true);
 		ret = H_SUCCESS;
+	}
 	kvm_release_pfn_clean(pfn);
 	mutex_unlock(&kvm->arch.uvmem_lock);
 out:
@@ -527,8 +624,10 @@ unsigned long kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
 		goto out_unlock;
 
 	if (!kvmppc_svm_page_in(vma, start, end, gpa, kvm, page_shift,
-				&downgrade))
+				&downgrade)) {
+		kvmppc_gfn_uvmem_shared(gfn, kvm, false);
 		ret = H_SUCCESS;
+	}
 out_unlock:
 	mutex_unlock(&kvm->arch.uvmem_lock);
 out:
-- 
1.8.3.1


* [PATCH v1 3/4] KVM: PPC: Book3S HV: migrate remaining normal-GFNs to secure-GFNs in H_SVM_INIT_DONE
From: Ram Pai @ 2020-05-31  2:27 UTC
  To: kvm-ppc, linuxppc-dev
  Cc: ldufour, linuxram, cclaudio, bharata, aneesh.kumar, sukadev,
	bauerman, david

H_SVM_INIT_DONE incorrectly assumes that the Ultravisor has explicitly
called H_SVM_PAGE_IN for all secure pages. GFNs that were never paged
in continue to be normal GFNs associated with normal PFNs, when in fact
they should be secure GFNs associated with device PFNs.

Move all the PFNs associated with the SVM's GFNs to device PFNs in
H_SVM_INIT_DONE. Skip the GFNs that are already paged in or shared
through H_SVM_PAGE_IN.
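
In condensed form, the H_SVM_INIT_DONE flow added by this patch looks
as follows (simplified from the diff below; locking and error handling
as in the patch):

	srcu_idx = srcu_read_lock(&kvm->srcu);
	kvm_for_each_memslot(memslot, kvm_memslots(kvm)) {
		/* migrate every GFN that is still normal and not shared */
		if (uv_migrate_mem_slot(kvm, memslot)) {
			ret = H_STATE;
			goto out;
		}
	}
	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE;
out:
	srcu_read_unlock(&kvm->srcu, srcu_idx);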

Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Bharata B Rao <bharata@linux.ibm.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Claudio Carvalho <cclaudio@linux.ibm.com>
Cc: kvm-ppc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
---
 Documentation/powerpc/ultravisor.rst |   2 +
 arch/powerpc/kvm/book3s_hv_uvmem.c   | 219 ++++++++++++++++++++++++-----------
 2 files changed, 154 insertions(+), 67 deletions(-)

diff --git a/Documentation/powerpc/ultravisor.rst b/Documentation/powerpc/ultravisor.rst
index 363736d..3bc8957 100644
--- a/Documentation/powerpc/ultravisor.rst
+++ b/Documentation/powerpc/ultravisor.rst
@@ -933,6 +933,8 @@ Return values
 	* H_UNSUPPORTED		if called from the wrong context (e.g.
 				from an SVM or before an H_SVM_INIT_START
 				hypercall).
+	* H_STATE		if the hypervisor could not successfully
+				transition the VM to a Secure VM.
 
 Description
 ~~~~~~~~~~~
diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 2ef1e03..36dda1d 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -318,14 +318,149 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
 	return ret;
 }
 
+static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm);
+
+/*
+ * Alloc a PFN from private device memory pool. If @pagein is true,
+ * copy page from normal memory to secure memory using UV_PAGE_IN uvcall.
+ */
+static int kvmppc_svm_migrate_page(struct vm_area_struct *vma,
+		unsigned long start,
+		unsigned long end, unsigned long gpa, struct kvm *kvm,
+		unsigned long page_shift,
+		bool pagein)
+{
+	unsigned long src_pfn, dst_pfn = 0;
+	struct migrate_vma mig;
+	struct page *dpage;
+	struct page *spage;
+	unsigned long pfn;
+	int ret = 0;
+
+	memset(&mig, 0, sizeof(mig));
+	mig.vma = vma;
+	mig.start = start;
+	mig.end = end;
+	mig.src = &src_pfn;
+	mig.dst = &dst_pfn;
+
+	ret = migrate_vma_setup(&mig);
+	if (ret)
+		return ret;
+
+	if (!(*mig.src & MIGRATE_PFN_MIGRATE)) {
+		ret = -1;
+		goto out_finalize;
+	}
+
+	dpage = kvmppc_uvmem_get_page(gpa, kvm);
+	if (!dpage) {
+		ret = -1;
+		goto out_finalize;
+	}
+
+	if (pagein) {
+		pfn = *mig.src >> MIGRATE_PFN_SHIFT;
+		spage = migrate_pfn_to_page(*mig.src);
+		if (spage) {
+			ret = uv_page_in(kvm->arch.lpid, pfn << page_shift,
+					gpa, 0, page_shift);
+			if (ret)
+				goto out_finalize;
+		}
+	}
+
+	*mig.dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
+	migrate_vma_pages(&mig);
+out_finalize:
+	migrate_vma_finalize(&mig);
+	return ret;
+}
+
+static int uv_migrate_mem_slot(struct kvm *kvm,
+		const struct kvm_memory_slot *memslot)
+{
+	unsigned long gfn = memslot->base_gfn;
+	unsigned long end;
+	bool downgrade = false;
+	struct vm_area_struct *vma;
+	int i, ret = 0;
+	unsigned long start = gfn_to_hva(kvm, gfn);
+
+	if (kvm_is_error_hva(start))
+		return H_STATE;
+
+	end = start + (memslot->npages << PAGE_SHIFT);
+
+	down_write(&kvm->mm->mmap_sem);
+
+	mutex_lock(&kvm->arch.uvmem_lock);
+	vma = find_vma_intersection(kvm->mm, start, end);
+	if (!vma || vma->vm_start > start || vma->vm_end < end) {
+		ret = H_STATE;
+		goto out_unlock;
+	}
+
+	ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
+			  MADV_UNMERGEABLE, &vma->vm_flags);
+	downgrade_write(&kvm->mm->mmap_sem);
+	downgrade = true;
+	if (ret) {
+		ret = H_STATE;
+		goto out_unlock;
+	}
+
+	for (i = 0; i < memslot->npages; i++, ++gfn) {
+		/* skip paged-in pages and shared pages */
+		if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL) ||
+			kvmppc_gfn_is_uvmem_shared(gfn, kvm))
+			continue;
+
+		start = gfn_to_hva(kvm, gfn);
+		end = start + (1UL << PAGE_SHIFT);
+		ret = kvmppc_svm_migrate_page(vma, start, end,
+			(gfn << PAGE_SHIFT), kvm, PAGE_SHIFT, false);
+
+		if (ret)
+			goto out_unlock;
+	}
+
+out_unlock:
+	mutex_unlock(&kvm->arch.uvmem_lock);
+	if (downgrade)
+		up_read(&kvm->mm->mmap_sem);
+	else
+		up_write(&kvm->mm->mmap_sem);
+	return ret;
+}
+
 unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
 {
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int srcu_idx;
+	long ret = H_SUCCESS;
+
 	if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START))
 		return H_UNSUPPORTED;
 
+	/* migrate any unmoved normal pfns to device pfns */
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots) {
+		ret = uv_migrate_mem_slot(kvm, memslot);
+		if (ret) {
+			ret = H_STATE;
+			goto out;
+		}
+	}
+
 	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE;
 	pr_info("LPID %d went secure\n", kvm->arch.lpid);
-	return H_SUCCESS;
+
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	return ret;
 }
 
 /*
@@ -459,68 +594,6 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
 }
 
 /*
- * Alloc a PFN from private device memory pool and copy page from normal
- * memory to secure memory using UV_PAGE_IN uvcall.
- */
-static int kvmppc_svm_page_in(struct vm_area_struct *vma, unsigned long start,
-		   unsigned long end, unsigned long gpa, struct kvm *kvm,
-		   unsigned long page_shift, bool *downgrade)
-{
-	unsigned long src_pfn, dst_pfn = 0;
-	struct migrate_vma mig;
-	struct page *spage;
-	unsigned long pfn;
-	struct page *dpage;
-	int ret = 0;
-
-	memset(&mig, 0, sizeof(mig));
-	mig.vma = vma;
-	mig.start = start;
-	mig.end = end;
-	mig.src = &src_pfn;
-	mig.dst = &dst_pfn;
-
-	/*
-	 * We come here with mmap_sem write lock held just for
-	 * ksm_madvise(), otherwise we only need read mmap_sem.
-	 * Hence downgrade to read lock once ksm_madvise() is done.
-	 */
-	ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
-			  MADV_UNMERGEABLE, &vma->vm_flags);
-	downgrade_write(&kvm->mm->mmap_sem);
-	*downgrade = true;
-	if (ret)
-		return ret;
-
-	ret = migrate_vma_setup(&mig);
-	if (ret)
-		return ret;
-
-	if (!(*mig.src & MIGRATE_PFN_MIGRATE)) {
-		ret = -1;
-		goto out_finalize;
-	}
-
-	dpage = kvmppc_uvmem_get_page(gpa, kvm);
-	if (!dpage) {
-		ret = -1;
-		goto out_finalize;
-	}
-
-	pfn = *mig.src >> MIGRATE_PFN_SHIFT;
-	spage = migrate_pfn_to_page(*mig.src);
-	if (spage)
-		uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0,
-			   page_shift);
-
-	*mig.dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
-	migrate_vma_pages(&mig);
-out_finalize:
-	migrate_vma_finalize(&mig);
-	return ret;
-}
-
-/*
  * Shares the page with HV, thus making it a normal page.
  *
  * - If the page is already secure, then provision a new page and share
@@ -623,11 +696,23 @@ unsigned long kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
 	if (!vma || vma->vm_start > start || vma->vm_end < end)
 		goto out_unlock;
 
-	if (!kvmppc_svm_page_in(vma, start, end, gpa, kvm, page_shift,
-				&downgrade)) {
-		kvmppc_gfn_uvmem_shared(gfn, kvm, false);
-		ret = H_SUCCESS;
+	ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
+			  MADV_UNMERGEABLE, &vma->vm_flags);
+	downgrade_write(&kvm->mm->mmap_sem);
+	downgrade = true;
+	if (ret) {
+		ret = H_PARAMETER;
+		goto out_unlock;
 	}
+
+	ret = H_PARAMETER;
+	if (kvmppc_svm_migrate_page(vma, start, end, gpa, kvm, page_shift,
+				true))
+		goto out_unlock;
+
+	kvmppc_gfn_uvmem_shared(gfn, kvm, false);
+	ret = H_SUCCESS;
+
 out_unlock:
 	mutex_unlock(&kvm->arch.uvmem_lock);
 out:
-- 
1.8.3.1


* [PATCH v1 4/4] KVM: PPC: Book3S HV: migrate hot plugged memory
From: Ram Pai @ 2020-05-31  2:27 UTC
  To: kvm-ppc, linuxppc-dev
  Cc: ldufour, linuxram, cclaudio, bharata, aneesh.kumar, sukadev,
	bauerman, david

From: Laurent Dufour <ldufour@linux.ibm.com>

When a memory slot is hot plugged to an SVM, the GFNs associated with
that memory slot automatically default to secure GFNs. Hence migrate
the PFNs associated with these GFNs to device PFNs.

uv_migrate_mem_slot() is called to achieve that. It will not call
UV_PAGE_IN since this request is ignored by the Ultravisor.
NOTE: the Ultravisor does not trust any page content provided by
the Hypervisor once the VM turns secure.
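
Condensed, the KVM_MR_CREATE path after this patch (simplified from the
book3s_hv.c diff below):

	case KVM_MR_CREATE:
		if (kvmppc_uvmem_slot_init(kvm, new))
			return;
		if (uv_register_mem_slot(kvm->arch.lpid,
					 new->base_gfn << PAGE_SHIFT,
					 new->npages * PAGE_SIZE,
					 0, new->id))
			return;
		/* pagein is false internally: no UV_PAGE_IN is issued */
		uv_migrate_mem_slot(kvm, new);
		break;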

Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Bharata B Rao <bharata@linux.ibm.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Claudio Carvalho <cclaudio@linux.ibm.com>
Cc: kvm-ppc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
	(fixed merge conflicts. Modified the commit message)
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
---
 arch/powerpc/include/asm/kvm_book3s_uvmem.h |  4 ++++
 arch/powerpc/kvm/book3s_hv.c                | 11 +++++++----
 arch/powerpc/kvm/book3s_hv_uvmem.c          |  3 +--
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_uvmem.h b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
index f0c5708..2ec2e5afb 100644
--- a/arch/powerpc/include/asm/kvm_book3s_uvmem.h
+++ b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
@@ -23,6 +23,7 @@ unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
 void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
 			     struct kvm *kvm, bool skip_page_out,
 			     bool purge_gfn);
+int uv_migrate_mem_slot(struct kvm *kvm, const struct kvm_memory_slot *memslot);
 #else
 static inline int kvmppc_uvmem_init(void)
 {
@@ -78,5 +79,8 @@ static inline int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
 kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
 			struct kvm *kvm, bool skip_page_out,
 			bool purge_gfn) { }
+
+static inline int uv_migrate_mem_slot(struct kvm *kvm,
+		const struct kvm_memory_slot *memslot)
+{
+	return 0;
+}
 #endif /* CONFIG_PPC_UV */
 #endif /* __ASM_KVM_BOOK3S_UVMEM_H__ */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 4c62bfe..604d062 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4516,13 +4516,16 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
 	case KVM_MR_CREATE:
 		if (kvmppc_uvmem_slot_init(kvm, new))
 			return;
-		uv_register_mem_slot(kvm->arch.lpid,
-				     new->base_gfn << PAGE_SHIFT,
-				     new->npages * PAGE_SIZE,
-				     0, new->id);
+		if (uv_register_mem_slot(kvm->arch.lpid,
+					 new->base_gfn << PAGE_SHIFT,
+					 new->npages * PAGE_SIZE,
+					 0, new->id))
+			return;
+		uv_migrate_mem_slot(kvm, new);
 		break;
 	case KVM_MR_DELETE:
 		uv_unregister_mem_slot(kvm->arch.lpid, old->id);
+		kvmppc_uvmem_drop_pages(old, kvm, true, true);
 		kvmppc_uvmem_slot_free(kvm, old);
 		break;
 	default:
diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 36dda1d..1fa5f2a 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -377,8 +377,7 @@ static int kvmppc_svm_migrate_page(struct vm_area_struct *vma,
 	return ret;
 }
 
-static int uv_migrate_mem_slot(struct kvm *kvm,
-		const struct kvm_memory_slot *memslot)
+int uv_migrate_mem_slot(struct kvm *kvm, const struct kvm_memory_slot *memslot)
 {
 	unsigned long gfn = memslot->base_gfn;
 	unsigned long end;
-- 
1.8.3.1


* Re: [PATCH v1 2/4] KVM: PPC: Book3S HV: track shared GFNs of secure VMs
From: kbuild test robot @ 2020-06-01  4:12 UTC
  To: Ram Pai, kvm-ppc, linuxppc-dev
  Cc: ldufour, kbuild-all, bharata, aneesh.kumar, sukadev, bauerman


Hi Ram,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on powerpc/next]
[also build test WARNING on v5.7 next-20200529]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest using the '--base' option to specify
the base tree in git format-patch; please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Ram-Pai/Migrate-non-migrated-pages-of-a-SVM/20200601-034649
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-allyesconfig (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=powerpc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot <lkp@intel.com>

All warnings (new ones prefixed by >>, old ones prefixed by <<):

arch/powerpc/kvm/book3s_hv_uvmem.c:158:6: warning: no previous prototype for 'kvmppc_uvmem_available' [-Wmissing-prototypes]
158 | bool kvmppc_uvmem_available(void)
|      ^~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/kvm/book3s_hv_uvmem.c:167:5: warning: no previous prototype for 'kvmppc_uvmem_slot_init' [-Wmissing-prototypes]
167 | int kvmppc_uvmem_slot_init(struct kvm *kvm, const struct kvm_memory_slot *slot)
|     ^~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/kvm/book3s_hv_uvmem.c:192:6: warning: no previous prototype for 'kvmppc_uvmem_slot_free' [-Wmissing-prototypes]
192 | void kvmppc_uvmem_slot_free(struct kvm *kvm, const struct kvm_memory_slot *slot)
|      ^~~~~~~~~~~~~~~~~~~~~~
>> arch/powerpc/kvm/book3s_hv_uvmem.c:279:6: warning: no previous prototype for 'kvmppc_gfn_is_uvmem_shared' [-Wmissing-prototypes]
279 | bool kvmppc_gfn_is_uvmem_shared(unsigned long gfn, struct kvm *kvm)
|      ^~~~~~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/kvm/book3s_hv_uvmem.c:293:15: warning: no previous prototype for 'kvmppc_h_svm_init_start' [-Wmissing-prototypes]
293 | unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
|               ^~~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/kvm/book3s_hv_uvmem.c:335:15: warning: no previous prototype for 'kvmppc_h_svm_init_done' [-Wmissing-prototypes]
335 | unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
|               ^~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/kvm/book3s_hv_uvmem.c:356:6: warning: no previous prototype for 'kvmppc_uvmem_drop_pages' [-Wmissing-prototypes]
356 | void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
|      ^~~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/kvm/book3s_hv_uvmem.c:397:15: warning: no previous prototype for 'kvmppc_h_svm_init_abort' [-Wmissing-prototypes]
397 | unsigned long kvmppc_h_svm_init_abort(struct kvm *kvm)
|               ^~~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/kvm/book3s_hv_uvmem.c:599:15: warning: no previous prototype for 'kvmppc_h_svm_page_in' [-Wmissing-prototypes]
599 | unsigned long kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
|               ^~~~~~~~~~~~~~~~~~~~
arch/powerpc/kvm/book3s_hv_uvmem.c:784:1: warning: no previous prototype for 'kvmppc_h_svm_page_out' [-Wmissing-prototypes]
784 | kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
| ^~~~~~~~~~~~~~~~~~~~~
arch/powerpc/kvm/book3s_hv_uvmem.c:822:5: warning: no previous prototype for 'kvmppc_send_page_to_uv' [-Wmissing-prototypes]
822 | int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
|     ^~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/kvm/book3s_hv_uvmem.c:867:5: warning: no previous prototype for 'kvmppc_uvmem_init' [-Wmissing-prototypes]
867 | int kvmppc_uvmem_init(void)
|     ^~~~~~~~~~~~~~~~~
arch/powerpc/kvm/book3s_hv_uvmem.c:922:6: warning: no previous prototype for 'kvmppc_uvmem_free' [-Wmissing-prototypes]
922 | void kvmppc_uvmem_free(void)
|      ^~~~~~~~~~~~~~~~~

vim +/kvmppc_gfn_is_uvmem_shared +279 arch/powerpc/kvm/book3s_hv_uvmem.c

   157	
 > 158	bool kvmppc_uvmem_available(void)
   159	{
   160		/*
   161		 * If kvmppc_uvmem_bitmap != NULL, then there is an ultravisor
   162		 * and our data structures have been initialized successfully.
   163		 */
   164		return !!kvmppc_uvmem_bitmap;
   165	}
   166	
   167	int kvmppc_uvmem_slot_init(struct kvm *kvm, const struct kvm_memory_slot *slot)
   168	{
   169		struct kvmppc_uvmem_slot *p;
   170	
   171		p = kzalloc(sizeof(*p), GFP_KERNEL);
   172		if (!p)
   173			return -ENOMEM;
   174		p->pfns = vzalloc(array_size(slot->npages, sizeof(*p->pfns)));
   175		if (!p->pfns) {
   176			kfree(p);
   177			return -ENOMEM;
   178		}
   179		p->nr_pfns = slot->npages;
   180		p->base_pfn = slot->base_gfn;
   181	
   182		mutex_lock(&kvm->arch.uvmem_lock);
   183		list_add(&p->list, &kvm->arch.uvmem_pfns);
   184		mutex_unlock(&kvm->arch.uvmem_lock);
   185	
   186		return 0;
   187	}
   188	
   189	/*
   190	 * All device PFNs are already released by the time we come here.
   191	 */
   192	void kvmppc_uvmem_slot_free(struct kvm *kvm, const struct kvm_memory_slot *slot)
   193	{
   194		struct kvmppc_uvmem_slot *p, *next;
   195	
   196		mutex_lock(&kvm->arch.uvmem_lock);
   197		list_for_each_entry_safe(p, next, &kvm->arch.uvmem_pfns, list) {
   198			if (p->base_pfn == slot->base_gfn) {
   199				vfree(p->pfns);
   200				list_del(&p->list);
   201				kfree(p);
   202				break;
   203			}
   204		}
   205		mutex_unlock(&kvm->arch.uvmem_lock);
   206	}
   207	
   208	static void kvmppc_uvmem_pfn_insert(unsigned long gfn, unsigned long uvmem_pfn,
   209					    struct kvm *kvm)
   210	{
   211		struct kvmppc_uvmem_slot *p;
   212	
   213		list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) {
   214			if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) {
   215				unsigned long index = gfn - p->base_pfn;
   216	
   217				p->pfns[index] = uvmem_pfn | KVMPPC_UVMEM_PFN;
   218				return;
   219			}
   220		}
   221	}
   222	
   223	static void kvmppc_uvmem_pfn_remove(unsigned long gfn, struct kvm *kvm)
   224	{
   225		struct kvmppc_uvmem_slot *p;
   226	
   227		list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) {
   228			if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) {
   229				/*
   230				 * Reset everything, but keep the KVMPPC_UVMEM_SHARED
   231				 * flag intact.  A gfn continues to be shared or
   232				 * unshared, with or without an associated device pfn.
   233				 */
   234				p->pfns[gfn - p->base_pfn] &= KVMPPC_UVMEM_SHARED;
   235				return;
   236			}
   237		}
   238	}
   239	
   240	static bool kvmppc_gfn_is_uvmem_pfn(unsigned long gfn, struct kvm *kvm,
   241					    unsigned long *uvmem_pfn)
   242	{
   243		struct kvmppc_uvmem_slot *p;
   244	
   245		list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) {
   246			if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) {
   247				unsigned long index = gfn - p->base_pfn;
   248	
   249				if (p->pfns[index] & KVMPPC_UVMEM_PFN) {
   250					if (uvmem_pfn)
   251						*uvmem_pfn = p->pfns[index] &
   252							     KVMPPC_UVMEM_PFN_MASK;
   253					return true;
   254				} else
   255					return false;
   256			}
   257		}
   258		return false;
   259	}
   260	
   261	static void kvmppc_gfn_uvmem_shared(unsigned long gfn, struct kvm *kvm,
   262			bool set)
   263	{
   264		struct kvmppc_uvmem_slot *p;
   265	
   266		list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) {
   267			if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) {
   268				unsigned long index = gfn - p->base_pfn;
   269	
   270				if (set)
   271					p->pfns[index] |= KVMPPC_UVMEM_SHARED;
   272				else
   273					p->pfns[index] &= ~KVMPPC_UVMEM_SHARED;
   274				return;
   275			}
   276		}
   277	}
   278	
 > 279	bool kvmppc_gfn_is_uvmem_shared(unsigned long gfn, struct kvm *kvm)
   280	{
   281		struct kvmppc_uvmem_slot *p;
   282	
   283		list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) {
   284			if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) {
   285				unsigned long index = gfn - p->base_pfn;
   286	
   287				return (p->pfns[index] & KVMPPC_UVMEM_SHARED);
   288			}
   289		}
   290		return false;
   291	}
   292	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 66040 bytes --]

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 4/4] KVM: PPC: Book3S HV: migrate hot plugged memory
  2020-05-31  2:27   ` Ram Pai
@ 2020-06-01  4:35     ` kbuild test robot
  -1 siblings, 0 replies; 39+ messages in thread
From: kbuild test robot @ 2020-06-01  4:35 UTC (permalink / raw)
  To: Ram Pai, kvm-ppc, linuxppc-dev
  Cc: ldufour, kbuild-all, bharata, aneesh.kumar, sukadev, bauerman

[-- Attachment #1: Type: text/plain, Size: 2082 bytes --]

Hi Ram,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on powerpc/next]
[also build test WARNING on v5.7]
[cannot apply to next-20200529]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Ram-Pai/Migrate-non-migrated-pages-of-a-SVM/20200601-034649
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-rhel-kconfig (attached as .config)
compiler: powerpc64le-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=powerpc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot <lkp@intel.com>

All warnings (new ones prefixed by >>, old ones prefixed by <<):

arch/powerpc/kvm/book3s_hv.c:3516:5: warning: no previous prototype for 'kvmhv_p9_guest_entry' [-Wmissing-prototypes]
3516 | int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
|     ^~~~~~~~~~~~~~~~~~~~
In file included from arch/powerpc/kvm/book3s_hv.c:75:
>> arch/powerpc/include/asm/kvm_book3s_uvmem.h:89:12: warning: 'uv_migrate_mem_slot' used but never defined
89 | static int uv_migrate_mem_slot(struct kvm *kvm,
|            ^~~~~~~~~~~~~~~~~~~

vim +/uv_migrate_mem_slot +89 arch/powerpc/include/asm/kvm_book3s_uvmem.h

    83	
    84	static inline void
    85	kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
    86				struct kvm *kvm, bool skip_page_out,
    87				bool purge_gfn) { }
    88	
  > 89	static int uv_migrate_mem_slot(struct kvm *kvm,

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 15580 bytes --]

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 4/4] KVM: PPC: Book3S HV: migrate hot plugged memory
  2020-05-31  2:27   ` Ram Pai
@ 2020-06-01  4:45     ` kbuild test robot
  -1 siblings, 0 replies; 39+ messages in thread
From: kbuild test robot @ 2020-06-01  4:45 UTC (permalink / raw)
  To: Ram Pai, kvm-ppc, linuxppc-dev
  Cc: ldufour, kbuild-all, bharata, aneesh.kumar, sukadev, bauerman

[-- Attachment #1: Type: text/plain, Size: 2155 bytes --]

Hi Ram,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on powerpc/next]
[also build test ERROR on v5.7]
[cannot apply to next-20200529]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Ram-Pai/Migrate-non-migrated-pages-of-a-SVM/20200601-034649
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-defconfig (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=powerpc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>, old ones prefixed by <<):

arch/powerpc/kvm/book3s_64_mmu_radix.c:345:6: error: no previous prototype for 'kvmppc_radix_set_pte_at' [-Werror=missing-prototypes]
345 | void kvmppc_radix_set_pte_at(struct kvm *kvm, unsigned long addr,
|      ^~~~~~~~~~~~~~~~~~~~~~~
In file included from arch/powerpc/kvm/book3s_64_mmu_radix.c:23:
>> arch/powerpc/include/asm/kvm_book3s_uvmem.h:89:12: error: 'uv_migrate_mem_slot' declared 'static' but never defined [-Werror=unused-function]
89 | static int uv_migrate_mem_slot(struct kvm *kvm,
|            ^~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors

vim +89 arch/powerpc/include/asm/kvm_book3s_uvmem.h

    83	
    84	static inline void
    85	kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
    86				struct kvm *kvm, bool skip_page_out,
    87				bool purge_gfn) { }
    88	
  > 89	static int uv_migrate_mem_slot(struct kvm *kvm,

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 26147 bytes --]

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 3/4] KVM: PPC: Book3S HV: migrate remaining normal-GFNs to secure-GFNs in H_SVM_INIT_DONE
  2020-05-31  2:27   ` Ram Pai
@ 2020-06-01 11:55     ` Bharata B Rao
  -1 siblings, 0 replies; 39+ messages in thread
From: Bharata B Rao @ 2020-06-01 11:55 UTC (permalink / raw)
  To: Ram Pai
  Cc: ldufour, cclaudio, kvm-ppc, aneesh.kumar, sukadev, linuxppc-dev,
	bauerman, david

On Sat, May 30, 2020 at 07:27:50PM -0700, Ram Pai wrote:
> H_SVM_INIT_DONE incorrectly assumes that the Ultravisor has explicitly
> called H_SVM_PAGE_IN for all secure pages.

I don't think that is quite true. HV doesn't assume anything about
secure pages by itself.

> These GFNs continue to be
> normal GFNs associated with normal PFNs, when in fact these GFNs should
> have been secure GFNs associated with device PFNs.

Transition to secure state is driven by SVM/UV and HV just responds to
hcalls by issuing appropriate uvcalls. SVM/UV is in the best position to
determine the required pages that need to be moved into secure side.
HV just responds to it and tracks such pages as device private pages.

If SVM/UV doesn't get in all the pages to secure side by the time
of H_SVM_INIT_DONE, the remaining pages are just normal (shared or
otherwise) pages as far as HV is concerned.  Why should HV assume that
SVM/UV didn't ask for a few pages and hence push those pages during
H_SVM_INIT_DONE?

I think UV should drive the movement of pages to the secure side, both
for boot-time SVM memory and for hot-plugged memory. HV does the memslot
registration uvcall when new memory is plugged in; UV should explicitly
get the required pages in at that time instead of expecting HV to drive
the same.

> +static int uv_migrate_mem_slot(struct kvm *kvm,
> +		const struct kvm_memory_slot *memslot)
> +{
> +	unsigned long gfn = memslot->base_gfn;
> +	unsigned long end;
> +	bool downgrade = false;
> +	struct vm_area_struct *vma;
> +	int i, ret = 0;
> +	unsigned long start = gfn_to_hva(kvm, gfn);
> +
> +	if (kvm_is_error_hva(start))
> +		return H_STATE;
> +
> +	end = start + (memslot->npages << PAGE_SHIFT);
> +
> +	down_write(&kvm->mm->mmap_sem);
> +
> +	mutex_lock(&kvm->arch.uvmem_lock);
> +	vma = find_vma_intersection(kvm->mm, start, end);
> +	if (!vma || vma->vm_start > start || vma->vm_end < end) {
> +		ret = H_STATE;
> +		goto out_unlock;
> +	}
> +
> +	ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
> +			  MADV_UNMERGEABLE, &vma->vm_flags);
> +	downgrade_write(&kvm->mm->mmap_sem);
> +	downgrade = true;
> +	if (ret) {
> +		ret = H_STATE;
> +		goto out_unlock;
> +	}
> +
> +	for (i = 0; i < memslot->npages; i++, ++gfn) {
> +		/* skip paged-in pages and shared pages */
> +		if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL) ||
> +			kvmppc_gfn_is_uvmem_shared(gfn, kvm))
> +			continue;
> +
> +		start = gfn_to_hva(kvm, gfn);
> +		end = start + (1UL << PAGE_SHIFT);
> +		ret = kvmppc_svm_migrate_page(vma, start, end,
> +			(gfn << PAGE_SHIFT), kvm, PAGE_SHIFT, false);
> +
> +		if (ret)
> +			goto out_unlock;
> +	}

Is there a guarantee that the vma you got for the start address remains
valid for all the addresses till end in a memslot? If not, you should
re-get the vma for the current address in each iteration I suppose.
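
Something along these lines, perhaps (an illustrative, untested sketch
of re-looking up the vma per gfn; the names are the ones from your patch):

	for (i = 0; i < memslot->npages; i++, ++gfn) {
		/* skip paged-in pages and shared pages */
		if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL) ||
		    kvmppc_gfn_is_uvmem_shared(gfn, kvm))
			continue;

		start = gfn_to_hva(kvm, gfn);
		end = start + (1UL << PAGE_SHIFT);

		/* re-get the vma; the one found earlier may not cover this gfn */
		vma = find_vma_intersection(kvm->mm, start, end);
		if (!vma || vma->vm_start > start || vma->vm_end < end) {
			ret = H_STATE;
			goto out_unlock;
		}

		ret = kvmppc_svm_migrate_page(vma, start, end,
			(gfn << PAGE_SHIFT), kvm, PAGE_SHIFT, false);
		if (ret)
			goto out_unlock;
	}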

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 3/4] KVM: PPC: Book3S HV: migrate remaining normal-GFNs to secure-GFNs in H_SVM_INIT_DONE
  2020-06-01 11:55     ` [PATCH v1 3/4] KVM: PPC: Book3S HV: migrate remaining normal-GFNs to secure-GFNs in H_SVM_INIT_D Bharata B Rao
@ 2020-06-01 19:05       ` Ram Pai
  -1 siblings, 0 replies; 39+ messages in thread
From: Ram Pai @ 2020-06-01 19:05 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: ldufour, cclaudio, kvm-ppc, aneesh.kumar, sukadev, linuxppc-dev,
	bauerman, david

On Mon, Jun 01, 2020 at 05:25:18PM +0530, Bharata B Rao wrote:
> On Sat, May 30, 2020 at 07:27:50PM -0700, Ram Pai wrote:
> > H_SVM_INIT_DONE incorrectly assumes that the Ultravisor has explicitly
> > called H_SVM_PAGE_IN for all secure pages.
> 
> I don't think that is quite true. HV doesn't assume anything about
> secure pages by itself.

Yes, currently it does not assume anything about secure pages. But I am
proposing that it should consider all pages (except the shared pages) as
secure pages when H_SVM_INIT_DONE is called.

In other words, HV should treat all pages, except shared pages, as
secure pages once H_SVM_INIT_DONE is called. This includes pages
added subsequently through memory hotplug.

Yes, the Ultravisor can explicitly request the HV to move the pages
individually, but that slows down the transition significantly: it
takes over 20 minutes for an SVM of size 100G.

With this proposed enhancement, the switch completes in a few seconds.

> 
> > These GFNs continue to be
> > normal GFNs associated with normal PFNs, when in fact these GFNs should
> > have been secure GFNs associated with device PFNs.
> 
> Transition to secure state is driven by SVM/UV and HV just responds to
> hcalls by issuing appropriate uvcalls. SVM/UV is in the best position to
> determine the required pages that need to be moved into secure side.
> HV just responds to it and tracks such pages as device private pages.
> 
> If SVM/UV doesn't get in all the pages to secure side by the time
> of H_SVM_INIT_DONE, the remaining pages are just normal (shared or
> otherwise) pages as far as HV is concerned.  Why should HV assume that
> SVM/UV didn't ask for a few pages and hence push those pages during
> H_SVM_INIT_DONE?

By definition, an SVM is a VM backed by secure pages.
Hence all pages (except shared ones) must turn secure when a VM switches to SVM.

UV is interested in only certain pages of the VM, which it will
request explicitly through H_SVM_PAGE_IN. All other pages need not
be paged in through UV_PAGE_IN; they just need to be switched to
device pages.

> 
> I think UV should drive the movement of pages into secure side both
> of boot-time SVM memory and hot-plugged memory. HV does memslot
> registration uvcall when new memory is plugged in, UV should explicitly
> get the required pages in at that time instead of expecting HV to drive
> the same.
> 
> > +static int uv_migrate_mem_slot(struct kvm *kvm,
> > +		const struct kvm_memory_slot *memslot)
> > +{
> > +	unsigned long gfn = memslot->base_gfn;
> > +	unsigned long end;
> > +	bool downgrade = false;
> > +	struct vm_area_struct *vma;
> > +	int i, ret = 0;
> > +	unsigned long start = gfn_to_hva(kvm, gfn);
> > +
> > +	if (kvm_is_error_hva(start))
> > +		return H_STATE;
> > +
> > +	end = start + (memslot->npages << PAGE_SHIFT);
> > +
> > +	down_write(&kvm->mm->mmap_sem);
> > +
> > +	mutex_lock(&kvm->arch.uvmem_lock);
> > +	vma = find_vma_intersection(kvm->mm, start, end);
> > +	if (!vma || vma->vm_start > start || vma->vm_end < end) {
> > +		ret = H_STATE;
> > +		goto out_unlock;
> > +	}
> > +
> > +	ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
> > +			  MADV_UNMERGEABLE, &vma->vm_flags);
> > +	downgrade_write(&kvm->mm->mmap_sem);
> > +	downgrade = true;
> > +	if (ret) {
> > +		ret = H_STATE;
> > +		goto out_unlock;
> > +	}
> > +
> > +	for (i = 0; i < memslot->npages; i++, ++gfn) {
> > +		/* skip paged-in pages and shared pages */
> > +		if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL) ||
> > +			kvmppc_gfn_is_uvmem_shared(gfn, kvm))
> > +			continue;
> > +
> > +		start = gfn_to_hva(kvm, gfn);
> > +		end = start + (1UL << PAGE_SHIFT);
> > +		ret = kvmppc_svm_migrate_page(vma, start, end,
> > +			(gfn << PAGE_SHIFT), kvm, PAGE_SHIFT, false);
> > +
> > +		if (ret)
> > +			goto out_unlock;
> > +	}
> 
> Is there a guarantee that the vma you got for the start address remains
> valid for all the addresses till end in a memslot? If not, you should
> re-get the vma for the current address in each iteration I suppose.


mm->mmap_sem is the semaphore that guards the vma, right? If that
semaphore is held, can the vma change?


Thanks for your comments,
RP

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 3/4] KVM: PPC: Book3S HV: migrate remaining normal-GFNs to secure-GFNs in H_SVM_INIT_D
@ 2020-06-01 19:05       ` Ram Pai
  0 siblings, 0 replies; 39+ messages in thread
From: Ram Pai @ 2020-06-01 19:05 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: ldufour, cclaudio, kvm-ppc, aneesh.kumar, sukadev, linuxppc-dev,
	bauerman, david

On Mon, Jun 01, 2020 at 05:25:18PM +0530, Bharata B Rao wrote:
> On Sat, May 30, 2020 at 07:27:50PM -0700, Ram Pai wrote:
> > H_SVM_INIT_DONE incorrectly assumes that the Ultravisor has explicitly
> > called H_SVM_PAGE_IN for all secure pages.
> 
> I don't think that is quite true. HV doesn't assume anything about
> secure pages by itself.

Yes. Currently, it does not assume anything about secure pages.  But I am
proposing that it should consider all pages (except the shared pages) as
secure pages, when H_SVM_INIT_DONE is called.

In other words, HV should treat all pages; except shared pages, as
secure pages once H_SVM_INIT_DONE is called. And this includes pages
added subsequently through memory hotplug.

Yes. the Ultravisor can explicitly request the HV to move the pages
individually.  But that will slow down the transition too significantly.
It takes above 20min to transition them, for a SVM of size 100G.

With this proposed enhancement, the switch completes in a few seconds.

> 
> > These GFNs continue to be
> > normal GFNs associated with normal PFNs; when infact, these GFNs should
> > have been secure GFNs associated with device PFNs.
> 
> Transition to secure state is driven by SVM/UV and HV just responds to
> hcalls by issuing appropriate uvcalls. SVM/UV is in the best position to
> determine the required pages that need to be moved into secure side.
> HV just responds to it and tracks such pages as device private pages.
> 
> If SVM/UV doesn't get in all the pages to secure side by the time
> of H_SVM_INIT_DONE, the remaining pages are just normal (shared or
> otherwise) pages as far as HV is concerned.  Why should HV assume that
> SVM/UV didn't ask for a few pages and hence push those pages during
> H_SVM_INIT_DONE?

By definition, SVM is a VM backed by secure pages.
Hence all pages(except shared) must turn secure when a VM switches to SVM.

UV is interested in only a certain pages for the VM, which it will
request explicitly through H_SVM_PAGE_IN.  All other pages, need not
be paged-in through UV_PAGE_IN.  They just need to be switched to
device-pages.

> 
> I think UV should drive the movement of pages into secure side both
> of boot-time SVM memory and hot-plugged memory. HV does memslot
> registration uvcall when new memory is plugged in, UV should explicitly
> get the required pages in at that time instead of expecting HV to drive
> the same.
> 
> > +static int uv_migrate_mem_slot(struct kvm *kvm,
> > +		const struct kvm_memory_slot *memslot)
> > +{
> > +	unsigned long gfn = memslot->base_gfn;
> > +	unsigned long end;
> > +	bool downgrade = false;
> > +	struct vm_area_struct *vma;
> > +	int i, ret = 0;
> > +	unsigned long start = gfn_to_hva(kvm, gfn);
> > +
> > +	if (kvm_is_error_hva(start))
> > +		return H_STATE;
> > +
> > +	end = start + (memslot->npages << PAGE_SHIFT);
> > +
> > +	down_write(&kvm->mm->mmap_sem);
> > +
> > +	mutex_lock(&kvm->arch.uvmem_lock);
> > +	vma = find_vma_intersection(kvm->mm, start, end);
> > +	if (!vma || vma->vm_start > start || vma->vm_end < end) {
> > +		ret = H_STATE;
> > +		goto out_unlock;
> > +	}
> > +
> > +	ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
> > +			  MADV_UNMERGEABLE, &vma->vm_flags);
> > +	downgrade_write(&kvm->mm->mmap_sem);
> > +	downgrade = true;
> > +	if (ret) {
> > +		ret = H_STATE;
> > +		goto out_unlock;
> > +	}
> > +
> > +	for (i = 0; i < memslot->npages; i++, ++gfn) {
> > +		/* skip paged-in pages and shared pages */
> > +		if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL) ||
> > +			kvmppc_gfn_is_uvmem_shared(gfn, kvm))
> > +			continue;
> > +
> > +		start = gfn_to_hva(kvm, gfn);
> > +		end = start + (1UL << PAGE_SHIFT);
> > +		ret = kvmppc_svm_migrate_page(vma, start, end,
> > +			(gfn << PAGE_SHIFT), kvm, PAGE_SHIFT, false);
> > +
> > +		if (ret)
> > +			goto out_unlock;
> > +	}
> 
> Is there a guarantee that the vma you got for the start address remains
> valid for all the addresses till end in a memslot? If not, you should
> re-get the vma for the current address in each iteration I suppose.


mm->mmap_sem is the semaphore that guards the vma, right?  If that
semaphore is held, can the vma change?


Thanks for your comments,
RP

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 4/4] KVM: PPC: Book3S HV: migrate hot plugged memory
  2020-05-31  2:27   ` Ram Pai
@ 2020-06-02  8:31     ` Laurent Dufour
  -1 siblings, 0 replies; 39+ messages in thread
From: Laurent Dufour @ 2020-06-02  8:31 UTC (permalink / raw)
  To: Ram Pai, kvm-ppc, linuxppc-dev
  Cc: cclaudio, bharata, aneesh.kumar, sukadev, bauerman, david

On 31/05/2020 at 04:27, Ram Pai wrote:
> From: Laurent Dufour <ldufour@linux.ibm.com>
> 
> When a memory slot is hot plugged to an SVM, GFNs associated with that
> memory slot automatically default to secure GFN. Hence migrate the
> PFNs associated with these GFNs to device-PFNs.
> 
> uv_migrate_mem_slot() is called to achieve that. It will not call
> UV_PAGE_IN since this request is ignored by the Ultravisor.
> NOTE: Ultravisor does not trust any page content provided by
> the Hypervisor, once the VM turns secure.
> 
> Cc: Paul Mackerras <paulus@ozlabs.org>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Bharata B Rao <bharata@linux.ibm.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
> Cc: Laurent Dufour <ldufour@linux.ibm.com>
> Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
> Cc: David Gibson <david@gibson.dropbear.id.au>
> Cc: Claudio Carvalho <cclaudio@linux.ibm.com>
> Cc: kvm-ppc@vger.kernel.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> 	(fixed merge conflicts. Modified the commit message)
> Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
> ---
>   arch/powerpc/include/asm/kvm_book3s_uvmem.h |  4 ++++
>   arch/powerpc/kvm/book3s_hv.c                | 11 +++++++----
>   arch/powerpc/kvm/book3s_hv_uvmem.c          |  3 +--
>   3 files changed, 12 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s_uvmem.h b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> index f0c5708..2ec2e5afb 100644
> --- a/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> +++ b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> @@ -23,6 +23,7 @@ unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
>   void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
>   			     struct kvm *kvm, bool skip_page_out,
>   			     bool purge_gfn);
> +int uv_migrate_mem_slot(struct kvm *kvm, const struct kvm_memory_slot *memslot);
>   #else
>   static inline int kvmppc_uvmem_init(void)
>   {
> @@ -78,5 +79,8 @@ static inline int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
>   kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
>   			struct kvm *kvm, bool skip_page_out,
>   			bool purge_gfn) { }
> +
> +static int uv_migrate_mem_slot(struct kvm *kvm,
> +		const struct kvm_memory_slot *memslot);

That line was not part of the patch I sent to you!


>   #endif /* CONFIG_PPC_UV */
>   #endif /* __ASM_KVM_BOOK3S_UVMEM_H__ */
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 4c62bfe..604d062 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -4516,13 +4516,16 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
>   	case KVM_MR_CREATE:
>   		if (kvmppc_uvmem_slot_init(kvm, new))
>   			return;
> -		uv_register_mem_slot(kvm->arch.lpid,
> -				     new->base_gfn << PAGE_SHIFT,
> -				     new->npages * PAGE_SIZE,
> -				     0, new->id);
> +		if (uv_register_mem_slot(kvm->arch.lpid,
> +					 new->base_gfn << PAGE_SHIFT,
> +					 new->npages * PAGE_SIZE,
> +					 0, new->id))
> +			return;
> +		uv_migrate_mem_slot(kvm, new);
>   		break;
>   	case KVM_MR_DELETE:
>   		uv_unregister_mem_slot(kvm->arch.lpid, old->id);
> +		kvmppc_uvmem_drop_pages(old, kvm, true, true);

Again that line has been changed from the patch I sent to you. The last
'true' argument does not belong here.

Is that series really building?

>   		kvmppc_uvmem_slot_free(kvm, old);
>   		break;
>   	default:
> diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
> index 36dda1d..1fa5f2a 100644
> --- a/arch/powerpc/kvm/book3s_hv_uvmem.c
> +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
> @@ -377,8 +377,7 @@ static int kvmppc_svm_migrate_page(struct vm_area_struct *vma,
>   	return ret;
>   }
>   
> -static int uv_migrate_mem_slot(struct kvm *kvm,
> -		const struct kvm_memory_slot *memslot)
> +int uv_migrate_mem_slot(struct kvm *kvm, const struct kvm_memory_slot *memslot)
>   {
>   	unsigned long gfn = memslot->base_gfn;
>   	unsigned long end;
> 


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 3/4] KVM: PPC: Book3S HV: migrate remaining normal-GFNs to secure-GFNs in H_SVM_INIT_DONE
  2020-06-01 19:05       ` [PATCH v1 3/4] KVM: PPC: Book3S HV: migrate remaining normal-GFNs to secure-GFNs in H_SVM_INIT_D Ram Pai
@ 2020-06-02 10:18         ` Bharata B Rao
  -1 siblings, 0 replies; 39+ messages in thread
From: Bharata B Rao @ 2020-06-02 10:18 UTC (permalink / raw)
  To: Ram Pai
  Cc: rcampbell, ldufour, cclaudio, kvm-ppc, aneesh.kumar, sukadev,
	linuxppc-dev, bauerman, david

On Mon, Jun 01, 2020 at 12:05:35PM -0700, Ram Pai wrote:
> On Mon, Jun 01, 2020 at 05:25:18PM +0530, Bharata B Rao wrote:
> > On Sat, May 30, 2020 at 07:27:50PM -0700, Ram Pai wrote:
> > > H_SVM_INIT_DONE incorrectly assumes that the Ultravisor has explicitly
> > > called H_SVM_PAGE_IN for all secure pages.
> > 
> > I don't think that is quite true. HV doesn't assume anything about
> > secure pages by itself.
> 
> Yes. Currently, it does not assume anything about secure pages.  But I am
> proposing that it should consider all pages (except the shared pages) as
> secure pages, when H_SVM_INIT_DONE is called.

Ok, then maybe also add the proposed changes to the H_SVM_INIT_DONE
documentation.

> 
> In other words, HV should treat all pages, except shared pages, as
> secure pages once H_SVM_INIT_DONE is called. And this includes pages
> added subsequently through memory hotplug.

So after H_SVM_INIT_DONE, if HV touches a secure page for any
reason and gets encrypted contents via page-out, HV drops the
device pfn at that time. So what state would we be in then? We
have completed H_SVM_INIT_DONE, but still have a normal (but encrypted)
page in HV?

> 
> Yes, the Ultravisor can explicitly request the HV to move the pages
> individually.  But that will slow down the transition significantly.
> It takes over 20 minutes to transition them for an SVM of size 100G.
> 
> With this proposed enhancement, the switch completes in a few seconds.

I think many pages during the initial switch and most pages for hotplugged
memory are zero pages, for which we don't issue UV page-in calls anyway.
So the 20 min saving you are observing is purely due to hcall overhead?

How about extending the H_SVM_PAGE_IN interface, or adding a new hcall,
to request multiple pages at once?
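
A hypothetical shape for such a batched call (name and parameters
invented purely for illustration):

	/* page in 'npages' contiguous guest pages with one hcall,
	 * amortizing the per-page hcall/uvcall overhead */
	unsigned long kvmppc_h_svm_page_in_multi(struct kvm *kvm,
			unsigned long gpa, unsigned long flags,
			unsigned long page_shift, unsigned long npages);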

Also, how about requesting bigger page sizes (2M)? Ralph Campbell
had patches that added THP support for migrate_vma_* calls.

> 
> > 
> > > These GFNs continue to be
> > > normal GFNs associated with normal PFNs, when in fact these GFNs should
> > > have been secure GFNs associated with device PFNs.
> > 
> > Transition to secure state is driven by SVM/UV and HV just responds to
> > hcalls by issuing appropriate uvcalls. SVM/UV is in the best position to
> > determine the required pages that need to be moved into secure side.
> > HV just responds to it and tracks such pages as device private pages.
> > 
> > If SVM/UV doesn't get in all the pages to secure side by the time
> > of H_SVM_INIT_DONE, the remaining pages are just normal (shared or
> > otherwise) pages as far as HV is concerned.  Why should HV assume that
> > SVM/UV didn't ask for a few pages and hence push those pages during
> > H_SVM_INIT_DONE?
> 
> > By definition, an SVM is a VM backed by secure pages.
> > Hence all pages (except shared) must turn secure when a VM switches to SVM.
> 
> > UV is interested in only certain pages of the VM, which it will
> > request explicitly through H_SVM_PAGE_IN.  All other pages need not
> be paged-in through UV_PAGE_IN.  They just need to be switched to
> device-pages.
> 
> > 
> > I think UV should drive the movement of pages into the secure side, both
> > for boot-time SVM memory and hot-plugged memory. HV does memslot
> > registration uvcall when new memory is plugged in, UV should explicitly
> > get the required pages in at that time instead of expecting HV to drive
> > the same.
> > 
> > > +static int uv_migrate_mem_slot(struct kvm *kvm,
> > > +		const struct kvm_memory_slot *memslot)
> > > +{
> > > +	unsigned long gfn = memslot->base_gfn;
> > > +	unsigned long end;
> > > +	bool downgrade = false;
> > > +	struct vm_area_struct *vma;
> > > +	int i, ret = 0;
> > > +	unsigned long start = gfn_to_hva(kvm, gfn);
> > > +
> > > +	if (kvm_is_error_hva(start))
> > > +		return H_STATE;
> > > +
> > > +	end = start + (memslot->npages << PAGE_SHIFT);
> > > +
> > > +	down_write(&kvm->mm->mmap_sem);
> > > +
> > > +	mutex_lock(&kvm->arch.uvmem_lock);
> > > +	vma = find_vma_intersection(kvm->mm, start, end);
> > > +	if (!vma || vma->vm_start > start || vma->vm_end < end) {
> > > +		ret = H_STATE;
> > > +		goto out_unlock;
> > > +	}
> > > +
> > > +	ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
> > > +			  MADV_UNMERGEABLE, &vma->vm_flags);
> > > +	downgrade_write(&kvm->mm->mmap_sem);
> > > +	downgrade = true;
> > > +	if (ret) {
> > > +		ret = H_STATE;
> > > +		goto out_unlock;
> > > +	}
> > > +
> > > +	for (i = 0; i < memslot->npages; i++, ++gfn) {
> > > +		/* skip paged-in pages and shared pages */
> > > +		if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL) ||
> > > +			kvmppc_gfn_is_uvmem_shared(gfn, kvm))
> > > +			continue;
> > > +
> > > +		start = gfn_to_hva(kvm, gfn);
> > > +		end = start + (1UL << PAGE_SHIFT);
> > > +		ret = kvmppc_svm_migrate_page(vma, start, end,
> > > +			(gfn << PAGE_SHIFT), kvm, PAGE_SHIFT, false);
> > > +
> > > +		if (ret)
> > > +			goto out_unlock;
> > > +	}
> > 
> > Is there a guarantee that the vma you got for the start address remains
> > valid for all the addresses till end in a memslot? If not, you should
> > re-get the vma for the current address in each iteration I suppose.
> 
> 
> mm->mmap_sem is the semaphore that guards the vma, right?  If that
> semaphore is held, can the vma change?

I am not sure if the vma you obtained would span the entire address range
in the memslot.
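
A minimal, untested sketch of that re-lookup, assuming a memslot can be
backed by more than one vma (adapted from the loop in the quoted hunk):

	for (i = 0; i < memslot->npages; i++, ++gfn) {
		if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL) ||
			kvmppc_gfn_is_uvmem_shared(gfn, kvm))
			continue;

		start = gfn_to_hva(kvm, gfn);
		end = start + (1UL << PAGE_SHIFT);
		/* re-find the vma covering this particular gfn */
		vma = find_vma_intersection(kvm->mm, start, end);
		if (!vma || vma->vm_start > start || vma->vm_end < end) {
			ret = H_STATE;
			goto out_unlock;
		}
		ret = kvmppc_svm_migrate_page(vma, start, end,
			(gfn << PAGE_SHIFT), kvm, PAGE_SHIFT, false);
		if (ret)
			goto out_unlock;
	}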

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 4/4] KVM: PPC: Book3S HV: migrate hot plugged memory
  2020-06-02  8:31     ` Laurent Dufour
@ 2020-06-03 13:25       ` Ram Pai
  -1 siblings, 0 replies; 39+ messages in thread
From: Ram Pai @ 2020-06-03 13:25 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: cclaudio, kvm-ppc, bharata, aneesh.kumar, sukadev, linuxppc-dev,
	bauerman, david

On Tue, Jun 02, 2020 at 10:31:32AM +0200, Laurent Dufour wrote:
> On 31/05/2020 at 04:27, Ram Pai wrote:
> >From: Laurent Dufour <ldufour@linux.ibm.com>
> >
> >When a memory slot is hot plugged to an SVM, GFNs associated with that
> >memory slot automatically default to secure GFN. Hence migrate the
> >PFNs associated with these GFNs to device-PFNs.
> >
> >uv_migrate_mem_slot() is called to achieve that. It will not call
> >UV_PAGE_IN since this request is ignored by the Ultravisor.
> >NOTE: Ultravisor does not trust any page content provided by
> >the Hypervisor, once the VM turns secure.
> >
> >Cc: Paul Mackerras <paulus@ozlabs.org>
> >Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> >Cc: Michael Ellerman <mpe@ellerman.id.au>
> >Cc: Bharata B Rao <bharata@linux.ibm.com>
> >Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> >Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
> >Cc: Laurent Dufour <ldufour@linux.ibm.com>
> >Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
> >Cc: David Gibson <david@gibson.dropbear.id.au>
> >Cc: Claudio Carvalho <cclaudio@linux.ibm.com>
> >Cc: kvm-ppc@vger.kernel.org
> >Cc: linuxppc-dev@lists.ozlabs.org
> >Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> >	(fixed merge conflicts. Modified the commit message)
> >Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
> >---
> >  arch/powerpc/include/asm/kvm_book3s_uvmem.h |  4 ++++
> >  arch/powerpc/kvm/book3s_hv.c                | 11 +++++++----
> >  arch/powerpc/kvm/book3s_hv_uvmem.c          |  3 +--
> >  3 files changed, 12 insertions(+), 6 deletions(-)
> >
> >diff --git a/arch/powerpc/include/asm/kvm_book3s_uvmem.h b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> >index f0c5708..2ec2e5afb 100644
> >--- a/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> >+++ b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> >@@ -23,6 +23,7 @@ unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
> >  void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
> >  			     struct kvm *kvm, bool skip_page_out,
> >  			     bool purge_gfn);
> >+int uv_migrate_mem_slot(struct kvm *kvm, const struct kvm_memory_slot *memslot);
> >  #else
> >  static inline int kvmppc_uvmem_init(void)
> >  {
> >@@ -78,5 +79,8 @@ static inline int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
> >  kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
> >  			struct kvm *kvm, bool skip_page_out,
> >  			bool purge_gfn) { }
> >+
> >+static int uv_migrate_mem_slot(struct kvm *kvm,
> >+		const struct kvm_memory_slot *memslot);
> 
> That line was not part of the patch I sent to you!

Your patch is rebased on top of my patches. This prototype declaration
is for the #ifndef CONFIG_PPC_UV case.
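
For reference, the conventional stub for the !CONFIG_PPC_UV case would
be a static inline with a trivial body rather than a bare static
prototype, e.g. a sketch along these lines:

static inline int uv_migrate_mem_slot(struct kvm *kvm,
		const struct kvm_memory_slot *memslot)
{
	return 0;
}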

> 
> 
> >  #endif /* CONFIG_PPC_UV */
> >  #endif /* __ASM_KVM_BOOK3S_UVMEM_H__ */
> >diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> >index 4c62bfe..604d062 100644
> >--- a/arch/powerpc/kvm/book3s_hv.c
> >+++ b/arch/powerpc/kvm/book3s_hv.c
> >@@ -4516,13 +4516,16 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
> >  	case KVM_MR_CREATE:
> >  		if (kvmppc_uvmem_slot_init(kvm, new))
> >  			return;
> >-		uv_register_mem_slot(kvm->arch.lpid,
> >-				     new->base_gfn << PAGE_SHIFT,
> >-				     new->npages * PAGE_SIZE,
> >-				     0, new->id);
> >+		if (uv_register_mem_slot(kvm->arch.lpid,
> >+					 new->base_gfn << PAGE_SHIFT,
> >+					 new->npages * PAGE_SIZE,
> >+					 0, new->id))
> >+			return;
> >+		uv_migrate_mem_slot(kvm, new);
> >  		break;
> >  	case KVM_MR_DELETE:
> >  		uv_unregister_mem_slot(kvm->arch.lpid, old->id);
> >+		kvmppc_uvmem_drop_pages(old, kvm, true, true);
> 
> Again that line has been changed from the patch I sent to you. The
> last 'true' argument does not belong here.

Yes, I did add another parameter to kvmppc_uvmem_drop_pages() in my
patch series, so I had to adapt your patch to operate on top of mine.
> 
> Is that series really building?

Yes, it built for me.

RP

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 3/4] KVM: PPC: Book3S HV: migrate remaining normal-GFNs to secure-GFNs in H_SVM_INIT_DONE
  2020-06-02 10:18         ` [PATCH v1 3/4] KVM: PPC: Book3S HV: migrate remaining normal-GFNs to secure-GFNs in H_SVM_INIT_D Bharata B Rao
@ 2020-06-03 23:10           ` Ram Pai
  -1 siblings, 0 replies; 39+ messages in thread
From: Ram Pai @ 2020-06-03 23:10 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: rcampbell, ldufour, cclaudio, kvm-ppc, aneesh.kumar, sukadev,
	linuxppc-dev, bauerman, david

On Tue, Jun 02, 2020 at 03:36:39PM +0530, Bharata B Rao wrote:
> On Mon, Jun 01, 2020 at 12:05:35PM -0700, Ram Pai wrote:
> > On Mon, Jun 01, 2020 at 05:25:18PM +0530, Bharata B Rao wrote:
> > > On Sat, May 30, 2020 at 07:27:50PM -0700, Ram Pai wrote:
> > > > H_SVM_INIT_DONE incorrectly assumes that the Ultravisor has explicitly
> > > > called H_SVM_PAGE_IN for all secure pages.
> > > 
> > > I don't think that is quite true. HV doesn't assume anything about
> > > secure pages by itself.
> > 
> > Yes. Currently, it does not assume anything about secure pages.  But I am
> > proposing that it should consider all pages (except the shared pages) as
> > secure pages, when H_SVM_INIT_DONE is called.
> 
> Ok, then maybe also add the proposed changes to the H_SVM_INIT_DONE
> documentation.

ok.

> 
> > 
> > In other words, HV should treat all pages, except shared pages, as
> > secure pages once H_SVM_INIT_DONE is called. And this includes pages
> > added subsequently through memory hotplug.
> 
> So after H_SVM_INIT_DONE, if HV touches a secure page for any
> reason and gets encrypted contents via page-out, HV drops the
> device pfn at that time. So what state would we be in then? We
> have completed H_SVM_INIT_DONE, but still have a normal (but encrypted)
> page in HV?

Good point.

The corresponding GFN will continue to be a secure GFN; it is just that
its backing PFN is not a device-PFN but a memory-PFN, and that backing
memory-PFN holds encrypted content.

I will clarify this secure-GFN state in the patch.
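
Roughly, the resulting states as I understand them (to be spelled out
in the patch):

	GFN state              backing PFN   page contents
	-----------------------------------------------------------
	secure                 device PFN    held by the Ultravisor
	secure (HV paged-out)  memory PFN    encrypted
	shared                 memory PFN    normal, HV-accessible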

> 
> > 
> > Yes, the Ultravisor can explicitly request the HV to move the pages
> > individually.  But that will slow down the transition significantly.
> > It takes over 20 minutes to transition them for an SVM of size 100G.
> > 
> > With this proposed enhancement, the switch completes in a few seconds.
> 
> I think many pages during the initial switch and most pages for hotplugged
> memory are zero pages, for which we don't issue UV page-in calls anyway.
> So the 20 min saving you are observing is purely due to hcall overhead?

That seems to be the case.

> 
> How about extending the H_SVM_PAGE_IN interface, or adding a new hcall,
> to request multiple pages at once?
> 
> Also, how about requesting bigger page sizes (2M)? Ralph Campbell
> had patches that added THP support for migrate_vma_* calls.

Yes, that should give a further boost. I think the API does not stop us
from using that feature; what is missing is the support on the Ultravisor
side. Hopefully we will have contributions to the Ultravisor once it is
open-sourced.

> 
> > 
> > > 
> > > > These GFNs continue to be
> > > > normal GFNs associated with normal PFNs, when in fact these GFNs should
> > > > have been secure GFNs associated with device PFNs.
> > > 
> > > Transition to secure state is driven by SVM/UV and HV just responds to
> > > hcalls by issuing appropriate uvcalls. SVM/UV is in the best position to
> > > determine the required pages that need to be moved into secure side.
> > > HV just responds to it and tracks such pages as device private pages.
> > > 
> > > If SVM/UV doesn't get in all the pages to secure side by the time
> > > of H_SVM_INIT_DONE, the remaining pages are just normal (shared or
> > > otherwise) pages as far as HV is concerned.  Why should HV assume that
> > > SVM/UV didn't ask for a few pages and hence push those pages during
> > > H_SVM_INIT_DONE?
> > 
> > By definition, an SVM is a VM backed by secure pages.
> > Hence all pages (except shared) must turn secure when a VM switches to SVM.
> > 
> > UV is interested in only certain pages of the VM, which it will
> > request explicitly through H_SVM_PAGE_IN.  All other pages need not
> > be paged-in through UV_PAGE_IN.  They just need to be switched to
> > device-pages.
> > 
> > > 
> > > I think UV should drive the movement of pages into the secure side, both
> > > for boot-time SVM memory and hot-plugged memory. HV does memslot
> > > registration uvcall when new memory is plugged in, UV should explicitly
> > > get the required pages in at that time instead of expecting HV to drive
> > > the same.
> > > 
> > > > +static int uv_migrate_mem_slot(struct kvm *kvm,
> > > > +		const struct kvm_memory_slot *memslot)
> > > > +{
> > > > +	unsigned long gfn = memslot->base_gfn;
> > > > +	unsigned long end;
> > > > +	bool downgrade = false;
> > > > +	struct vm_area_struct *vma;
> > > > +	int i, ret = 0;
> > > > +	unsigned long start = gfn_to_hva(kvm, gfn);
> > > > +
> > > > +	if (kvm_is_error_hva(start))
> > > > +		return H_STATE;
> > > > +
> > > > +	end = start + (memslot->npages << PAGE_SHIFT);
> > > > +
> > > > +	down_write(&kvm->mm->mmap_sem);
> > > > +
> > > > +	mutex_lock(&kvm->arch.uvmem_lock);
> > > > +	vma = find_vma_intersection(kvm->mm, start, end);
> > > > +	if (!vma || vma->vm_start > start || vma->vm_end < end) {
> > > > +		ret = H_STATE;
> > > > +		goto out_unlock;
> > > > +	}
> > > > +
> > > > +	ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
> > > > +			  MADV_UNMERGEABLE, &vma->vm_flags);
> > > > +	downgrade_write(&kvm->mm->mmap_sem);
> > > > +	downgrade = true;
> > > > +	if (ret) {
> > > > +		ret = H_STATE;
> > > > +		goto out_unlock;
> > > > +	}
> > > > +
> > > > +	for (i = 0; i < memslot->npages; i++, ++gfn) {
> > > > +		/* skip paged-in pages and shared pages */
> > > > +		if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL) ||
> > > > +			kvmppc_gfn_is_uvmem_shared(gfn, kvm))
> > > > +			continue;
> > > > +
> > > > +		start = gfn_to_hva(kvm, gfn);
> > > > +		end = start + (1UL << PAGE_SHIFT);
> > > > +		ret = kvmppc_svm_migrate_page(vma, start, end,
> > > > +			(gfn << PAGE_SHIFT), kvm, PAGE_SHIFT, false);
> > > > +
> > > > +		if (ret)
> > > > +			goto out_unlock;
> > > > +	}
> > > 
> > > Is there a guarantee that the vma you got for the start address remains
> > > valid for all the addresses till end in a memslot? If not, you should
> > > re-get the vma for the current address in each iteration I suppose.
> > 
> > 
> > mm->mmap_sem is the semaphore that guards the vma, right?  If that
> > semaphore is held, can the vma change?
> 
> I am not sure if the vma you obtained would span the entire address range
> in the memslot.

Will check.

Thanks,
RP

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 3/4] KVM: PPC: Book3S HV: migrate remaining normal-GFNs to secure-GFNs in H_SVM_INIT_DONE
  2020-06-03 23:10           ` [PATCH v1 3/4] KVM: PPC: Book3S HV: migrate remaining normal-GFNs to secure-GFNs in H_SVM_INIT_D Ram Pai
@ 2020-06-04  2:43             ` Bharata B Rao
  -1 siblings, 0 replies; 39+ messages in thread
From: Bharata B Rao @ 2020-06-04  2:43 UTC (permalink / raw)
  To: Ram Pai
  Cc: rcampbell, ldufour, cclaudio, kvm-ppc, aneesh.kumar, sukadev,
	linuxppc-dev, bauerman, david

On Wed, Jun 03, 2020 at 04:10:25PM -0700, Ram Pai wrote:
> On Tue, Jun 02, 2020 at 03:36:39PM +0530, Bharata B Rao wrote:
> > On Mon, Jun 01, 2020 at 12:05:35PM -0700, Ram Pai wrote:
> > > On Mon, Jun 01, 2020 at 05:25:18PM +0530, Bharata B Rao wrote:
> > > > On Sat, May 30, 2020 at 07:27:50PM -0700, Ram Pai wrote:
> > > > > H_SVM_INIT_DONE incorrectly assumes that the Ultravisor has explicitly
> > > > > called H_SVM_PAGE_IN for all secure pages.
> > > > 
> > > > I don't think that is quite true. HV doesn't assume anything about
> > > > secure pages by itself.
> > > 
> > > Yes. Currently, it does not assume anything about secure pages.  But I am
> > > proposing that it should consider all pages (except the shared pages) as
> > > secure pages, when H_SVM_INIT_DONE is called.
> > 
> > Ok, then maybe also add the proposed changes to the H_SVM_INIT_DONE
> > documentation.
> 
> ok.
> 
> > 
> > > 
> > > In other words, HV should treat all pages, except shared pages, as
> > > secure pages once H_SVM_INIT_DONE is called. And this includes pages
> > > added subsequently through memory hotplug.
> > 
> > So after H_SVM_INIT_DONE, if HV touches a secure page for any
> > reason and gets encrypted contents via page-out, HV drops the
> > device pfn at that time. So what state would we be in then? We
> > have completed H_SVM_INIT_DONE, but still have a normal (but encrypted)
> > page in HV?
> 
> Good point.
> 
> The corresponding GFN will continue to be a secure GFN; it is just that
> its backing PFN is not a device-PFN but a memory-PFN, and that backing
> memory-PFN holds encrypted content.
> 
> I will clarify this secure-GFN state in the patch.

I feel all this is complicating the states in HV and is avoidable
if UV just issued page-in calls during the memslot registration uvcall.
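
Roughly, the suggested flow (a hypothetical sequence, for illustration
only):

	HV:  uv_register_mem_slot(lpid, base_gfn << PAGE_SHIFT,
				  npages * PAGE_SIZE, 0, slot_id);
	UV:  issues H_SVM_PAGE_IN (or equivalent paging requests) for
	     every gfn it wants secured, at registration time;
	HV:  then never has to guess which remaining pages must turn
	     secure at H_SVM_INIT_DONE.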

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 2/4] KVM: PPC: Book3S HV: track shared GFNs of secure VMs
  2020-05-31  2:27   ` Ram Pai
@ 2020-06-05  9:48     ` Laurent Dufour
  -1 siblings, 0 replies; 39+ messages in thread
From: Laurent Dufour @ 2020-06-05  9:48 UTC (permalink / raw)
  To: Ram Pai, kvm-ppc, linuxppc-dev
  Cc: cclaudio, bharata, aneesh.kumar, sukadev, bauerman, david

On 31/05/2020 at 04:27, Ram Pai wrote:
> During the life of an SVM, its GFNs can transition from secure to shared
> state and vice-versa. Since the kernel does not track GFNs that are
> shared, it is not possible to disambiguate a shared GFN from a GFN whose
> PFN has not yet been migrated to a device-PFN.
> 
> The ability to identify a shared GFN is needed to skip migrating its PFN
> to device PFN. This functionality is leveraged in a subsequent patch.
> 
> Add the ability to identify the state of a GFN.
> 
> Cc: Paul Mackerras <paulus@ozlabs.org>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Bharata B Rao <bharata@linux.ibm.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
> Cc: Laurent Dufour <ldufour@linux.ibm.com>
> Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
> Cc: David Gibson <david@gibson.dropbear.id.au>
> Cc: Claudio Carvalho <cclaudio@linux.ibm.com>
> Cc: kvm-ppc@vger.kernel.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> ---
>   arch/powerpc/include/asm/kvm_book3s_uvmem.h |   6 +-
>   arch/powerpc/kvm/book3s_64_mmu_radix.c      |   2 +-
>   arch/powerpc/kvm/book3s_hv.c                |   2 +-
>   arch/powerpc/kvm/book3s_hv_uvmem.c          | 115 ++++++++++++++++++++++++++--
>   4 files changed, 113 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s_uvmem.h b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> index 5a9834e..f0c5708 100644
> --- a/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> +++ b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> @@ -21,7 +21,8 @@ unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
>   int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn);
>   unsigned long kvmppc_h_svm_init_abort(struct kvm *kvm);
>   void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
> -			     struct kvm *kvm, bool skip_page_out);
> +			     struct kvm *kvm, bool skip_page_out,
> +			     bool purge_gfn);
>   #else
>   static inline int kvmppc_uvmem_init(void)
>   {
> @@ -75,6 +76,7 @@ static inline int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
>   
>   static inline void
>   kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
> -			struct kvm *kvm, bool skip_page_out) { }
> +			struct kvm *kvm, bool skip_page_out,
> +			bool purge_gfn) { }
>   #endif /* CONFIG_PPC_UV */
>   #endif /* __ASM_KVM_BOOK3S_UVMEM_H__ */
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> index 803940d..3448459 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> @@ -1100,7 +1100,7 @@ void kvmppc_radix_flush_memslot(struct kvm *kvm,
>   	unsigned int shift;
>   
>   	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)
> -		kvmppc_uvmem_drop_pages(memslot, kvm, true);
> +		kvmppc_uvmem_drop_pages(memslot, kvm, true, false);

Why is purge_gfn false here?
That function is called when dropping a hot plugged memslot.
That being said, when called by kvmppc_core_commit_memory_region_hv(), the
memslot is then freed by kvmppc_uvmem_slot_free(), so the shared state will
not remain long, but there is a window...

>   
>   	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE)
>   		return;
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 103d13e..4c62bfe 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -5467,7 +5467,7 @@ static int kvmhv_svm_off(struct kvm *kvm)
>   			continue;
>   
>   		kvm_for_each_memslot(memslot, slots) {
> -			kvmppc_uvmem_drop_pages(memslot, kvm, true);
> +			kvmppc_uvmem_drop_pages(memslot, kvm, true, true);
>   			uv_unregister_mem_slot(kvm->arch.lpid, memslot->id);
>   		}
>   	}
> diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
> index ea4a1f1..2ef1e03 100644
> --- a/arch/powerpc/kvm/book3s_hv_uvmem.c
> +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
> @@ -99,14 +99,56 @@
>   static DEFINE_SPINLOCK(kvmppc_uvmem_bitmap_lock);
>   
>   #define KVMPPC_UVMEM_PFN	(1UL << 63)
> +#define KVMPPC_UVMEM_SHARED	(1UL << 62)
> +#define KVMPPC_UVMEM_FLAG_MASK	(KVMPPC_UVMEM_PFN | KVMPPC_UVMEM_SHARED)
> +#define KVMPPC_UVMEM_PFN_MASK	(~KVMPPC_UVMEM_FLAG_MASK)
>   
>   struct kvmppc_uvmem_slot {
>   	struct list_head list;
>   	unsigned long nr_pfns;
>   	unsigned long base_pfn;
> +	/*
> +	 * pfns array has an entry for each GFN of the memory slot.
> +	 *
> +	 * The GFN can be in one of the following states.
> +	 *
> +	 * (a) Secure - The GFN is secure. Only Ultravisor can access it.
> +	 * (b) Shared - The GFN is shared. Both Hypervisor and Ultravisor
> +	 *		can access it.
> +	 * (c) Normal - The GFN is normal.  Only Hypervisor can access it.
> +	 *
> +	 * Secure GFN is associated with a devicePFN. Its pfn[] has the
> +	 * KVMPPC_UVMEM_PFN flag set, the KVMPPC_UVMEM_SHARED flag unset,
> +	 * and holds the value of the device PFN.
> +	 *
> +	 * Shared GFN is associated with a memoryPFN. Its pfn[] has
> +	 * KVMPPC_UVMEM_SHARED flag set. But its KVMPPC_UVMEM_PFN is not set,
> +	 * and there is no PFN value stored.
> +	 *
> +	 * Normal GFN is not associated with memoryPFN. Its pfn[] has
> +	 * KVMPPC_UVMEM_SHARED and KVMPPC_UVMEM_PFN flag unset, and no PFN
> +	 * value is stored.
> +	 *
> +	 * Any other combination of values in pfn[] leads to undefined
> +	 * behavior.
> +	 *
> +	 * Life cycle of a GFN --
> +	 *
> +	 * ---------------------------------------------------------
> +	 * |        |     Share	 |  Unshare | SVM	|slot      |
> +	 * |	    |		 |	    | abort/	|flush	   |
> +	 * |	    |		 |	    | terminate	|	   |
> +	 * ---------------------------------------------------------
> +	 * |        |            |          |           |          |
> +	 * | Secure |     Shared | Secure   |Normal	|Secure    |
> +	 * |        |            |          |           |          |
> +	 * | Shared |     Shared | Secure   |Normal     |Shared    |
> +	 * |        |            |          |           |          |
> +	 * | Normal |     Shared | Secure   |Normal     |Normal    |
> +	 * ---------------------------------------------------------
> +	 */
>   	unsigned long *pfns;
>   };
> -
>   struct kvmppc_uvmem_page_pvt {
>   	struct kvm *kvm;
>   	unsigned long gpa;
> @@ -175,7 +217,12 @@ static void kvmppc_uvmem_pfn_remove(unsigned long gfn, struct kvm *kvm)
>   
>   	list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) {
>   		if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) {
> -			p->pfns[gfn - p->base_pfn] = 0;
> +			/*
> +			 * Reset everything, but keep the KVMPPC_UVMEM_SHARED
> +			 * flag intact.  A gfn continues to be shared or
> +			 * unshared, with or without an associated device pfn.
> +			 */
> +			p->pfns[gfn - p->base_pfn] &= KVMPPC_UVMEM_SHARED;
>   			return;
>   		}
>   	}
> @@ -193,7 +240,7 @@ static bool kvmppc_gfn_is_uvmem_pfn(unsigned long gfn, struct kvm *kvm,
>   			if (p->pfns[index] & KVMPPC_UVMEM_PFN) {
>   				if (uvmem_pfn)
>   					*uvmem_pfn = p->pfns[index] &
> -						     ~KVMPPC_UVMEM_PFN;
> +						     KVMPPC_UVMEM_PFN_MASK;
>   				return true;
>   			} else
>   				return false;
> @@ -202,6 +249,38 @@ static bool kvmppc_gfn_is_uvmem_pfn(unsigned long gfn, struct kvm *kvm,
>   	return false;
>   }
>   
> +static void kvmppc_gfn_uvmem_shared(unsigned long gfn, struct kvm *kvm,
> +		bool set)
> +{
> +	struct kvmppc_uvmem_slot *p;
> +
> +	list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) {
> +		if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) {
> +			unsigned long index = gfn - p->base_pfn;
> +
> +			if (set)
> +				p->pfns[index] |= KVMPPC_UVMEM_SHARED;
> +			else
> +				p->pfns[index] &= ~KVMPPC_UVMEM_SHARED;
> +			return;
> +		}
> +	}
> +}
> +
> +bool kvmppc_gfn_is_uvmem_shared(unsigned long gfn, struct kvm *kvm)
> +{
> +	struct kvmppc_uvmem_slot *p;
> +
> +	list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) {
> +		if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) {
> +			unsigned long index = gfn - p->base_pfn;
> +
> +			return (p->pfns[index] & KVMPPC_UVMEM_SHARED);
> +		}
> +	}
> +	return false;
> +}
> +
>   unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
>   {
>   	struct kvm_memslots *slots;
> @@ -256,9 +335,13 @@ unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
>    * is HV side fault on these pages. Next we *get* these pages, forcing
>    * fault on them, do fault time migration to replace the device PTEs in
>    * QEMU page table with normal PTEs from newly allocated pages.
> + *
> + * If @purge_gfn is set, also clean up any state tracked for each of
> + * the GFNs associated with this memory slot.
>    */
>   void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
> -			     struct kvm *kvm, bool skip_page_out)
> +			     struct kvm *kvm, bool skip_page_out,
> +			     bool purge_gfn)
>   {
>   	int i;
>   	struct kvmppc_uvmem_page_pvt *pvt;
> @@ -269,11 +352,22 @@ void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
>   		struct page *uvmem_page;
>   
>   		mutex_lock(&kvm->arch.uvmem_lock);
> +
> +		if (purge_gfn) {
> +			/*
> +			 * Clean up the shared status of the GFN here.
> +			 * Any device PFN associated with the GFN shall
> +			 * be cleaned up later, in kvmppc_uvmem_page_free()
> +			 * when the device PFN is actually disassociated
> +			 * from the GFN.
> +			 */
> +			kvmppc_gfn_uvmem_shared(gfn, kvm, false);
> +		}
> +
>   		if (!kvmppc_gfn_is_uvmem_pfn(gfn, kvm, &uvmem_pfn)) {
>   			mutex_unlock(&kvm->arch.uvmem_lock);
>   			continue;
>   		}
> -
>   		uvmem_page = pfn_to_page(uvmem_pfn);
>   		pvt = uvmem_page->zone_device_data;
>   		pvt->skip_page_out = skip_page_out;
> @@ -304,7 +398,7 @@ unsigned long kvmppc_h_svm_init_abort(struct kvm *kvm)
>   	srcu_idx = srcu_read_lock(&kvm->srcu);
>   
>   	kvm_for_each_memslot(memslot, kvm_memslots(kvm))
> -		kvmppc_uvmem_drop_pages(memslot, kvm, false);
> +		kvmppc_uvmem_drop_pages(memslot, kvm, false, true);
>   
>   	srcu_read_unlock(&kvm->srcu, srcu_idx);
>   
> @@ -470,8 +564,11 @@ static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
>   		goto retry;
>   	}
>   
> -	if (!uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0, page_shift))
> +	if (!uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0,
> +				page_shift)) {
> +		kvmppc_gfn_uvmem_shared(gfn, kvm, true);
>   		ret = H_SUCCESS;
> +	}
>   	kvm_release_pfn_clean(pfn);
>   	mutex_unlock(&kvm->arch.uvmem_lock);
>   out:
> @@ -527,8 +624,10 @@ unsigned long kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
>   		goto out_unlock;
>   
>   	if (!kvmppc_svm_page_in(vma, start, end, gpa, kvm, page_shift,
> -				&downgrade))
> +				&downgrade)) {
> +		kvmppc_gfn_uvmem_shared(gfn, kvm, false);
>   		ret = H_SUCCESS;
> +	}
>   out_unlock:
>   	mutex_unlock(&kvm->arch.uvmem_lock);
>   out:
> 


^ permalink raw reply	[flat|nested] 39+ messages in thread
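
As a quick reference, the per-entry pfn[] encoding described in the
comment block quoted above can be decoded with a small helper along
these lines (a sketch built from the flag definitions in the patch;
the helper and the enum names are illustrative, not part of the
series):

	#define KVMPPC_UVMEM_PFN	(1UL << 63)
	#define KVMPPC_UVMEM_SHARED	(1UL << 62)
	#define KVMPPC_UVMEM_FLAG_MASK	(KVMPPC_UVMEM_PFN | KVMPPC_UVMEM_SHARED)
	#define KVMPPC_UVMEM_PFN_MASK	(~KVMPPC_UVMEM_FLAG_MASK)

	enum gfn_state { GFN_SECURE, GFN_SHARED, GFN_NORMAL };

	static enum gfn_state gfn_state(unsigned long entry)
	{
		if (entry & KVMPPC_UVMEM_PFN)
			/* secure: the device PFN lives in the non-flag bits */
			return GFN_SECURE;
		if (entry & KVMPPC_UVMEM_SHARED)
			/* shared: flag only, no PFN value stored */
			return GFN_SHARED;
		/* normal: both flags clear, no PFN value stored */
		return GFN_NORMAL;
	}

For a secure entry, the device PFN itself is recovered with
entry & KVMPPC_UVMEM_PFN_MASK, which is exactly what
kvmppc_gfn_is_uvmem_pfn() does after this patch.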


* Re: [PATCH v1 2/4] KVM: PPC: Book3S HV: track shared GFNs of secure VMs
  2020-06-05  9:48     ` Laurent Dufour
@ 2020-06-05 14:38       ` Ram Pai
  -1 siblings, 0 replies; 39+ messages in thread
From: Ram Pai @ 2020-06-05 14:38 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: cclaudio, kvm-ppc, bharata, aneesh.kumar, sukadev, linuxppc-dev,
	bauerman, david

> >diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> >index 803940d..3448459 100644
> >--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
> >+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> >@@ -1100,7 +1100,7 @@ void kvmppc_radix_flush_memslot(struct kvm *kvm,
> >  	unsigned int shift;
> >  	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)
> >-		kvmppc_uvmem_drop_pages(memslot, kvm, true);
> >+		kvmppc_uvmem_drop_pages(memslot, kvm, true, false);
> 
> Why is purge_gfn false here?
> That function is called when dropping a hot plugged memslot.

This function does not know in what context it is called. Since
its job is just to flush the memslot, it cannot assume anything
about purging the pages in the memslot.

.snip..


RP

^ permalink raw reply	[flat|nested] 39+ messages in thread
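
The distinction Ram is drawing reduces to what each caller passes for
purge_gfn. Roughly, paraphrasing the call sites in this series (not
verbatim excerpts):

	/* memslot flush: drop device PFNs, preserve the shared state */
	kvmppc_uvmem_drop_pages(memslot, kvm, true, false);

	/* SVM teardown (kvmhv_svm_off) or abort: purge everything */
	kvmppc_uvmem_drop_pages(memslot, kvm, true, true);

Only the caller knows whether the slot is merely being flushed or torn
down for good, so the decision cannot live inside
kvmppc_uvmem_drop_pages() itself.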


* Re: [PATCH v1 4/4] KVM: PPC: Book3S HV: migrate hot plugged memory
  2020-05-31  2:27   ` Ram Pai
@ 2020-06-15 17:00     ` Laurent Dufour
  -1 siblings, 0 replies; 39+ messages in thread
From: Laurent Dufour @ 2020-06-15 17:00 UTC (permalink / raw)
  To: Ram Pai, kvm-ppc, linuxppc-dev
  Cc: cclaudio, bharata, aneesh.kumar, sukadev, bauerman, david

On 31/05/2020 04:27, Ram Pai wrote:
> From: Laurent Dufour <ldufour@linux.ibm.com>
> 
> When a memory slot is hot plugged to a SVM, GFNs associated with that
> memory slot automatically default to secure GFN. Hence migrate the
> PFNs associated with these GFNs to device-PFNs.
> 
> uv_migrate_mem_slot() is called to achieve that. It will not call
> UV_PAGE_IN since this request is ignored by the Ultravisor.
> NOTE: Ultravisor does not trust any page content provided by
> the Hypervisor, once the VM turns secure.
> 
> Cc: Paul Mackerras <paulus@ozlabs.org>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Bharata B Rao <bharata@linux.ibm.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
> Cc: Laurent Dufour <ldufour@linux.ibm.com>
> Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
> Cc: David Gibson <david@gibson.dropbear.id.au>
> Cc: Claudio Carvalho <cclaudio@linux.ibm.com>
> Cc: kvm-ppc@vger.kernel.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> 	(fixed merge conflicts. Modified the commit message)
> Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
> ---
>   arch/powerpc/include/asm/kvm_book3s_uvmem.h |  4 ++++
>   arch/powerpc/kvm/book3s_hv.c                | 11 +++++++----
>   arch/powerpc/kvm/book3s_hv_uvmem.c          |  3 +--
>   3 files changed, 12 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s_uvmem.h b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> index f0c5708..2ec2e5afb 100644
> --- a/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> +++ b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
> @@ -23,6 +23,7 @@ unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
>   void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
>   			     struct kvm *kvm, bool skip_page_out,
>   			     bool purge_gfn);
> +int uv_migrate_mem_slot(struct kvm *kvm, const struct kvm_memory_slot *memslot);
>   #else
>   static inline int kvmppc_uvmem_init(void)
>   {
> @@ -78,5 +79,8 @@ static inline int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
>   kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free,
>   			struct kvm *kvm, bool skip_page_out,
>   			bool purge_gfn) { }
> +
> +static inline int uv_migrate_mem_slot(struct kvm *kvm,
> +		const struct kvm_memory_slot *memslot) { return 0; }
>   #endif /* CONFIG_PPC_UV */
>   #endif /* __ASM_KVM_BOOK3S_UVMEM_H__ */
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 4c62bfe..604d062 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -4516,13 +4516,16 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
>   	case KVM_MR_CREATE:
>   		if (kvmppc_uvmem_slot_init(kvm, new))
>   			return;
> -		uv_register_mem_slot(kvm->arch.lpid,
> -				     new->base_gfn << PAGE_SHIFT,
> -				     new->npages * PAGE_SIZE,
> -				     0, new->id);
> +		if (uv_register_mem_slot(kvm->arch.lpid,
> +					 new->base_gfn << PAGE_SHIFT,
> +					 new->npages * PAGE_SIZE,
> +					 0, new->id))
> +			return;
> +		uv_migrate_mem_slot(kvm, new);
>   		break;
>   	case KVM_MR_DELETE:
>   		uv_unregister_mem_slot(kvm->arch.lpid, old->id);
> +		kvmppc_uvmem_drop_pages(old, kvm, true, true);

My mistake: kvmppc_radix_flush_memslot(), called just before this, already
triggers the call to kvmppc_uvmem_drop_pages(), so this call is redundant.

You should remove it in your v2.

>   		kvmppc_uvmem_slot_free(kvm, old);
>   		break;
>   	default:
> diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
> index 36dda1d..1fa5f2a 100644
> --- a/arch/powerpc/kvm/book3s_hv_uvmem.c
> +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
> @@ -377,8 +377,7 @@ static int kvmppc_svm_migrate_page(struct vm_area_struct *vma,
>   	return ret;
>   }
>   
> -static int uv_migrate_mem_slot(struct kvm *kvm,
> -		const struct kvm_memory_slot *memslot)
> +int uv_migrate_mem_slot(struct kvm *kvm, const struct kvm_memory_slot *memslot)
>   {
>   	unsigned long gfn = memslot->base_gfn;
>   	unsigned long end;
> 


^ permalink raw reply	[flat|nested] 39+ messages in thread
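
Folding this feedback in, the commit handler for a secure VM would end
up shaped roughly as below (a sketch of what a v2 might look like,
with the redundant drop_pages call removed and error handling
abbreviated; this is not the actual v2 patch):

	case KVM_MR_CREATE:
		if (kvmppc_uvmem_slot_init(kvm, new))
			return;
		if (uv_register_mem_slot(kvm->arch.lpid,
					 new->base_gfn << PAGE_SHIFT,
					 new->npages * PAGE_SIZE,
					 0, new->id))
			return;
		/* hot-plugged GFNs default to secure: move them to device PFNs */
		uv_migrate_mem_slot(kvm, new);
		break;
	case KVM_MR_DELETE:
		/* pages were already dropped by kvmppc_radix_flush_memslot() */
		uv_unregister_mem_slot(kvm->arch.lpid, old->id);
		kvmppc_uvmem_slot_free(kvm, old);
		break;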

