[PATCH v5 0/7] kvmppc: HMM driver to manage pages of secure guest
From: Bharata B Rao @ 2019-07-09 10:25 UTC
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Bharata B Rao

Hi,

A pseries guest can be run as a secure guest on Ultravisor-enabled
POWER platforms. On such platforms, this driver will be used to manage
the movement of guest pages between the normal memory managed by the
hypervisor (HV) and the secure memory managed by the Ultravisor (UV).

Private ZONE_DEVICE memory equal to the amount of secure memory
available in the platform for running secure guests is created
via an HMM device. The movement of pages between normal and secure
memory is done by the ->alloc_and_copy() callback routine of migrate_vma().

Page-in and page-out requests from the UV come to the HV as hcalls, and
the HV calls back into the UV via uvcalls to satisfy these page requests.
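
For quick reference, the page-in path that patch 1 below implements boils
down to roughly the following. This is only a condensed sketch: the helper
name page_in_sketch is made up for illustration, and locking, error checks
and the shared-page case are omitted.

static unsigned long page_in_sketch(struct kvm *kvm, unsigned long gpa,
				    unsigned long page_shift)
{
	unsigned long gfn = gpa >> page_shift;
	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
	unsigned long addr = gfn_to_hva(kvm, gfn);
	unsigned long src_pfn, dst_pfn;
	struct vm_area_struct *vma;
	struct kvmppc_hmm_migrate_args args = {
		.rmap = &slot->arch.rmap[gfn - slot->base_gfn],
		.lpid = kvm->arch.lpid,
		.gpa = gpa,
		.page_shift = page_shift,
	};

	vma = find_vma_intersection(kvm->mm, addr, addr + (1UL << page_shift));

	/*
	 * The ->alloc_and_copy() callback of kvmppc_hmm_migrate_ops grabs a
	 * free device (secure) PFN via kvmppc_hmm_get_page(), records it in
	 * the rmap slot for this gfn and issues uv_page_in() so that the
	 * Ultravisor copies the page contents into secure memory.
	 */
	return migrate_vma(&kvmppc_hmm_migrate_ops, vma, addr,
			   addr + (1UL << page_shift), &src_pfn, &dst_pfn,
			   &args);
}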

These patches apply and work on top of the base Ultravisor v4 patches
posted by Claudio Carvalho at:
https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg152842.html

Changes in v5
=============
- Hold the kvm->srcu lock until we are done migrating the page.
- Ensure we take the heavier mmap_sem lock first, before taking the
  kvm->srcu lock.
- Code reorganization, comment updates and commit message updates.
- Ensure we don't look up HV-side partition-scoped page tables from the
  memslot flush code; this is required for memory unplug to make
  progress.
- Fix reboot of secure SMP guests by unpinning the VPA pages during
  reboot (Ram Pai).
- Add documentation for the new KVM_PPC_SVM_OFF ioctl.
- Use a different bit slot to differentiate HMM PFNs from other uses
  of rmap entries.
- Remove kvmppc_hmm_release_pfns() since HMM PFNs are released by
  unmap_vmas() and its callers during VM shutdown.
- Carry the patch that adds CONFIG_PPC_UV with this patchset.

v4: https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg151156.html

Anshuman Khandual (1):
  KVM: PPC: Ultravisor: Add PPC_UV config option

Bharata B Rao (6):
  kvmppc: HMM backend driver to manage pages of secure guest
  kvmppc: Shared pages support for secure guests
  kvmppc: H_SVM_INIT_START and H_SVM_INIT_DONE hcalls
  kvmppc: Handle memory plug/unplug to secure VM
  kvmppc: Radix changes for secure guest
  kvmppc: Support reset of secure guest

 Documentation/virtual/kvm/api.txt         |  19 +
 arch/powerpc/Kconfig                      |  20 +
 arch/powerpc/include/asm/hvcall.h         |   9 +
 arch/powerpc/include/asm/kvm_book3s_hmm.h |  48 ++
 arch/powerpc/include/asm/kvm_host.h       |  28 +
 arch/powerpc/include/asm/kvm_ppc.h        |   2 +
 arch/powerpc/include/asm/ultravisor-api.h |   6 +
 arch/powerpc/include/asm/ultravisor.h     |  47 ++
 arch/powerpc/kvm/Makefile                 |   3 +
 arch/powerpc/kvm/book3s_64_mmu_radix.c    |  22 +
 arch/powerpc/kvm/book3s_hv.c              | 115 ++++
 arch/powerpc/kvm/book3s_hv_hmm.c          | 656 ++++++++++++++++++++++
 arch/powerpc/kvm/powerpc.c                |  12 +
 include/uapi/linux/kvm.h                  |   1 +
 tools/include/uapi/linux/kvm.h            |   1 +
 15 files changed, 989 insertions(+)
 create mode 100644 arch/powerpc/include/asm/kvm_book3s_hmm.h
 create mode 100644 arch/powerpc/kvm/book3s_hv_hmm.c

-- 
2.21.0


[PATCH v5 1/7] kvmppc: HMM backend driver to manage pages of secure guest
From: Bharata B Rao @ 2019-07-09 10:25 UTC
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Bharata B Rao

HMM backend driver for KVM PPC to manage page transitions of a
secure guest via the H_SVM_PAGE_IN and H_SVM_PAGE_OUT hcalls.

H_SVM_PAGE_IN: Move the contents of a normal page to a secure page
H_SVM_PAGE_OUT: Move the contents of a secure page to a normal page

Private ZONE_DEVICE memory equal to the amount of secure memory
available in the platform for running secure guests is created
via an HMM device. The movement of pages between normal and secure
memory is done by the ->alloc_and_copy() callback routine of migrate_vma().

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h         |   4 +
 arch/powerpc/include/asm/kvm_book3s_hmm.h |  29 ++
 arch/powerpc/include/asm/kvm_host.h       |  12 +
 arch/powerpc/include/asm/ultravisor-api.h |   2 +
 arch/powerpc/include/asm/ultravisor.h     |  17 +
 arch/powerpc/kvm/Makefile                 |   3 +
 arch/powerpc/kvm/book3s_hv.c              |  19 +
 arch/powerpc/kvm/book3s_hv_hmm.c          | 482 ++++++++++++++++++++++
 8 files changed, 568 insertions(+)
 create mode 100644 arch/powerpc/include/asm/kvm_book3s_hmm.h
 create mode 100644 arch/powerpc/kvm/book3s_hv_hmm.c

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 463c63a9fcf1..2f6b952deb0f 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -337,6 +337,10 @@
 #define H_TLB_INVALIDATE	0xF808
 #define H_COPY_TOFROM_GUEST	0xF80C
 
+/* Platform-specific hcalls used by the Ultravisor */
+#define H_SVM_PAGE_IN		0xEF00
+#define H_SVM_PAGE_OUT		0xEF04
+
 /* Values for 2nd argument to H_SET_MODE */
 #define H_SET_MODE_RESOURCE_SET_CIABR		1
 #define H_SET_MODE_RESOURCE_SET_DAWR		2
diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h b/arch/powerpc/include/asm/kvm_book3s_hmm.h
new file mode 100644
index 000000000000..21f3de5f2acb
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __POWERPC_KVM_PPC_HMM_H__
+#define __POWERPC_KVM_PPC_HMM_H__
+
+#ifdef CONFIG_PPC_UV
+extern unsigned long kvmppc_h_svm_page_in(struct kvm *kvm,
+					  unsigned long gra,
+					  unsigned long flags,
+					  unsigned long page_shift);
+extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
+					  unsigned long gra,
+					  unsigned long flags,
+					  unsigned long page_shift);
+#else
+static inline unsigned long
+kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gra,
+		     unsigned long flags, unsigned long page_shift)
+{
+	return H_UNSUPPORTED;
+}
+
+static inline unsigned long
+kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gra,
+		      unsigned long flags, unsigned long page_shift)
+{
+	return H_UNSUPPORTED;
+}
+#endif /* CONFIG_PPC_UV */
+#endif /* __POWERPC_KVM_PPC_HMM_H__ */
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 184becb62ea4..ac1a101beb07 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -858,4 +858,16 @@ static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
+#ifdef CONFIG_PPC_UV
+extern int kvmppc_hmm_init(void);
+extern void kvmppc_hmm_free(void);
+#else
+static inline int kvmppc_hmm_init(void)
+{
+	return 0;
+}
+
+static inline void kvmppc_hmm_free(void) {}
+#endif /* CONFIG_PPC_UV */
+
 #endif /* __POWERPC_KVM_HOST_H__ */
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 7c4d0b4ced12..f1c5800ac705 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -20,5 +20,7 @@
 /* opcodes */
 #define UV_WRITE_PATE			0xF104
 #define UV_RETURN			0xF11C
+#define UV_PAGE_IN			0xF128
+#define UV_PAGE_OUT			0xF12C
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index 996c1efd6c6d..16f8e0e8ec3f 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -44,6 +44,23 @@ static inline int uv_register_pate(u64 lpid, u64 dw0, u64 dw1)
 	return ucall(UV_WRITE_PATE, retbuf, lpid, dw0, dw1);
 }
 
+static inline int uv_page_in(u64 lpid, u64 src_ra, u64 dst_gpa, u64 flags,
+			     u64 page_shift)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_PAGE_IN, retbuf, lpid, src_ra, dst_gpa, flags,
+		     page_shift);
+}
+
+static inline int uv_page_out(u64 lpid, u64 dst_ra, u64 src_gpa, u64 flags,
+			      u64 page_shift)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_PAGE_OUT, retbuf, lpid, dst_ra, src_gpa, flags,
+		     page_shift);
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index 4c67cc79de7c..d1a79fb5f806 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -71,6 +71,9 @@ kvm-hv-y += \
 	book3s_64_mmu_radix.o \
 	book3s_hv_nested.o
 
+kvm-hv-$(CONFIG_PPC_UV) += \
+	book3s_hv_hmm.o
+
 kvm-hv-$(CONFIG_PPC_TRANSACTIONAL_MEM) += \
 	book3s_hv_tm.o
 
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index a104743291a9..8ee66aa0da58 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -75,6 +75,8 @@
 #include <asm/xics.h>
 #include <asm/xive.h>
 #include <asm/hw_breakpoint.h>
+#include <asm/kvm_host.h>
+#include <asm/kvm_book3s_hmm.h>
 
 #include "book3s.h"
 
@@ -1083,6 +1085,18 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 					 kvmppc_get_gpr(vcpu, 5),
 					 kvmppc_get_gpr(vcpu, 6));
 		break;
+	case H_SVM_PAGE_IN:
+		ret = kvmppc_h_svm_page_in(vcpu->kvm,
+					   kvmppc_get_gpr(vcpu, 4),
+					   kvmppc_get_gpr(vcpu, 5),
+					   kvmppc_get_gpr(vcpu, 6));
+		break;
+	case H_SVM_PAGE_OUT:
+		ret = kvmppc_h_svm_page_out(vcpu->kvm,
+					    kvmppc_get_gpr(vcpu, 4),
+					    kvmppc_get_gpr(vcpu, 5),
+					    kvmppc_get_gpr(vcpu, 6));
+		break;
 	default:
 		return RESUME_HOST;
 	}
@@ -5501,11 +5515,16 @@ static int kvmppc_book3s_init_hv(void)
 			no_mixing_hpt_and_radix = true;
 	}
 
+	r = kvmppc_hmm_init();
+	if (r < 0)
+		pr_err("KVM-HV: kvmppc_hmm_init failed %d\n", r);
+
 	return r;
 }
 
 static void kvmppc_book3s_exit_hv(void)
 {
+	kvmppc_hmm_free();
 	kvmppc_free_host_rm_ops();
 	if (kvmppc_radix_possible())
 		kvmppc_radix_exit();
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
new file mode 100644
index 000000000000..cd34323888b6
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -0,0 +1,482 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * HMM driver to manage page migration between normal and secure
+ * memory.
+ *
+ * Based on Jérôme Glisse's HMM dummy driver.
+ *
+ * Copyright 2018 Bharata B Rao, IBM Corp. <bharata@linux.ibm.com>
+ */
+
+/*
+ * A pseries guest can be run as a secure guest on Ultravisor-enabled
+ * POWER platforms. On such platforms, this driver will be used to manage
+ * the movement of guest pages between the normal memory managed by
+ * hypervisor (HV) and secure memory managed by Ultravisor (UV).
+ *
+ * Private ZONE_DEVICE memory equal to the amount of secure memory
+ * available in the platform for running secure guests is created
+ * via an HMM device. The movement of pages between normal and secure
+ * memory is done by the ->alloc_and_copy() callback routine of migrate_vma().
+ *
+ * The page-in or page-out requests from UV will come to HV as hcalls and
+ * HV will call back into UV via uvcalls to satisfy these page requests.
+ *
+ * For each page that gets moved into secure memory, an HMM PFN is used
+ * on the HV side and an HMM migration PTE corresponding to that PFN is
+ * populated in the QEMU page tables. HMM PFNs are stored in the rmap
+ * array. Whenever a guest page becomes secure, the HMM PFN allocated
+ * for it is populated in the corresponding slot in the rmap
+ * array. Since this overloads the rmap array, which is otherwise
+ * used primarily by HPT guests, this feature (secure
+ * guest on PEF platforms) is available only for Radix MMU guests.
+ * Also, the same rmap array is used differently by nested HPT guests.
+ * Hence a secure guest can't have nested guests.
+ */
+
+#include <linux/hmm.h>
+#include <linux/kvm_host.h>
+#include <linux/sched/mm.h>
+#include <asm/ultravisor.h>
+
+struct kvmppc_hmm_device {
+	struct hmm_device *device;
+	struct hmm_devmem *devmem;
+	unsigned long *pfn_bitmap;
+};
+
+static struct kvmppc_hmm_device kvmppc_hmm;
+spinlock_t kvmppc_hmm_lock;
+
+struct kvmppc_hmm_page_pvt {
+	unsigned long *rmap;
+	unsigned int lpid;
+	unsigned long gpa;
+};
+
+struct kvmppc_hmm_migrate_args {
+	unsigned long *rmap;
+	unsigned int lpid;
+	unsigned long gpa;
+	unsigned long page_shift;
+};
+
+/*
+ * Bits 60:56 in the rmap entry will be used to identify the
+ * different uses/functions of rmap. This definition with move
+ * to a proper header when all other functions are defined.
+ */
+#define KVMPPC_PFN_HMM		(0x2ULL << 56)
+
+static inline bool kvmppc_is_hmm_pfn(unsigned long pfn)
+{
+	return !!(pfn & KVMPPC_PFN_HMM);
+}
+
+/*
+ * Get a free HMM PFN from the pool
+ *
+ * Called when a normal page is moved to secure memory (UV_PAGE_IN). HMM
+ * PFN will be used to keep track of the secure page on HV side.
+ *
+ * @rmap here is the slot in the rmap array that corresponds to @gpa.
+ * Thus a non-zero rmap entry indicates that the corresonding guest
+ * page has become secure, is not mapped on the HV side.
+ *
+ * NOTE: In this and subsequent functions, we pass around and access
+ * individual elements of kvm_memory_slot->arch.rmap[] without any
+ * protection. Should we use lock_rmap() here?
+ */
+static struct page *kvmppc_hmm_get_page(unsigned long *rmap,
+					unsigned long gpa, unsigned int lpid)
+{
+	struct page *dpage = NULL;
+	unsigned long bit, hmm_pfn;
+	unsigned long nr_pfns = kvmppc_hmm.devmem->pfn_last -
+				kvmppc_hmm.devmem->pfn_first;
+	unsigned long flags;
+	struct kvmppc_hmm_page_pvt *pvt;
+
+	if (kvmppc_is_hmm_pfn(*rmap))
+		return NULL;
+
+	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
+	bit = find_first_zero_bit(kvmppc_hmm.pfn_bitmap, nr_pfns);
+	if (bit >= nr_pfns)
+		goto out;
+
+	bitmap_set(kvmppc_hmm.pfn_bitmap, bit, 1);
+	hmm_pfn = bit + kvmppc_hmm.devmem->pfn_first;
+	dpage = pfn_to_page(hmm_pfn);
+
+	if (!trylock_page(dpage))
+		goto out_clear;
+
+	*rmap = hmm_pfn | KVMPPC_PFN_HMM;
+	pvt = kzalloc(sizeof(*pvt), GFP_ATOMIC);
+	if (!pvt)
+		goto out_unlock;
+	pvt->rmap = rmap;
+	pvt->gpa = gpa;
+	pvt->lpid = lpid;
+	hmm_devmem_page_set_drvdata(dpage, (unsigned long)pvt);
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+
+	get_page(dpage);
+	return dpage;
+
+out_unlock:
+	unlock_page(dpage);
+out_clear:
+	bitmap_clear(kvmppc_hmm.pfn_bitmap,
+		     hmm_pfn - kvmppc_hmm.devmem->pfn_first, 1);
+out:
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+	return NULL;
+}
+
+/*
+ * Release the HMM PFN back to the pool
+ *
+ * Called when a secure page becomes a normal page during UV_PAGE_OUT.
+ */
+static void kvmppc_hmm_put_page(struct page *page)
+{
+	unsigned long pfn = page_to_pfn(page);
+	unsigned long flags;
+	struct kvmppc_hmm_page_pvt *pvt;
+
+	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
+	pvt = (struct kvmppc_hmm_page_pvt *)hmm_devmem_page_get_drvdata(page);
+	hmm_devmem_page_set_drvdata(page, 0);
+
+	bitmap_clear(kvmppc_hmm.pfn_bitmap,
+		     pfn - kvmppc_hmm.devmem->pfn_first, 1);
+	*(pvt->rmap) = 0;
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+	kfree(pvt);
+}
+
+/*
+ * migrate_vma() callback to move page from normal memory to secure memory.
+ *
+ * We don't capture the return value of uv_page_in() here because when
+ * UV asks for a page and then fails to copy it over, we don't care.
+ */
+static void
+kvmppc_hmm_migrate_alloc_and_copy(struct vm_area_struct *vma,
+				  const unsigned long *src_pfn,
+				  unsigned long *dst_pfn,
+				  unsigned long start,
+				  unsigned long end,
+				  void *private)
+{
+	struct kvmppc_hmm_migrate_args *args = private;
+	struct page *spage = migrate_pfn_to_page(*src_pfn);
+	unsigned long pfn = *src_pfn >> MIGRATE_PFN_SHIFT;
+	struct page *dpage;
+
+	*dst_pfn = 0;
+	if (!(*src_pfn & MIGRATE_PFN_MIGRATE))
+		return;
+
+	dpage = kvmppc_hmm_get_page(args->rmap, args->gpa, args->lpid);
+	if (!dpage)
+		return;
+
+	if (spage)
+		uv_page_in(args->lpid, pfn << args->page_shift,
+			   args->gpa, 0, args->page_shift);
+
+	*dst_pfn = migrate_pfn(page_to_pfn(dpage)) |
+		    MIGRATE_PFN_DEVICE | MIGRATE_PFN_LOCKED;
+}
+
+/*
+ * This migrate_vma() callback is typically used to update device
+ * page tables after successful migration. We have nothing to do here.
+ *
+ * Also as we don't care if UV successfully copied over the page in
+ * kvmppc_hmm_migrate_alloc_and_copy(), we don't bother to check
+ * dst_pfn for any errors here.
+ */
+static void
+kvmppc_hmm_migrate_finalize_and_map(struct vm_area_struct *vma,
+				    const unsigned long *src_pfn,
+				    const unsigned long *dst_pfn,
+				    unsigned long start,
+				    unsigned long end,
+				    void *private)
+{
+}
+
+static const struct migrate_vma_ops kvmppc_hmm_migrate_ops = {
+	.alloc_and_copy = kvmppc_hmm_migrate_alloc_and_copy,
+	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
+};
+
+/*
+ * Move page from normal memory to secure memory.
+ */
+unsigned long
+kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
+		     unsigned long flags, unsigned long page_shift)
+{
+	unsigned long addr, end;
+	unsigned long src_pfn, dst_pfn;
+	struct kvmppc_hmm_migrate_args args;
+	struct vm_area_struct *vma;
+	int srcu_idx;
+	unsigned long gfn = gpa >> page_shift;
+	struct kvm_memory_slot *slot;
+	unsigned long *rmap;
+	int ret = H_SUCCESS;
+
+	if (page_shift != PAGE_SHIFT)
+		return H_P3;
+
+	if (flags)
+		return H_P2;
+
+	down_read(&kvm->mm->mmap_sem);
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	slot = gfn_to_memslot(kvm, gfn);
+	rmap = &slot->arch.rmap[gfn - slot->base_gfn];
+	addr = gfn_to_hva(kvm, gpa >> page_shift);
+	if (kvm_is_error_hva(addr)) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+
+	end = addr + (1UL << page_shift);
+	vma = find_vma_intersection(kvm->mm, addr, end);
+	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+
+	args.rmap = rmap;
+	args.lpid = kvm->arch.lpid;
+	args.gpa = gpa;
+	args.page_shift = page_shift;
+	ret = migrate_vma(&kvmppc_hmm_migrate_ops, vma, addr, end,
+			  &src_pfn, &dst_pfn, &args);
+	if (ret < 0)
+		ret = H_PARAMETER;
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	up_read(&kvm->mm->mmap_sem);
+	return ret;
+}
+
+/*
+ * We drop the HMM PFN here, provision a new page and populate
+ * it in the QEMU page tables.
+ */
+static void
+kvmppc_hmm_fault_migrate_alloc_and_copy(struct vm_area_struct *vma,
+					const unsigned long *src_pfn,
+					unsigned long *dst_pfn,
+					unsigned long start,
+					unsigned long end,
+					void *private)
+{
+	struct page *dpage, *spage;
+	struct kvmppc_hmm_page_pvt *pvt;
+	unsigned long pfn;
+	int ret = U_SUCCESS;
+
+	*dst_pfn = MIGRATE_PFN_ERROR;
+	spage = migrate_pfn_to_page(*src_pfn);
+	if (!spage || !(*src_pfn & MIGRATE_PFN_MIGRATE))
+		return;
+	if (!is_zone_device_page(spage))
+		return;
+	dpage = hmm_vma_alloc_locked_page(vma, start);
+	if (!dpage)
+		return;
+	pvt = (struct kvmppc_hmm_page_pvt *)
+	       hmm_devmem_page_get_drvdata(spage);
+
+	pfn = page_to_pfn(dpage);
+	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
+			  pvt->gpa, 0, PAGE_SHIFT);
+	if (ret == U_SUCCESS)
+		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
+}
+
+/*
+ * We have nothing to do here.
+ */
+static void
+kvmppc_hmm_fault_migrate_finalize_and_map(struct vm_area_struct *vma,
+					  const unsigned long *src_pfn,
+					  const unsigned long *dst_pfn,
+					  unsigned long start,
+					  unsigned long end,
+					  void *private)
+{
+}
+
+static const struct migrate_vma_ops kvmppc_hmm_fault_migrate_ops = {
+	.alloc_and_copy = kvmppc_hmm_fault_migrate_alloc_and_copy,
+	.finalize_and_map = kvmppc_hmm_fault_migrate_finalize_and_map,
+};
+
+/*
+ * Fault handler callback, invoked when HV touches any page that has
+ * been moved to secure memory. We ask UV to give the page back by
+ * issuing a UV_PAGE_OUT uvcall.
+ */
+static vm_fault_t kvmppc_hmm_devmem_fault(struct hmm_devmem *devmem,
+					  struct vm_area_struct *vma,
+					  unsigned long addr,
+					  const struct page *page,
+					  unsigned int flags,
+					  pmd_t *pmdp)
+{
+	unsigned long end = addr + PAGE_SIZE;
+	unsigned long src_pfn, dst_pfn = 0;
+
+	if (migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
+			&src_pfn, &dst_pfn, NULL))
+		return VM_FAULT_SIGBUS;
+	if (dst_pfn == MIGRATE_PFN_ERROR)
+		return VM_FAULT_SIGBUS;
+	return 0;
+}
+
+static void kvmppc_hmm_devmem_free(struct hmm_devmem *devmem,
+				   struct page *page)
+{
+	kvmppc_hmm_put_page(page);
+}
+
+static const struct hmm_devmem_ops kvmppc_hmm_devmem_ops = {
+	.free = kvmppc_hmm_devmem_free,
+	.fault = kvmppc_hmm_devmem_fault,
+};
+
+/*
+ * Move page from secure memory to normal memory.
+ */
+unsigned long
+kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
+		      unsigned long flags, unsigned long page_shift)
+{
+	unsigned long addr, end;
+	struct vm_area_struct *vma;
+	unsigned long src_pfn, dst_pfn = 0;
+	int srcu_idx;
+	int ret = H_SUCCESS;
+
+	if (page_shift != PAGE_SHIFT)
+		return H_P3;
+
+	if (flags)
+		return H_P2;
+
+	down_read(&kvm->mm->mmap_sem);
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	addr = gfn_to_hva(kvm, gpa >> page_shift);
+	if (kvm_is_error_hva(addr)) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+
+	end = addr + (1UL << page_shift);
+	vma = find_vma_intersection(kvm->mm, addr, end);
+	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+	ret = migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
+			  &src_pfn, &dst_pfn, NULL);
+	if (ret < 0)
+		ret = H_PARAMETER;
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	up_read(&kvm->mm->mmap_sem);
+	return ret;
+}
+
+static u64 kvmppc_get_secmem_size(void)
+{
+	struct device_node *np;
+	int i, len;
+	const __be32 *prop;
+	u64 size = 0;
+
+	np = of_find_node_by_path("/ibm,ultravisor/ibm,uv-firmware");
+	if (!np)
+		goto out;
+
+	prop = of_get_property(np, "secure-memory-ranges", &len);
+	if (!prop)
+		goto out_put;
+
+	for (i = 0; i < len / (sizeof(*prop) * 4); i++)
+		size += of_read_number(prop + (i * 4) + 2, 2);
+
+out_put:
+	of_node_put(np);
+out:
+	return size;
+}
+
+static int kvmppc_hmm_pages_init(void)
+{
+	unsigned long nr_pfns = kvmppc_hmm.devmem->pfn_last -
+				kvmppc_hmm.devmem->pfn_first;
+
+	kvmppc_hmm.pfn_bitmap = kcalloc(BITS_TO_LONGS(nr_pfns),
+					 sizeof(unsigned long), GFP_KERNEL);
+	if (!kvmppc_hmm.pfn_bitmap)
+		return -ENOMEM;
+
+	spin_lock_init(&kvmppc_hmm_lock);
+
+	return 0;
+}
+
+int kvmppc_hmm_init(void)
+{
+	int ret = 0;
+	unsigned long size;
+
+	size = kvmppc_get_secmem_size();
+	if (!size) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	kvmppc_hmm.device = hmm_device_new(NULL);
+	if (IS_ERR(kvmppc_hmm.device)) {
+		ret = PTR_ERR(kvmppc_hmm.device);
+		goto out;
+	}
+
+	kvmppc_hmm.devmem = hmm_devmem_add(&kvmppc_hmm_devmem_ops,
+					   &kvmppc_hmm.device->device, size);
+	if (IS_ERR(kvmppc_hmm.devmem)) {
+		ret = PTR_ERR(kvmppc_hmm.devmem);
+		goto out_device;
+	}
+	ret = kvmppc_hmm_pages_init();
+	if (ret < 0)
+		goto out_device;
+
+	pr_info("KVMPPC-HMM: Secure Memory size 0x%lx\n", size);
+	return ret;
+
+out_device:
+	hmm_device_put(kvmppc_hmm.device);
+out:
+	return ret;
+}
+
+void kvmppc_hmm_free(void)
+{
+	kfree(kvmppc_hmm.pfn_bitmap);
+	hmm_device_put(kvmppc_hmm.device);
+}
-- 
2.21.0


[PATCH v5 2/7] kvmppc: Shared pages support for secure guests
From: Bharata B Rao @ 2019-07-09 10:25 UTC
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Bharata B Rao

A secure guest will share some of its pages with the hypervisor (e.g.
virtio bounce buffers). Support shared pages in the HMM driver.

Once a secure page is converted to a shared page, the HMM driver will
stop tracking that page.

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h |  3 ++
 arch/powerpc/kvm/book3s_hv_hmm.c  | 66 +++++++++++++++++++++++++++++--
 2 files changed, 66 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 2f6b952deb0f..05b8536f6653 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -337,6 +337,9 @@
 #define H_TLB_INVALIDATE	0xF808
 #define H_COPY_TOFROM_GUEST	0xF80C
 
+/* Flags for H_SVM_PAGE_IN */
+#define H_PAGE_IN_SHARED        0x1
+
 /* Platform-specific hcalls used by the Ultravisor */
 #define H_SVM_PAGE_IN		0xEF00
 #define H_SVM_PAGE_OUT		0xEF04
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index cd34323888b6..36562b382e70 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -52,6 +52,7 @@ struct kvmppc_hmm_page_pvt {
 	unsigned long *rmap;
 	unsigned int lpid;
 	unsigned long gpa;
+	bool skip_page_out;
 };
 
 struct kvmppc_hmm_migrate_args {
@@ -215,6 +216,53 @@ static const struct migrate_vma_ops kvmppc_hmm_migrate_ops = {
 	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
 };
 
+/*
+ * Shares the page with HV, thus making it a normal page.
+ *
+ * - If the page is already secure, then provision a new page and share
+ * - If the page is a normal page, share the existing page
+ *
+ * In the former case, uses the HMM fault handler to release the HMM page.
+ */
+static unsigned long
+kvmppc_share_page(struct kvm *kvm, unsigned long gpa, unsigned long page_shift)
+{
+
+	int ret;
+	struct page *hmm_page;
+	struct kvmppc_hmm_page_pvt *pvt;
+	unsigned long pfn;
+	unsigned long *rmap;
+	struct kvm_memory_slot *slot;
+	unsigned long gfn = gpa >> page_shift;
+	int srcu_idx;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	slot = gfn_to_memslot(kvm, gfn);
+	if (!slot) {
+		srcu_read_unlock(&kvm->srcu, srcu_idx);
+		return H_PARAMETER;
+	}
+	rmap = &slot->arch.rmap[gfn - slot->base_gfn];
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+
+	if (kvmppc_is_hmm_pfn(*rmap)) {
+		hmm_page = pfn_to_page(*rmap & ~KVMPPC_PFN_HMM);
+		pvt = (struct kvmppc_hmm_page_pvt *)
+			hmm_devmem_page_get_drvdata(hmm_page);
+		pvt->skip_page_out = true;
+	}
+
+	pfn = gfn_to_pfn(kvm, gpa >> page_shift);
+	if (is_error_noslot_pfn(pfn))
+		return H_PARAMETER;
+
+	ret = uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0, page_shift);
+	kvm_release_pfn_clean(pfn);
+
+	return (ret == U_SUCCESS) ? H_SUCCESS : H_PARAMETER;
+}
+
 /*
  * Move page from normal memory to secure memory.
  */
@@ -235,9 +283,12 @@ kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
 	if (page_shift != PAGE_SHIFT)
 		return H_P3;
 
-	if (flags)
+	if (flags & ~H_PAGE_IN_SHARED)
 		return H_P2;
 
+	if (flags & H_PAGE_IN_SHARED)
+		return kvmppc_share_page(kvm, gpa, page_shift);
+
 	down_read(&kvm->mm->mmap_sem);
 	srcu_idx = srcu_read_lock(&kvm->srcu);
 	slot = gfn_to_memslot(kvm, gfn);
@@ -299,8 +350,17 @@ kvmppc_hmm_fault_migrate_alloc_and_copy(struct vm_area_struct *vma,
 	       hmm_devmem_page_get_drvdata(spage);
 
 	pfn = page_to_pfn(dpage);
-	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
-			  pvt->gpa, 0, PAGE_SHIFT);
+
+	/*
+	 * This same alloc_and_copy() callback is used in two cases:
+	 * - When HV touches a secure page, for which we do page-out
+	 * - When a secure page is converted to shared page, we touch
+	 *   the page to essentially discard the HMM page. In this case we
+	 *   skip page-out.
+	 */
+	if (!pvt->skip_page_out)
+		ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
+				  pvt->gpa, 0, PAGE_SHIFT);
 	if (ret == U_SUCCESS)
 		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
 }
-- 
2.21.0


[PATCH v5 3/7] kvmppc: H_SVM_INIT_START and H_SVM_INIT_DONE hcalls
From: Bharata B Rao @ 2019-07-09 10:25 UTC
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Bharata B Rao, Paul Mackerras

H_SVM_INIT_START: Initiate securing a VM
H_SVM_INIT_DONE: Conclude securing a VM

As part of H_SVM_INIT_START, register all existing memslots with
the UV. The H_SVM_INIT_DONE call from the UV informs the HV that the
transition of the guest to secure mode is complete.

These two states (transition to secure mode STARTED and transition
to secure mode COMPLETED) are recorded in kvm->arch.secure_guest.
Setting these states will cause the assembly code that enters the
guest to call the UV_RETURN ucall instead of trying to enter the
guest directly.

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Acked-by: Paul Mackerras <paulus@ozlabs.org>
---
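Not part of the patch, for illustration only: a C sketch of how the two
secure_guest state bits gate guest entry. The actual check is done in
the HV guest-entry assembly; the helper names below are hypothetical.

	/* Sketch: kvm->arch.secure_guest decides how the guest is entered */
	if (vcpu->kvm->arch.secure_guest) {
		/*
		 * KVMPPC_SECURE_INIT_START and/or KVMPPC_SECURE_INIT_DONE
		 * is set: re-enter the secure guest through the Ultravisor
		 * via the UV_RETURN ucall.
		 */
		enter_guest_via_uv_return(vcpu);	/* hypothetical */
	} else {
		enter_guest_directly(vcpu);		/* hypothetical */
	}
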
 arch/powerpc/include/asm/hvcall.h         |  2 ++
 arch/powerpc/include/asm/kvm_book3s_hmm.h | 12 ++++++++
 arch/powerpc/include/asm/kvm_host.h       |  4 +++
 arch/powerpc/include/asm/ultravisor-api.h |  1 +
 arch/powerpc/include/asm/ultravisor.h     |  9 ++++++
 arch/powerpc/kvm/book3s_hv.c              |  7 +++++
 arch/powerpc/kvm/book3s_hv_hmm.c          | 34 +++++++++++++++++++++++
 7 files changed, 69 insertions(+)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 05b8536f6653..fa7695928e30 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -343,6 +343,8 @@
 /* Platform-specific hcalls used by the Ultravisor */
 #define H_SVM_PAGE_IN		0xEF00
 #define H_SVM_PAGE_OUT		0xEF04
+#define H_SVM_INIT_START	0xEF08
+#define H_SVM_INIT_DONE		0xEF0C
 
 /* Values for 2nd argument to H_SET_MODE */
 #define H_SET_MODE_RESOURCE_SET_CIABR		1
diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h b/arch/powerpc/include/asm/kvm_book3s_hmm.h
index 21f3de5f2acb..8c7aacabb2e0 100644
--- a/arch/powerpc/include/asm/kvm_book3s_hmm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
@@ -11,6 +11,8 @@ extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
 					  unsigned long gra,
 					  unsigned long flags,
 					  unsigned long page_shift);
+extern unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
+extern unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
 #else
 static inline unsigned long
 kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gra,
@@ -25,5 +27,15 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gra,
 {
 	return H_UNSUPPORTED;
 }
+
+static inline unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
+{
+	return H_UNSUPPORTED;
+}
+
+static inline unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
+{
+	return H_UNSUPPORTED;
+}
 #endif /* CONFIG_PPC_UV */
 #endif /* __POWERPC_KVM_PPC_HMM_H__ */
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index ac1a101beb07..0c49c3401c63 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -272,6 +272,10 @@ struct kvm_hpt_info {
 
 struct kvm_resize_hpt;
 
+/* Flag values for kvm_arch.secure_guest */
+#define KVMPPC_SECURE_INIT_START	0x1 /* H_SVM_INIT_START has been called */
+#define KVMPPC_SECURE_INIT_DONE		0x2 /* H_SVM_INIT_DONE completed */
+
 struct kvm_arch {
 	unsigned int lpid;
 	unsigned int smt_mode;		/* # vcpus per virtual core */
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index f1c5800ac705..07b7d638e7af 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -20,6 +20,7 @@
 /* opcodes */
 #define UV_WRITE_PATE			0xF104
 #define UV_RETURN			0xF11C
+#define UV_REGISTER_MEM_SLOT		0xF120
 #define UV_PAGE_IN			0xF128
 #define UV_PAGE_OUT			0xF12C
 
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index 16f8e0e8ec3f..b46042f1aa8f 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -61,6 +61,15 @@ static inline int uv_page_out(u64 lpid, u64 dst_ra, u64 src_gpa, u64 flags,
 	return ucall(UV_PAGE_OUT, retbuf, lpid, dst_ra, src_gpa, flags,
 		     page_shift);
 }
+
+static inline int uv_register_mem_slot(u64 lpid, u64 start_gpa, u64 size,
+				       u64 flags, u64 slotid)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_REGISTER_MEM_SLOT, retbuf, lpid, start_gpa,
+		     size, flags, slotid);
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 8ee66aa0da58..b8f801d00ad4 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1097,6 +1097,13 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 					    kvmppc_get_gpr(vcpu, 5),
 					    kvmppc_get_gpr(vcpu, 6));
 		break;
+	case H_SVM_INIT_START:
+		ret = kvmppc_h_svm_init_start(vcpu->kvm);
+		break;
+	case H_SVM_INIT_DONE:
+		ret = kvmppc_h_svm_init_done(vcpu->kvm);
+		break;
+
 	default:
 		return RESUME_HOST;
 	}
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index 36562b382e70..55bab9c4e60a 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -62,6 +62,40 @@ struct kvmppc_hmm_migrate_args {
 	unsigned long page_shift;
 };
 
+unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int ret = H_SUCCESS;
+	int srcu_idx;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots) {
+		ret = uv_register_mem_slot(kvm->arch.lpid,
+					   memslot->base_gfn << PAGE_SHIFT,
+					   memslot->npages * PAGE_SIZE,
+					   0, memslot->id);
+		if (ret < 0) {
+			ret = H_PARAMETER;
+			goto out;
+		}
+	}
+	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_START;
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	return ret;
+}
+
+unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
+{
+	if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START))
+		return H_UNSUPPORTED;
+
+	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE;
+	return H_SUCCESS;
+}
+
 /*
  * Bits 60:56 in the rmap entry will be used to identify the
  * different uses/functions of rmap. This definition will move
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 4/7] kvmppc: Handle memory plug/unplug to secure VM
  2019-07-09 10:25 ` Bharata B Rao
  (?)
@ 2019-07-09 10:25   ` Bharata B Rao
  -1 siblings, 0 replies; 58+ messages in thread
From: Bharata B Rao @ 2019-07-09 10:25 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Bharata B Rao, Paul Mackerras

Register the new memslot with UV during plug and unregister
the memslot during unplug.

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Acked-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/include/asm/ultravisor-api.h |  1 +
 arch/powerpc/include/asm/ultravisor.h     |  7 +++++++
 arch/powerpc/kvm/book3s_hv.c              | 19 +++++++++++++++++++
 3 files changed, 27 insertions(+)

diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 07b7d638e7af..d6d6eb2e6e6b 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -21,6 +21,7 @@
 #define UV_WRITE_PATE			0xF104
 #define UV_RETURN			0xF11C
 #define UV_REGISTER_MEM_SLOT		0xF120
+#define UV_UNREGISTER_MEM_SLOT		0xF124
 #define UV_PAGE_IN			0xF128
 #define UV_PAGE_OUT			0xF12C
 
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index b46042f1aa8f..fe45be9ee63b 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -70,6 +70,13 @@ static inline int uv_register_mem_slot(u64 lpid, u64 start_gpa, u64 size,
 	return ucall(UV_REGISTER_MEM_SLOT, retbuf, lpid, start_gpa,
 		     size, flags, slotid);
 }
+
+static inline int uv_unregister_mem_slot(u64 lpid, u64 slotid)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_UNREGISTER_MEM_SLOT, retbuf, lpid, slotid);
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index b8f801d00ad4..7cbb5edaed01 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -77,6 +77,7 @@
 #include <asm/hw_breakpoint.h>
 #include <asm/kvm_host.h>
 #include <asm/kvm_book3s_hmm.h>
+#include <asm/ultravisor.h>
 
 #include "book3s.h"
 
@@ -4504,6 +4505,24 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
 	if (change == KVM_MR_FLAGS_ONLY && kvm_is_radix(kvm) &&
 	    ((new->flags ^ old->flags) & KVM_MEM_LOG_DIRTY_PAGES))
 		kvmppc_radix_flush_memslot(kvm, old);
+	/*
+	 * If UV hasn't yet called H_SVM_INIT_START, don't register memslots.
+	 */
+	if (!kvm->arch.secure_guest)
+		return;
+
+	/*
+	 * TODO: Handle KVM_MR_MOVE
+	 */
+	if (change == KVM_MR_CREATE) {
+		uv_register_mem_slot(kvm->arch.lpid,
+					   new->base_gfn << PAGE_SHIFT,
+					   new->npages * PAGE_SIZE,
+					   0,
+					   new->id);
+	} else if (change == KVM_MR_DELETE) {
+		uv_unregister_mem_slot(kvm->arch.lpid, old->id);
+	}
 }
 
 /*
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC PATCH v5 5/7] kvmppc: Radix changes for secure guest
  2019-07-09 10:25 ` Bharata B Rao
  (?)
@ 2019-07-09 10:25   ` Bharata B Rao
  -1 siblings, 0 replies; 58+ messages in thread
From: Bharata B Rao @ 2019-07-09 10:25 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Bharata B Rao

- After the guest becomes secure, when HV handles a page fault for a
  page belonging to the SVM, send that page to the UV via UV_PAGE_IN.
- Whenever a page is unmapped on the HV side, inform the UV via UV_PAGE_INVAL.
- Ensure that routines which walk the secondary page tables of the
  guest don't do so for a secure VM. For a secure guest, the active
  secondary page tables are in secure memory, and the secondary page
  tables on the HV side are freed when the guest becomes secure.

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/kvm_host.h       | 12 ++++++++++++
 arch/powerpc/include/asm/ultravisor-api.h |  1 +
 arch/powerpc/include/asm/ultravisor.h     |  7 +++++++
 arch/powerpc/kvm/book3s_64_mmu_radix.c    | 22 ++++++++++++++++++++++
 arch/powerpc/kvm/book3s_hv_hmm.c          | 20 ++++++++++++++++++++
 5 files changed, 62 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 0c49c3401c63..dcbf7480cb10 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -865,6 +865,8 @@ static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 #ifdef CONFIG_PPC_UV
 extern int kvmppc_hmm_init(void);
 extern void kvmppc_hmm_free(void);
+extern bool kvmppc_is_guest_secure(struct kvm *kvm);
+extern int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gpa);
 #else
 static inline int kvmppc_hmm_init(void)
 {
@@ -872,6 +874,16 @@ static inline int kvmppc_hmm_init(void)
 }
 
 static inline void kvmppc_hmm_free(void) {}
+
+static inline bool kvmppc_is_guest_secure(struct kvm *kvm)
+{
+	return false;
+}
+
+static inline int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gpa)
+{
+	return -EFAULT;
+}
 #endif /* CONFIG_PPC_UV */
 
 #endif /* __POWERPC_KVM_HOST_H__ */
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index d6d6eb2e6e6b..9f5510b55892 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -24,5 +24,6 @@
 #define UV_UNREGISTER_MEM_SLOT		0xF124
 #define UV_PAGE_IN			0xF128
 #define UV_PAGE_OUT			0xF12C
+#define UV_PAGE_INVAL			0xF138
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index fe45be9ee63b..f4f674794b35 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -77,6 +77,13 @@ static inline int uv_unregister_mem_slot(u64 lpid, u64 slotid)
 
 	return ucall(UV_UNREGISTER_MEM_SLOT, retbuf, lpid, slotid);
 }
+
+static inline int uv_page_inval(u64 lpid, u64 gpa, u64 page_shift)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_PAGE_INVAL, retbuf, lpid, gpa, page_shift);
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index f55ef071883f..c454600c454f 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -21,6 +21,8 @@
 #include <asm/pgtable.h>
 #include <asm/pgalloc.h>
 #include <asm/pte-walk.h>
+#include <asm/ultravisor.h>
+#include <asm/kvm_host.h>
 
 /*
  * Supported radix tree geometry.
@@ -923,6 +925,9 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	if (!(dsisr & DSISR_PRTABLE_FAULT))
 		gpa |= ea & 0xfff;
 
+	if (kvmppc_is_guest_secure(kvm))
+		return kvmppc_send_page_to_uv(kvm, gpa & PAGE_MASK);
+
 	/* Get the corresponding memslot */
 	memslot = gfn_to_memslot(kvm, gfn);
 
@@ -980,6 +985,11 @@ int kvm_unmap_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	unsigned long gpa = gfn << PAGE_SHIFT;
 	unsigned int shift;
 
+	if (kvmppc_is_guest_secure(kvm)) {
+		uv_page_inval(kvm->arch.lpid, gpa, PAGE_SIZE);
+		return 0;
+	}
+
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep))
 		kvmppc_unmap_pte(kvm, ptep, gpa, shift, memslot,
@@ -997,6 +1007,9 @@ int kvm_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	int ref = 0;
 	unsigned long old, *rmapp;
 
+	if (kvmppc_is_guest_secure(kvm))
+		return ref;
+
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && pte_young(*ptep)) {
 		old = kvmppc_radix_update_pte(kvm, ptep, _PAGE_ACCESSED, 0,
@@ -1021,6 +1034,9 @@ int kvm_test_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	unsigned int shift;
 	int ref = 0;
 
+	if (kvmppc_is_guest_secure(kvm))
+		return ref;
+
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && pte_young(*ptep))
 		ref = 1;
@@ -1038,6 +1054,9 @@ static int kvm_radix_test_clear_dirty(struct kvm *kvm,
 	int ret = 0;
 	unsigned long old, *rmapp;
 
+	if (kvmppc_is_guest_secure(kvm))
+		return ret;
+
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && pte_dirty(*ptep)) {
 		ret = 1;
@@ -1090,6 +1109,9 @@ void kvmppc_radix_flush_memslot(struct kvm *kvm,
 	unsigned long gpa;
 	unsigned int shift;
 
+	if (kvmppc_is_guest_secure(kvm))
+		return;
+
 	gpa = memslot->base_gfn << PAGE_SHIFT;
 	spin_lock(&kvm->mmu_lock);
 	for (n = memslot->npages; n; --n) {
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index 55bab9c4e60a..9e6c88de456f 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -62,6 +62,11 @@ struct kvmppc_hmm_migrate_args {
 	unsigned long page_shift;
 };
 
+bool kvmppc_is_guest_secure(struct kvm *kvm)
+{
+	return !!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE);
+}
+
 unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
 {
 	struct kvm_memslots *slots;
@@ -494,6 +499,21 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
 	return ret;
 }
 
+int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gpa)
+{
+	unsigned long pfn;
+	int ret;
+
+	pfn = gfn_to_pfn(kvm, gpa >> PAGE_SHIFT);
+	if (is_error_noslot_pfn(pfn))
+		return -EFAULT;
+
+	ret = uv_page_in(kvm->arch.lpid, pfn << PAGE_SHIFT, gpa, 0, PAGE_SHIFT);
+	kvm_release_pfn_clean(pfn);
+
+	return (ret == U_SUCCESS) ? RESUME_GUEST : -EFAULT;
+}
+
 static u64 kvmppc_get_secmem_size(void)
 {
 	struct device_node *np;
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC PATCH v5 6/7] kvmppc: Support reset of secure guest
  2019-07-09 10:25 ` Bharata B Rao
  (?)
@ 2019-07-09 10:25   ` Bharata B Rao
  -1 siblings, 0 replies; 58+ messages in thread
From: Bharata B Rao @ 2019-07-09 10:25 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Bharata B Rao

Add support for reset of a secure guest via a new ioctl, KVM_PPC_SVM_OFF.
This ioctl will be issued by QEMU during reset and includes the
following steps:

- Ask UV to terminate the guest via UV_SVM_TERMINATE ucall
- Unpin the VPA pages so that they can be migrated back to the secure
  side when the guest becomes secure again. This is required because
  pinned pages can't be migrated.
- Reinitialize the guest's partition-scoped page tables. These are
  freed when the guest becomes secure (H_SVM_INIT_DONE).
- Release all HMM pages of the secure guest.

After these steps, the guest is ready to issue the UV_ESM call once
again to switch to secure mode.

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
	[Implementation of uv_svm_terminate() and its call from
	guest shutdown path]
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
	[Unpinning of VPA pages]
---
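Not part of the patch, for illustration only: a minimal sketch of how a
VMM such as QEMU might invoke the new ioctl at reset time. vm_fd is
assumed to be the KVM VM file descriptor obtained earlier by the VMM.

	#include <errno.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Sketch: switch a (possibly) secure guest back to normal mode */
	static int vmm_reset_secure_guest(int vm_fd)
	{
		if (ioctl(vm_fd, KVM_PPC_SVM_OFF) < 0) {
			/*
			 * EINVAL: ultravisor failed to terminate the guest
			 * ENOMEM: hypervisor failed to allocate new radix
			 *         page tables for the guest
			 */
			return -errno;
		}
		/* Success; also a no-op for a normal (non-secure) guest */
		return 0;
	}
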
 Documentation/virtual/kvm/api.txt         | 19 ++++++
 arch/powerpc/include/asm/kvm_book3s_hmm.h |  7 +++
 arch/powerpc/include/asm/kvm_ppc.h        |  2 +
 arch/powerpc/include/asm/ultravisor-api.h |  1 +
 arch/powerpc/include/asm/ultravisor.h     |  7 +++
 arch/powerpc/kvm/book3s_hv.c              | 70 +++++++++++++++++++++++
 arch/powerpc/kvm/book3s_hv_hmm.c          | 60 +++++++++++++++++++
 arch/powerpc/kvm/powerpc.c                | 12 ++++
 include/uapi/linux/kvm.h                  |  1 +
 tools/include/uapi/linux/kvm.h            |  1 +
 10 files changed, 180 insertions(+)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index ba6c42c576dd..c89c24ad86ed 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -4065,6 +4065,25 @@ KVM_ARM_VCPU_FINALIZE call.
 See KVM_ARM_VCPU_INIT for details of vcpu features that require finalization
 using this ioctl.
 
+4.120 KVM_PPC_SVM_OFF
+
+Capability: basic
+Architectures: powerpc
+Type: vm ioctl
+Parameters: none
+Returns: 0 on successful completion,
+Errors:
+  EINVAL:    if ultravisor failed to terminate the secure guest
+  ENOMEM:    if hypervisor failed to allocate new radix page tables for guest
+
+This ioctl is used to turn off the secure mode of the guest or transition
+the guest from secure mode to normal mode. This is invoked when the guest
+is reset. This has no effect if called for a normal guest.
+
+This ioctl issues an ultravisor call to terminate the secure guest,
+unpins the VPA pages, reinitializes the guest's partition-scoped page
+tables and releases all the HMM pages associated with this guest.
+
 5. The kvm_run structure
 ------------------------
 
diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h b/arch/powerpc/include/asm/kvm_book3s_hmm.h
index 8c7aacabb2e0..cd2b0f927766 100644
--- a/arch/powerpc/include/asm/kvm_book3s_hmm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
@@ -13,6 +13,8 @@ extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
 					  unsigned long page_shift);
 extern unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
 extern unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
+extern void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm,
+		struct kvm_memslots *slots);
 #else
 static inline unsigned long
 kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gra,
@@ -37,5 +39,10 @@ static inline unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
 {
 	return H_UNSUPPORTED;
 }
+
+static inline void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm,
+		struct kvm_memslots *slots)
+{
+}
 #endif /* CONFIG_PPC_UV */
 #endif /* __POWERPC_KVM_PPC_HMM_H__ */
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index bc892380e6cd..d80ece28d65d 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -188,6 +188,7 @@ extern void kvm_spapr_tce_release_iommu_group(struct kvm *kvm,
 extern int kvmppc_switch_mmu_to_hpt(struct kvm *kvm);
 extern int kvmppc_switch_mmu_to_radix(struct kvm *kvm);
 extern void kvmppc_setup_partition_table(struct kvm *kvm);
+extern int kvmppc_reinit_partition_table(struct kvm *kvm);
 
 extern long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
 				struct kvm_create_spapr_tce_64 *args);
@@ -332,6 +333,7 @@ struct kvmppc_ops {
 			       int size);
 	int (*store_to_eaddr)(struct kvm_vcpu *vcpu, ulong *eaddr, void *ptr,
 			      int size);
+	int (*svm_off)(struct kvm *kvm);
 };
 
 extern struct kvmppc_ops *kvmppc_hv_ops;
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 9f5510b55892..c8180427fa01 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -25,5 +25,6 @@
 #define UV_PAGE_IN			0xF128
 #define UV_PAGE_OUT			0xF12C
 #define UV_PAGE_INVAL			0xF138
+#define UV_SVM_TERMINATE		0xF13C
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index f4f674794b35..a5e4516c8ddb 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -84,6 +84,13 @@ static inline int uv_page_inval(u64 lpid, u64 gpa, u64 page_shift)
 
 	return ucall(UV_PAGE_INVAL, retbuf, lpid, gpa, page_shift);
 }
+
+static inline int uv_svm_terminate(u64 lpid)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_SVM_TERMINATE, retbuf, lpid);
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 7cbb5edaed01..41b0fff199d3 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2437,6 +2437,15 @@ static void unpin_vpa(struct kvm *kvm, struct kvmppc_vpa *vpa)
 					vpa->dirty);
 }
 
+static void unpin_vpa_reset(struct kvm *kvm, struct kvmppc_vpa *vpa)
+{
+	unpin_vpa(kvm, vpa);
+	vpa->gpa = 0;
+	vpa->pinned_addr = NULL;
+	vpa->dirty = false;
+	vpa->update_pending = 0;
+}
+
 static void kvmppc_core_vcpu_free_hv(struct kvm_vcpu *vcpu)
 {
 	spin_lock(&vcpu->arch.vpa_update_lock);
@@ -4577,6 +4586,22 @@ void kvmppc_setup_partition_table(struct kvm *kvm)
 	kvmhv_set_ptbl_entry(kvm->arch.lpid, dw0, dw1);
 }
 
+/*
+ * Called from KVM_PPC_SVM_OFF ioctl at guest reset time when secure
+ * guest is converted back to normal guest.
+ */
+int kvmppc_reinit_partition_table(struct kvm *kvm)
+{
+	int ret;
+
+	ret = kvmppc_init_vm_radix(kvm);
+	if (ret)
+		return ret;
+
+	kvmppc_setup_partition_table(kvm);
+	return 0;
+}
+
 /*
  * Set up HPT (hashed page table) and RMA (real-mode area).
  * Must be called with kvm->lock held.
@@ -4956,6 +4981,7 @@ static void kvmppc_core_destroy_vm_hv(struct kvm *kvm)
 		if (nesting_enabled(kvm))
 			kvmhv_release_all_nested(kvm);
 		kvm->arch.process_table = 0;
+		uv_svm_terminate(kvm->arch.lpid);
 		kvmhv_set_ptbl_entry(kvm->arch.lpid, 0, 0);
 	}
 	kvmppc_free_lpid(kvm->arch.lpid);
@@ -5397,6 +5423,49 @@ static int kvmhv_store_to_eaddr(struct kvm_vcpu *vcpu, ulong *eaddr, void *ptr,
 	return rc;
 }
 
+/*
+ *  IOCTL handler to turn off secure mode of guest
+ *
+ * - Issue ucall to terminate the guest on the UV side
+ * - Unpin the VPA pages (Enables these pages to be migrated back
+ *   when VM becomes secure again)
+ * - Recreate partition table as the guest is transitioning back to
+ *   normal mode
+ * - Release all HMM pages
+ */
+static int kvmhv_svm_off(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu;
+	int ret = 0;
+	int i;
+
+	if (kvmppc_is_guest_secure(kvm)) {
+		ret = uv_svm_terminate(kvm->arch.lpid);
+		if (ret != U_SUCCESS) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		kvm_for_each_vcpu(i, vcpu, kvm) {
+			spin_lock(&vcpu->arch.vpa_update_lock);
+			unpin_vpa_reset(kvm, &vcpu->arch.dtl);
+			unpin_vpa_reset(kvm, &vcpu->arch.slb_shadow);
+			unpin_vpa_reset(kvm, &vcpu->arch.vpa);
+			spin_unlock(&vcpu->arch.vpa_update_lock);
+		}
+
+		ret = kvmppc_reinit_partition_table(kvm);
+		if (ret)
+			goto out;
+		kvm->arch.secure_guest = 0;
+		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
+			kvmppc_hmm_free_memslot_pfns(kvm,
+			__kvm_memslots(kvm, i));
+	}
+out:
+	return ret;
+}
+
 static struct kvmppc_ops kvm_ops_hv = {
 	.get_sregs = kvm_arch_vcpu_ioctl_get_sregs_hv,
 	.set_sregs = kvm_arch_vcpu_ioctl_set_sregs_hv,
@@ -5439,6 +5508,7 @@ static struct kvmppc_ops kvm_ops_hv = {
 	.enable_nested = kvmhv_enable_nested,
 	.load_from_eaddr = kvmhv_load_from_eaddr,
 	.store_to_eaddr = kvmhv_store_to_eaddr,
+	.svm_off = kvmhv_svm_off,
 };
 
 static int kvm_init_subcore_bitmap(void)
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index 9e6c88de456f..165926c069ba 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -38,6 +38,8 @@
 #include <linux/kvm_host.h>
 #include <linux/sched/mm.h>
 #include <asm/ultravisor.h>
+#include <asm/kvm_ppc.h>
+#include <asm/kvm_book3s.h>
 
 struct kvmppc_hmm_device {
 	struct hmm_device *device;
@@ -98,6 +100,12 @@ unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
 		return H_UNSUPPORTED;
 
 	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE;
+	if (kvm_is_radix(kvm)) {
+		pr_info("LPID %d went secure, freeing HV side radix pgtables\n",
+			kvm->arch.lpid);
+		kvmppc_free_radix(kvm);
+	}
+
 	return H_SUCCESS;
 }
 
@@ -113,6 +121,58 @@ static inline bool kvmppc_is_hmm_pfn(unsigned long pfn)
 	return !!(pfn & KVMPPC_PFN_HMM);
 }
 
+/*
+ * Drop HMM pages that we maintain for the secure guest
+ *
+ * We mark these pages to be skipped from UV_PAGE_OUT when an HMM fault
+ * occurs on them. We then touch each page to force an HMM fault and a
+ * fault migration that replaces the HMM PTEs in QEMU's page tables with
+ * normal PTEs backed by newly allocated pages.
+ */
+static void kvmppc_hmm_drop_pages(struct kvm_memory_slot *free, struct kvm *kvm)
+{
+	int i;
+	struct kvmppc_hmm_page_pvt *pvt;
+	unsigned long pfn;
+
+	for (i = 0; i < free->npages; i++) {
+		unsigned long *rmap = &free->arch.rmap[i];
+		struct page *hmm_page;
+
+		if (kvmppc_is_hmm_pfn(*rmap)) {
+			hmm_page = pfn_to_page(*rmap & ~KVMPPC_PFN_HMM);
+			pvt = (struct kvmppc_hmm_page_pvt *)
+				hmm_devmem_page_get_drvdata(hmm_page);
+			pvt->skip_page_out = true;
+
+			pfn = gfn_to_pfn(kvm, pvt->gpa >> PAGE_SHIFT);
+			if (is_error_noslot_pfn(pfn))
+				continue;
+			kvm_release_pfn_clean(pfn);
+		}
+	}
+}
+
+/*
+ * Called from KVM_PPC_SVM_OFF ioctl when secure guest is reset
+ *
+ * UV has already cleaned up the guest, we release any HMM pages that
+ * we maintain
+ */
+void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm, struct kvm_memslots *slots)
+{
+	struct kvm_memory_slot *memslot;
+	int srcu_idx;
+
+	if (!slots)
+		return;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	kvm_for_each_memslot(memslot, slots)
+		kvmppc_hmm_drop_pages(memslot, kvm);
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+}
+
 /*
  * Get a free HMM PFN from the pool
  *
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 3393b166817a..0c5fae004adf 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -42,6 +42,8 @@
 #include <asm/hvcall.h>
 #include <asm/plpar_wrappers.h>
 #endif
+#include <asm/ultravisor.h>
+#include <asm/kvm_host.h>
 
 #include "timing.h"
 #include "irq.h"
@@ -2423,6 +2425,16 @@ long kvm_arch_vm_ioctl(struct file *filp,
 			r = -EFAULT;
 		break;
 	}
+	case KVM_PPC_SVM_OFF: {
+		struct kvm *kvm = filp->private_data;
+
+		r = 0;
+		if (!kvm->arch.kvm_ops->svm_off)
+			goto out;
+
+		r = kvm->arch.kvm_ops->svm_off(kvm);
+		break;
+	}
 	default: {
 		struct kvm *kvm = filp->private_data;
 		r = kvm->arch.kvm_ops->arch_vm_ioctl(filp, ioctl, arg);
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 2fe12b40d503..c1dbdd428025 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1327,6 +1327,7 @@ struct kvm_s390_ucas_mapping {
 #define KVM_PPC_GET_RMMU_INFO	  _IOW(KVMIO,  0xb0, struct kvm_ppc_rmmu_info)
 /* Available with KVM_CAP_PPC_GET_CPU_CHAR */
 #define KVM_PPC_GET_CPU_CHAR	  _IOR(KVMIO,  0xb1, struct kvm_ppc_cpu_char)
+#define KVM_PPC_SVM_OFF		  _IO(KVMIO,  0xb2)
 
 /* ioctl for vm fd */
 #define KVM_CREATE_DEVICE	  _IOWR(KVMIO,  0xe0, struct kvm_create_device)
diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index 6d4ea4b6c922..03c89f4a01b2 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -1319,6 +1319,7 @@ struct kvm_s390_ucas_mapping {
 #define KVM_PPC_GET_RMMU_INFO	  _IOW(KVMIO,  0xb0, struct kvm_ppc_rmmu_info)
 /* Available with KVM_CAP_PPC_GET_CPU_CHAR */
 #define KVM_PPC_GET_CPU_CHAR	  _IOR(KVMIO,  0xb1, struct kvm_ppc_cpu_char)
+#define KVM_PPC_SVM_OFF		  _IO(KVMIO,  0xb2)
 
 /* ioctl for vm fd */
 #define KVM_CREATE_DEVICE	  _IOWR(KVMIO,  0xe0, struct kvm_create_device)
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 7/7] KVM: PPC: Ultravisor: Add PPC_UV config option
  2019-07-09 10:25 ` Bharata B Rao
  (?)
@ 2019-07-09 10:25   ` Bharata B Rao
  -1 siblings, 0 replies; 58+ messages in thread
From: Bharata B Rao @ 2019-07-09 10:25 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Anshuman Khandual, Bharata B Rao

From: Anshuman Khandual <khandual@linux.vnet.ibm.com>

CONFIG_PPC_UV adds support for ultravisor.

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
[ Update config help and commit message ]
Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
---
 arch/powerpc/Kconfig | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index f0e5b38d52e8..20c6c213d2be 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -440,6 +440,26 @@ config PPC_TRANSACTIONAL_MEM
        ---help---
          Support user-mode Transactional Memory on POWERPC.
 
+config PPC_UV
+	bool "Ultravisor support"
+	depends on KVM_BOOK3S_HV_POSSIBLE
+	select HMM_MIRROR
+	select HMM
+	select ZONE_DEVICE
+	select MIGRATE_VMA_HELPER
+	select DEV_PAGEMAP_OPS
+	select DEVICE_PRIVATE
+	select MEMORY_HOTPLUG
+	select MEMORY_HOTREMOVE
+	default n
+	help
+	  This option paravirtualizes the kernel to run on POWER platforms that
+	  support the Protected Execution Facility (PEF). On such platforms,
+	  the ultravisor firmware runs at a privilege level above the
+	  hypervisor.
+
+	  If unsure, say "N".
+
 config LD_HEAD_STUB_CATCH
 	bool "Reserve 256 bytes to cope with linker stubs in HEAD text" if EXPERT
 	depends on PPC64
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 1/7] kvmppc: HMM backend driver to manage pages of secure guest
@ 2019-07-09 10:25   ` Bharata B Rao
  0 siblings, 0 replies; 58+ messages in thread
From: Bharata B Rao @ 2019-07-09 10:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Bharata B Rao

HMM driver for KVM PPC to manage page transitions of a
secure guest via the H_SVM_PAGE_IN and H_SVM_PAGE_OUT hcalls.

H_SVM_PAGE_IN: Move the content of a normal page to a secure page
H_SVM_PAGE_OUT: Move the content of a secure page to a normal page

Private ZONE_DEVICE memory equal to the amount of secure memory
available in the platform for running secure guests is created
via a HMM device. The movement of pages between normal and secure
memory is done by ->alloc_and_copy() callback routine of migrate_vma().
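
For orientation, a condensed sketch of the wiring described above, taken
from the code added below (locking and error handling trimmed, so treat
it as illustrative only):

/*
 * Condensed from kvmppc_h_svm_page_in() below: resolve the guest
 * physical address to a QEMU virtual address, then let migrate_vma()
 * call back into our ->alloc_and_copy() hook, which grabs an HMM PFN
 * and issues UV_PAGE_IN.
 */
static const struct migrate_vma_ops kvmppc_hmm_migrate_ops = {
	.alloc_and_copy   = kvmppc_hmm_migrate_alloc_and_copy,
	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
};

/* ...inside kvmppc_h_svm_page_in(), with mmap_sem and kvm->srcu held: */
addr = gfn_to_hva(kvm, gpa >> page_shift);
end = addr + (1UL << page_shift);
vma = find_vma_intersection(kvm->mm, addr, end);
ret = migrate_vma(&kvmppc_hmm_migrate_ops, vma, addr, end,
		  &src_pfn, &dst_pfn, &args);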

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h         |   4 +
 arch/powerpc/include/asm/kvm_book3s_hmm.h |  29 ++
 arch/powerpc/include/asm/kvm_host.h       |  12 +
 arch/powerpc/include/asm/ultravisor-api.h |   2 +
 arch/powerpc/include/asm/ultravisor.h     |  17 +
 arch/powerpc/kvm/Makefile                 |   3 +
 arch/powerpc/kvm/book3s_hv.c              |  19 +
 arch/powerpc/kvm/book3s_hv_hmm.c          | 482 ++++++++++++++++++++++
 8 files changed, 568 insertions(+)
 create mode 100644 arch/powerpc/include/asm/kvm_book3s_hmm.h
 create mode 100644 arch/powerpc/kvm/book3s_hv_hmm.c

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 463c63a9fcf1..2f6b952deb0f 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -337,6 +337,10 @@
 #define H_TLB_INVALIDATE	0xF808
 #define H_COPY_TOFROM_GUEST	0xF80C
 
+/* Platform-specific hcalls used by the Ultravisor */
+#define H_SVM_PAGE_IN		0xEF00
+#define H_SVM_PAGE_OUT		0xEF04
+
 /* Values for 2nd argument to H_SET_MODE */
 #define H_SET_MODE_RESOURCE_SET_CIABR		1
 #define H_SET_MODE_RESOURCE_SET_DAWR		2
diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h b/arch/powerpc/include/asm/kvm_book3s_hmm.h
new file mode 100644
index 000000000000..21f3de5f2acb
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __POWERPC_KVM_PPC_HMM_H__
+#define __POWERPC_KVM_PPC_HMM_H__
+
+#ifdef CONFIG_PPC_UV
+extern unsigned long kvmppc_h_svm_page_in(struct kvm *kvm,
+					  unsigned long gra,
+					  unsigned long flags,
+					  unsigned long page_shift);
+extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
+					  unsigned long gra,
+					  unsigned long flags,
+					  unsigned long page_shift);
+#else
+static inline unsigned long
+kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gra,
+		     unsigned long flags, unsigned long page_shift)
+{
+	return H_UNSUPPORTED;
+}
+
+static inline unsigned long
+kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gra,
+		      unsigned long flags, unsigned long page_shift)
+{
+	return H_UNSUPPORTED;
+}
+#endif /* CONFIG_PPC_UV */
+#endif /* __POWERPC_KVM_PPC_HMM_H__ */
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 184becb62ea4..ac1a101beb07 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -858,4 +858,16 @@ static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
+#ifdef CONFIG_PPC_UV
+extern int kvmppc_hmm_init(void);
+extern void kvmppc_hmm_free(void);
+#else
+static inline int kvmppc_hmm_init(void)
+{
+	return 0;
+}
+
+static inline void kvmppc_hmm_free(void) {}
+#endif /* CONFIG_PPC_UV */
+
 #endif /* __POWERPC_KVM_HOST_H__ */
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 7c4d0b4ced12..f1c5800ac705 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -20,5 +20,7 @@
 /* opcodes */
 #define UV_WRITE_PATE			0xF104
 #define UV_RETURN			0xF11C
+#define UV_PAGE_IN			0xF128
+#define UV_PAGE_OUT			0xF12C
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index 996c1efd6c6d..16f8e0e8ec3f 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -44,6 +44,23 @@ static inline int uv_register_pate(u64 lpid, u64 dw0, u64 dw1)
 	return ucall(UV_WRITE_PATE, retbuf, lpid, dw0, dw1);
 }
 
+static inline int uv_page_in(u64 lpid, u64 src_ra, u64 dst_gpa, u64 flags,
+			     u64 page_shift)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_PAGE_IN, retbuf, lpid, src_ra, dst_gpa, flags,
+		     page_shift);
+}
+
+static inline int uv_page_out(u64 lpid, u64 dst_ra, u64 src_gpa, u64 flags,
+			      u64 page_shift)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_PAGE_OUT, retbuf, lpid, dst_ra, src_gpa, flags,
+		     page_shift);
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index 4c67cc79de7c..d1a79fb5f806 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -71,6 +71,9 @@ kvm-hv-y += \
 	book3s_64_mmu_radix.o \
 	book3s_hv_nested.o
 
+kvm-hv-$(CONFIG_PPC_UV) += \
+	book3s_hv_hmm.o
+
 kvm-hv-$(CONFIG_PPC_TRANSACTIONAL_MEM) += \
 	book3s_hv_tm.o
 
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index a104743291a9..8ee66aa0da58 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -75,6 +75,8 @@
 #include <asm/xics.h>
 #include <asm/xive.h>
 #include <asm/hw_breakpoint.h>
+#include <asm/kvm_host.h>
+#include <asm/kvm_book3s_hmm.h>
 
 #include "book3s.h"
 
@@ -1083,6 +1085,18 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 					 kvmppc_get_gpr(vcpu, 5),
 					 kvmppc_get_gpr(vcpu, 6));
 		break;
+	case H_SVM_PAGE_IN:
+		ret = kvmppc_h_svm_page_in(vcpu->kvm,
+					   kvmppc_get_gpr(vcpu, 4),
+					   kvmppc_get_gpr(vcpu, 5),
+					   kvmppc_get_gpr(vcpu, 6));
+		break;
+	case H_SVM_PAGE_OUT:
+		ret = kvmppc_h_svm_page_out(vcpu->kvm,
+					    kvmppc_get_gpr(vcpu, 4),
+					    kvmppc_get_gpr(vcpu, 5),
+					    kvmppc_get_gpr(vcpu, 6));
+		break;
 	default:
 		return RESUME_HOST;
 	}
@@ -5501,11 +5515,16 @@ static int kvmppc_book3s_init_hv(void)
 			no_mixing_hpt_and_radix = true;
 	}
 
+	r = kvmppc_hmm_init();
+	if (r < 0)
+		pr_err("KVM-HV: kvmppc_hmm_init failed %d\n", r);
+
 	return r;
 }
 
 static void kvmppc_book3s_exit_hv(void)
 {
+	kvmppc_hmm_free();
 	kvmppc_free_host_rm_ops();
 	if (kvmppc_radix_possible())
 		kvmppc_radix_exit();
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
new file mode 100644
index 000000000000..cd34323888b6
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -0,0 +1,482 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * HMM driver to manage page migration between normal and secure
+ * memory.
+ *
+ * Based on Jérôme Glisse's HMM dummy driver.
+ *
+ * Copyright 2018 Bharata B Rao, IBM Corp. <bharata@linux.ibm.com>
+ */
+
+/*
+ * A pseries guest can be run as a secure guest on Ultravisor-enabled
+ * POWER platforms. On such platforms, this driver will be used to manage
+ * the movement of guest pages between the normal memory managed by
+ * hypervisor (HV) and secure memory managed by Ultravisor (UV).
+ *
+ * Private ZONE_DEVICE memory equal to the amount of secure memory
+ * available in the platform for running secure guests is created
+ * via a HMM device. The movement of pages between normal and secure
+ * memory is done by ->alloc_and_copy() callback routine of migrate_vma().
+ *
+ * The page-in or page-out requests from UV will come to HV as hcalls and
+ * HV will call back into UV via uvcalls to satisfy these page requests.
+ *
+ * For each page that gets moved into secure memory, an HMM PFN is used
+ * on the HV side and the HMM migration PTE corresponding to that PFN is
+ * populated in the QEMU page tables. HMM PFNs are stored in the rmap
+ * array: whenever a guest page becomes secure, the HMM PFN allocated for
+ * it is recorded in the corresponding slot of the rmap array. Overloading
+ * the rmap array, which is otherwise used primarily by HPT guests, means
+ * that this feature (secure guests on PEF platforms) is available only
+ * for Radix MMU guests.
+ * Also the same rmap array is used differently by nested HPT guests.
+ * Hence a secure guest can't have nested guests.
+ */
+
+#include <linux/hmm.h>
+#include <linux/kvm_host.h>
+#include <linux/sched/mm.h>
+#include <asm/ultravisor.h>
+
+struct kvmppc_hmm_device {
+	struct hmm_device *device;
+	struct hmm_devmem *devmem;
+	unsigned long *pfn_bitmap;
+};
+
+static struct kvmppc_hmm_device kvmppc_hmm;
+spinlock_t kvmppc_hmm_lock;
+
+struct kvmppc_hmm_page_pvt {
+	unsigned long *rmap;
+	unsigned int lpid;
+	unsigned long gpa;
+};
+
+struct kvmppc_hmm_migrate_args {
+	unsigned long *rmap;
+	unsigned int lpid;
+	unsigned long gpa;
+	unsigned long page_shift;
+};
+
+/*
+ * Bits 60:56 in the rmap entry will be used to identify the
+ * different uses/functions of rmap. This definition with move
+ * to a proper header when all other functions are defined.
+ */
+#define KVMPPC_PFN_HMM		(0x2ULL << 56)
+
+static inline bool kvmppc_is_hmm_pfn(unsigned long pfn)
+{
+	return !!(pfn & KVMPPC_PFN_HMM);
+}
+
+/*
+ * Get a free HMM PFN from the pool
+ *
+ * Called when a normal page is moved to secure memory (UV_PAGE_IN). HMM
+ * PFN will be used to keep track of the secure page on HV side.
+ *
+ * @rmap here is the slot in the rmap array that corresponds to @gpa.
+ * Thus a non-zero rmap entry indicates that the corresponding guest
+ * page has become secure and is not mapped on the HV side.
+ *
+ * NOTE: In this and subsequent functions, we pass around and access
+ * individual elements of kvm_memory_slot->arch.rmap[] without any
+ * protection. Should we use lock_rmap() here?
+ */
+static struct page *kvmppc_hmm_get_page(unsigned long *rmap,
+					unsigned long gpa, unsigned int lpid)
+{
+	struct page *dpage = NULL;
+	unsigned long bit, hmm_pfn;
+	unsigned long nr_pfns = kvmppc_hmm.devmem->pfn_last -
+				kvmppc_hmm.devmem->pfn_first;
+	unsigned long flags;
+	struct kvmppc_hmm_page_pvt *pvt;
+
+	if (kvmppc_is_hmm_pfn(*rmap))
+		return NULL;
+
+	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
+	bit = find_first_zero_bit(kvmppc_hmm.pfn_bitmap, nr_pfns);
+	if (bit >= nr_pfns)
+		goto out;
+
+	bitmap_set(kvmppc_hmm.pfn_bitmap, bit, 1);
+	hmm_pfn = bit + kvmppc_hmm.devmem->pfn_first;
+	dpage = pfn_to_page(hmm_pfn);
+
+	if (!trylock_page(dpage))
+		goto out_clear;
+
+	*rmap = hmm_pfn | KVMPPC_PFN_HMM;
+	pvt = kzalloc(sizeof(*pvt), GFP_ATOMIC);
+	if (!pvt)
+		goto out_unlock;
+	pvt->rmap = rmap;
+	pvt->gpa = gpa;
+	pvt->lpid = lpid;
+	hmm_devmem_page_set_drvdata(dpage, (unsigned long)pvt);
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+
+	get_page(dpage);
+	return dpage;
+
+out_unlock:
+	unlock_page(dpage);
+out_clear:
+	bitmap_clear(kvmppc_hmm.pfn_bitmap,
+		     hmm_pfn - kvmppc_hmm.devmem->pfn_first, 1);
+out:
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+	return NULL;
+}
+
+/*
+ * Release the HMM PFN back to the pool
+ *
+ * Called when secure page becomes a normal page during UV_PAGE_OUT.
+ */
+static void kvmppc_hmm_put_page(struct page *page)
+{
+	unsigned long pfn = page_to_pfn(page);
+	unsigned long flags;
+	struct kvmppc_hmm_page_pvt *pvt;
+
+	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
+	pvt = (struct kvmppc_hmm_page_pvt *)hmm_devmem_page_get_drvdata(page);
+	hmm_devmem_page_set_drvdata(page, 0);
+
+	bitmap_clear(kvmppc_hmm.pfn_bitmap,
+		     pfn - kvmppc_hmm.devmem->pfn_first, 1);
+	*(pvt->rmap) = 0;
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+	kfree(pvt);
+}
+
+/*
+ * migrate_vma() callback to move page from normal memory to secure memory.
+ *
+ * We don't capture the return value of uv_page_in() here because when
+ * UV asks for a page and then fails to copy it over, we don't care.
+ */
+static void
+kvmppc_hmm_migrate_alloc_and_copy(struct vm_area_struct *vma,
+				  const unsigned long *src_pfn,
+				  unsigned long *dst_pfn,
+				  unsigned long start,
+				  unsigned long end,
+				  void *private)
+{
+	struct kvmppc_hmm_migrate_args *args = private;
+	struct page *spage = migrate_pfn_to_page(*src_pfn);
+	unsigned long pfn = *src_pfn >> MIGRATE_PFN_SHIFT;
+	struct page *dpage;
+
+	*dst_pfn = 0;
+	if (!(*src_pfn & MIGRATE_PFN_MIGRATE))
+		return;
+
+	dpage = kvmppc_hmm_get_page(args->rmap, args->gpa, args->lpid);
+	if (!dpage)
+		return;
+
+	if (spage)
+		uv_page_in(args->lpid, pfn << args->page_shift,
+			   args->gpa, 0, args->page_shift);
+
+	*dst_pfn = migrate_pfn(page_to_pfn(dpage)) |
+		    MIGRATE_PFN_DEVICE | MIGRATE_PFN_LOCKED;
+}
+
+/*
+ * This migrate_vma() callback is typically used to update device
+ * page tables after successful migration. We have nothing to do here.
+ *
+ * Also as we don't care if UV successfully copied over the page in
+ * kvmppc_hmm_migrate_alloc_and_copy(), we don't bother to check
+ * dst_pfn for any errors here.
+ */
+static void
+kvmppc_hmm_migrate_finalize_and_map(struct vm_area_struct *vma,
+				    const unsigned long *src_pfn,
+				    const unsigned long *dst_pfn,
+				    unsigned long start,
+				    unsigned long end,
+				    void *private)
+{
+}
+
+static const struct migrate_vma_ops kvmppc_hmm_migrate_ops = {
+	.alloc_and_copy = kvmppc_hmm_migrate_alloc_and_copy,
+	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
+};
+
+/*
+ * Move page from normal memory to secure memory.
+ */
+unsigned long
+kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
+		     unsigned long flags, unsigned long page_shift)
+{
+	unsigned long addr, end;
+	unsigned long src_pfn, dst_pfn;
+	struct kvmppc_hmm_migrate_args args;
+	struct vm_area_struct *vma;
+	int srcu_idx;
+	unsigned long gfn = gpa >> page_shift;
+	struct kvm_memory_slot *slot;
+	unsigned long *rmap;
+	int ret = H_SUCCESS;
+
+	if (page_shift != PAGE_SHIFT)
+		return H_P3;
+
+	if (flags)
+		return H_P2;
+
+	down_read(&kvm->mm->mmap_sem);
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	slot = gfn_to_memslot(kvm, gfn);
+	rmap = &slot->arch.rmap[gfn - slot->base_gfn];
+	addr = gfn_to_hva(kvm, gpa >> page_shift);
+	if (kvm_is_error_hva(addr)) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+
+	end = addr + (1UL << page_shift);
+	vma = find_vma_intersection(kvm->mm, addr, end);
+	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+
+	args.rmap = rmap;
+	args.lpid = kvm->arch.lpid;
+	args.gpa = gpa;
+	args.page_shift = page_shift;
+	ret = migrate_vma(&kvmppc_hmm_migrate_ops, vma, addr, end,
+			  &src_pfn, &dst_pfn, &args);
+	if (ret < 0)
+		ret = H_PARAMETER;
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	up_read(&kvm->mm->mmap_sem);
+	return ret;
+}
+
+/*
+ * We drop the HMM PFN here, provision a new page and populate
+ * the same in QEMU page tables.
+ */
+static void
+kvmppc_hmm_fault_migrate_alloc_and_copy(struct vm_area_struct *vma,
+					const unsigned long *src_pfn,
+					unsigned long *dst_pfn,
+					unsigned long start,
+					unsigned long end,
+					void *private)
+{
+	struct page *dpage, *spage;
+	struct kvmppc_hmm_page_pvt *pvt;
+	unsigned long pfn;
+	int ret = U_SUCCESS;
+
+	*dst_pfn = MIGRATE_PFN_ERROR;
+	spage = migrate_pfn_to_page(*src_pfn);
+	if (!spage || !(*src_pfn & MIGRATE_PFN_MIGRATE))
+		return;
+	if (!is_zone_device_page(spage))
+		return;
+	dpage = hmm_vma_alloc_locked_page(vma, start);
+	if (!dpage)
+		return;
+	pvt = (struct kvmppc_hmm_page_pvt *)
+	       hmm_devmem_page_get_drvdata(spage);
+
+	pfn = page_to_pfn(dpage);
+	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
+			  pvt->gpa, 0, PAGE_SHIFT);
+	if (ret == U_SUCCESS)
+		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
+}
+
+/*
+ * We have nothing to do here.
+ */
+static void
+kvmppc_hmm_fault_migrate_finalize_and_map(struct vm_area_struct *vma,
+					  const unsigned long *src_pfn,
+					  const unsigned long *dst_pfn,
+					  unsigned long start,
+					  unsigned long end,
+					  void *private)
+{
+}
+
+static const struct migrate_vma_ops kvmppc_hmm_fault_migrate_ops = {
+	.alloc_and_copy = kvmppc_hmm_fault_migrate_alloc_and_copy,
+	.finalize_and_map = kvmppc_hmm_fault_migrate_finalize_and_map,
+};
+
+/*
+ * Fault handler callback when HV touches any page that has been
+ * moved to secure memory, we ask UV to give back the page by
+ * issuing a UV_PAGE_OUT uvcall.
+ */
+static vm_fault_t kvmppc_hmm_devmem_fault(struct hmm_devmem *devmem,
+					  struct vm_area_struct *vma,
+					  unsigned long addr,
+					  const struct page *page,
+					  unsigned int flags,
+					  pmd_t *pmdp)
+{
+	unsigned long end = addr + PAGE_SIZE;
+	unsigned long src_pfn, dst_pfn = 0;
+
+	if (migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
+			&src_pfn, &dst_pfn, NULL))
+		return VM_FAULT_SIGBUS;
+	if (dst_pfn == MIGRATE_PFN_ERROR)
+		return VM_FAULT_SIGBUS;
+	return 0;
+}
+
+static void kvmppc_hmm_devmem_free(struct hmm_devmem *devmem,
+				   struct page *page)
+{
+	kvmppc_hmm_put_page(page);
+}
+
+static const struct hmm_devmem_ops kvmppc_hmm_devmem_ops = {
+	.free = kvmppc_hmm_devmem_free,
+	.fault = kvmppc_hmm_devmem_fault,
+};
+
+/*
+ * Move page from secure memory to normal memory.
+ */
+unsigned long
+kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
+		      unsigned long flags, unsigned long page_shift)
+{
+	unsigned long addr, end;
+	struct vm_area_struct *vma;
+	unsigned long src_pfn, dst_pfn = 0;
+	int srcu_idx;
+	int ret = H_SUCCESS;
+
+	if (page_shift != PAGE_SHIFT)
+		return H_P3;
+
+	if (flags)
+		return H_P2;
+
+	down_read(&kvm->mm->mmap_sem);
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	addr = gfn_to_hva(kvm, gpa >> page_shift);
+	if (kvm_is_error_hva(addr)) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+
+	end = addr + (1UL << page_shift);
+	vma = find_vma_intersection(kvm->mm, addr, end);
+	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+	ret = migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
+			  &src_pfn, &dst_pfn, NULL);
+	if (ret < 0)
+		ret = H_PARAMETER;
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	up_read(&kvm->mm->mmap_sem);
+	return ret;
+}
+
+static u64 kvmppc_get_secmem_size(void)
+{
+	struct device_node *np;
+	int i, len;
+	const __be32 *prop;
+	u64 size = 0;
+
+	np = of_find_node_by_path("/ibm,ultravisor/ibm,uv-firmware");
+	if (!np)
+		goto out;
+
+	prop = of_get_property(np, "secure-memory-ranges", &len);
+	if (!prop)
+		goto out_put;
+
+	for (i = 0; i < len / (sizeof(*prop) * 4); i++)
+		size += of_read_number(prop + (i * 4) + 2, 2);
+
+out_put:
+	of_node_put(np);
+out:
+	return size;
+}
+
+static int kvmppc_hmm_pages_init(void)
+{
+	unsigned long nr_pfns = kvmppc_hmm.devmem->pfn_last -
+				kvmppc_hmm.devmem->pfn_first;
+
+	kvmppc_hmm.pfn_bitmap = kcalloc(BITS_TO_LONGS(nr_pfns),
+					 sizeof(unsigned long), GFP_KERNEL);
+	if (!kvmppc_hmm.pfn_bitmap)
+		return -ENOMEM;
+
+	spin_lock_init(&kvmppc_hmm_lock);
+
+	return 0;
+}
+
+int kvmppc_hmm_init(void)
+{
+	int ret = 0;
+	unsigned long size;
+
+	size = kvmppc_get_secmem_size();
+	if (!size) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	kvmppc_hmm.device = hmm_device_new(NULL);
+	if (IS_ERR(kvmppc_hmm.device)) {
+		ret = PTR_ERR(kvmppc_hmm.device);
+		goto out;
+	}
+
+	kvmppc_hmm.devmem = hmm_devmem_add(&kvmppc_hmm_devmem_ops,
+					   &kvmppc_hmm.device->device, size);
+	if (IS_ERR(kvmppc_hmm.devmem)) {
+		ret = PTR_ERR(kvmppc_hmm.devmem);
+		goto out_device;
+	}
+	ret = kvmppc_hmm_pages_init();
+	if (ret < 0)
+		goto out_device;
+
+	pr_info("KVMPPC-HMM: Secure Memory size 0x%lx\n", size);
+	return ret;
+
+out_device:
+	hmm_device_put(kvmppc_hmm.device);
+out:
+	return ret;
+}
+
+void kvmppc_hmm_free(void)
+{
+	kfree(kvmppc_hmm.pfn_bitmap);
+	hmm_device_put(kvmppc_hmm.device);
+}
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 2/7] kvmppc: Shared pages support for secure guests
@ 2019-07-09 10:25   ` Bharata B Rao
  0 siblings, 0 replies; 58+ messages in thread
From: Bharata B Rao @ 2019-07-09 10:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Bharata B Rao

A secure guest will share some of its pages with the hypervisor (e.g.
virtio bounce buffers). Support shared pages in the HMM driver.

Once a secure page is converted to a shared page, the HMM driver will
stop tracking that page.
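
As an illustration only (not part of this patch), a secure guest, or the
UV on its behalf, would request sharing of one page roughly as below;
the helper name and the plpar_hcall_norets() usage are assumptions made
for the sketch:

/* Hypothetical caller sketch: ask HV to treat one guest page as shared. */
static long svm_share_one_page(unsigned long gpa)
{
	return plpar_hcall_norets(H_SVM_PAGE_IN, gpa,
				  H_PAGE_IN_SHARED, PAGE_SHIFT);
}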

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h |  3 ++
 arch/powerpc/kvm/book3s_hv_hmm.c  | 66 +++++++++++++++++++++++++++++--
 2 files changed, 66 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 2f6b952deb0f..05b8536f6653 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -337,6 +337,9 @@
 #define H_TLB_INVALIDATE	0xF808
 #define H_COPY_TOFROM_GUEST	0xF80C
 
+/* Flags for H_SVM_PAGE_IN */
+#define H_PAGE_IN_SHARED        0x1
+
 /* Platform-specific hcalls used by the Ultravisor */
 #define H_SVM_PAGE_IN		0xEF00
 #define H_SVM_PAGE_OUT		0xEF04
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index cd34323888b6..36562b382e70 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -52,6 +52,7 @@ struct kvmppc_hmm_page_pvt {
 	unsigned long *rmap;
 	unsigned int lpid;
 	unsigned long gpa;
+	bool skip_page_out;
 };
 
 struct kvmppc_hmm_migrate_args {
@@ -215,6 +216,53 @@ static const struct migrate_vma_ops kvmppc_hmm_migrate_ops = {
 	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
 };
 
+/*
+ * Shares the page with HV, thus making it a normal page.
+ *
+ * - If the page is already secure, then provision a new page and share
+ * - If the page is a normal page, share the existing page
+ *
+ * In the former case, the HMM fault handler is used to release the HMM page.
+ */
+static unsigned long
+kvmppc_share_page(struct kvm *kvm, unsigned long gpa, unsigned long page_shift)
+{
+
+	int ret;
+	struct page *hmm_page;
+	struct kvmppc_hmm_page_pvt *pvt;
+	unsigned long pfn;
+	unsigned long *rmap;
+	struct kvm_memory_slot *slot;
+	unsigned long gfn = gpa >> page_shift;
+	int srcu_idx;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	slot = gfn_to_memslot(kvm, gfn);
+	if (!slot) {
+		srcu_read_unlock(&kvm->srcu, srcu_idx);
+		return H_PARAMETER;
+	}
+	rmap = &slot->arch.rmap[gfn - slot->base_gfn];
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+
+	if (kvmppc_is_hmm_pfn(*rmap)) {
+		hmm_page = pfn_to_page(*rmap & ~KVMPPC_PFN_HMM);
+		pvt = (struct kvmppc_hmm_page_pvt *)
+			hmm_devmem_page_get_drvdata(hmm_page);
+		pvt->skip_page_out = true;
+	}
+
+	pfn = gfn_to_pfn(kvm, gpa >> page_shift);
+	if (is_error_noslot_pfn(pfn))
+		return H_PARAMETER;
+
+	ret = uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0, page_shift);
+	kvm_release_pfn_clean(pfn);
+
+	return (ret == U_SUCCESS) ? H_SUCCESS : H_PARAMETER;
+}
+
 /*
  * Move page from normal memory to secure memory.
  */
@@ -235,9 +283,12 @@ kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
 	if (page_shift != PAGE_SHIFT)
 		return H_P3;
 
-	if (flags)
+	if (flags & ~H_PAGE_IN_SHARED)
 		return H_P2;
 
+	if (flags & H_PAGE_IN_SHARED)
+		return kvmppc_share_page(kvm, gpa, page_shift);
+
 	down_read(&kvm->mm->mmap_sem);
 	srcu_idx = srcu_read_lock(&kvm->srcu);
 	slot = gfn_to_memslot(kvm, gfn);
@@ -299,8 +350,17 @@ kvmppc_hmm_fault_migrate_alloc_and_copy(struct vm_area_struct *vma,
 	       hmm_devmem_page_get_drvdata(spage);
 
 	pfn = page_to_pfn(dpage);
-	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
-			  pvt->gpa, 0, PAGE_SHIFT);
+
+	/*
+	 * This same alloc_and_copy() callback is used in two cases:
+	 * - When HV touches a secure page, for which we do page-out
+	 * - When a secure page is converted to shared page, we touch
+	 *   the page to essentially discard the HMM page. In this case we
+	 *   skip page-out.
+	 */
+	if (!pvt->skip_page_out)
+		ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
+				  pvt->gpa, 0, PAGE_SHIFT);
 	if (ret == U_SUCCESS)
 		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
 }
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 3/7] kvmppc: H_SVM_INIT_START and H_SVM_INIT_DONE hcalls
@ 2019-07-09 10:25   ` Bharata B Rao
  0 siblings, 0 replies; 58+ messages in thread
From: Bharata B Rao @ 2019-07-09 10:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Bharata B Rao, Paul Mackerras

H_SVM_INIT_START: Initiate securing a VM
H_SVM_INIT_DONE: Conclude securing a VM

As part of H_SVM_INIT_START, register all existing memslots with
the UV. The H_SVM_INIT_DONE call from the UV informs the HV that the
transition of the guest to secure mode is complete.

These two states (transition to secure mode STARTED and transition
to secure mode COMPLETED) are recorded in kvm->arch.secure_guest.
Setting these states will cause the assembly code that enters the
guest to call the UV_RETURN ucall instead of trying to enter the
guest directly.
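
A minimal sketch of how HV code can test the recorded state bits; the
helper names below are illustrative and not added by this series (a
later patch adds kvmppc_is_guest_secure() for the DONE bit):

/* Illustrative helpers built on the flags added by this patch. */
static inline bool kvmppc_secure_init_started(struct kvm *kvm)
{
	return !!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START);
}

static inline bool kvmppc_secure_init_done(struct kvm *kvm)
{
	return !!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE);
}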

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Acked-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/include/asm/hvcall.h         |  2 ++
 arch/powerpc/include/asm/kvm_book3s_hmm.h | 12 ++++++++
 arch/powerpc/include/asm/kvm_host.h       |  4 +++
 arch/powerpc/include/asm/ultravisor-api.h |  1 +
 arch/powerpc/include/asm/ultravisor.h     |  9 ++++++
 arch/powerpc/kvm/book3s_hv.c              |  7 +++++
 arch/powerpc/kvm/book3s_hv_hmm.c          | 34 +++++++++++++++++++++++
 7 files changed, 69 insertions(+)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 05b8536f6653..fa7695928e30 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -343,6 +343,8 @@
 /* Platform-specific hcalls used by the Ultravisor */
 #define H_SVM_PAGE_IN		0xEF00
 #define H_SVM_PAGE_OUT		0xEF04
+#define H_SVM_INIT_START	0xEF08
+#define H_SVM_INIT_DONE		0xEF0C
 
 /* Values for 2nd argument to H_SET_MODE */
 #define H_SET_MODE_RESOURCE_SET_CIABR		1
diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h b/arch/powerpc/include/asm/kvm_book3s_hmm.h
index 21f3de5f2acb..8c7aacabb2e0 100644
--- a/arch/powerpc/include/asm/kvm_book3s_hmm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
@@ -11,6 +11,8 @@ extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
 					  unsigned long gra,
 					  unsigned long flags,
 					  unsigned long page_shift);
+extern unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
+extern unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
 #else
 static inline unsigned long
 kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gra,
@@ -25,5 +27,15 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gra,
 {
 	return H_UNSUPPORTED;
 }
+
+static inline unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
+{
+	return H_UNSUPPORTED;
+}
+
+static inline unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
+{
+	return H_UNSUPPORTED;
+}
 #endif /* CONFIG_PPC_UV */
 #endif /* __POWERPC_KVM_PPC_HMM_H__ */
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index ac1a101beb07..0c49c3401c63 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -272,6 +272,10 @@ struct kvm_hpt_info {
 
 struct kvm_resize_hpt;
 
+/* Flag values for kvm_arch.secure_guest */
+#define KVMPPC_SECURE_INIT_START	0x1 /* H_SVM_INIT_START has been called */
+#define KVMPPC_SECURE_INIT_DONE		0x2 /* H_SVM_INIT_DONE completed */
+
 struct kvm_arch {
 	unsigned int lpid;
 	unsigned int smt_mode;		/* # vcpus per virtual core */
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index f1c5800ac705..07b7d638e7af 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -20,6 +20,7 @@
 /* opcodes */
 #define UV_WRITE_PATE			0xF104
 #define UV_RETURN			0xF11C
+#define UV_REGISTER_MEM_SLOT		0xF120
 #define UV_PAGE_IN			0xF128
 #define UV_PAGE_OUT			0xF12C
 
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index 16f8e0e8ec3f..b46042f1aa8f 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -61,6 +61,15 @@ static inline int uv_page_out(u64 lpid, u64 dst_ra, u64 src_gpa, u64 flags,
 	return ucall(UV_PAGE_OUT, retbuf, lpid, dst_ra, src_gpa, flags,
 		     page_shift);
 }
+
+static inline int uv_register_mem_slot(u64 lpid, u64 start_gpa, u64 size,
+				       u64 flags, u64 slotid)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_REGISTER_MEM_SLOT, retbuf, lpid, start_gpa,
+		     size, flags, slotid);
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 8ee66aa0da58..b8f801d00ad4 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1097,6 +1097,13 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 					    kvmppc_get_gpr(vcpu, 5),
 					    kvmppc_get_gpr(vcpu, 6));
 		break;
+	case H_SVM_INIT_START:
+		ret = kvmppc_h_svm_init_start(vcpu->kvm);
+		break;
+	case H_SVM_INIT_DONE:
+		ret = kvmppc_h_svm_init_done(vcpu->kvm);
+		break;
+
 	default:
 		return RESUME_HOST;
 	}
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index 36562b382e70..55bab9c4e60a 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -62,6 +62,40 @@ struct kvmppc_hmm_migrate_args {
 	unsigned long page_shift;
 };
 
+unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int ret = H_SUCCESS;
+	int srcu_idx;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots) {
+		ret = uv_register_mem_slot(kvm->arch.lpid,
+					   memslot->base_gfn << PAGE_SHIFT,
+					   memslot->npages * PAGE_SIZE,
+					   0, memslot->id);
+		if (ret < 0) {
+			ret = H_PARAMETER;
+			goto out;
+		}
+	}
+	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_START;
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	return ret;
+}
+
+unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
+{
+	if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START))
+		return H_UNSUPPORTED;
+
+	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE;
+	return H_SUCCESS;
+}
+
 /*
  * Bits 60:56 in the rmap entry will be used to identify the
  * different uses/functions of rmap. This definition will move
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 4/7] kvmppc: Handle memory plug/unplug to secure VM
@ 2019-07-09 10:25   ` Bharata B Rao
  0 siblings, 0 replies; 58+ messages in thread
From: Bharata B Rao @ 2019-07-09 10:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Bharata B Rao, Paul Mackerras

Register the new memslot with UV during plug and unregister
the memslot during unplug.
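
Condensed sketch of the memslot-commit hook change below (illustrative;
KVM_MR_MOVE handling remains a TODO in the actual code):

/* Mirror memslot create/delete to the UV once the secure transition
 * has started; do nothing for ordinary guests. */
if (!kvm->arch.secure_guest)
	return;

if (change == KVM_MR_CREATE)
	uv_register_mem_slot(kvm->arch.lpid, new->base_gfn << PAGE_SHIFT,
			     new->npages * PAGE_SIZE, 0, new->id);
else if (change == KVM_MR_DELETE)
	uv_unregister_mem_slot(kvm->arch.lpid, old->id);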

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Acked-by: Paul Mackerras <paulus@ozlabs.org>
---
 arch/powerpc/include/asm/ultravisor-api.h |  1 +
 arch/powerpc/include/asm/ultravisor.h     |  7 +++++++
 arch/powerpc/kvm/book3s_hv.c              | 19 +++++++++++++++++++
 3 files changed, 27 insertions(+)

diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 07b7d638e7af..d6d6eb2e6e6b 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -21,6 +21,7 @@
 #define UV_WRITE_PATE			0xF104
 #define UV_RETURN			0xF11C
 #define UV_REGISTER_MEM_SLOT		0xF120
+#define UV_UNREGISTER_MEM_SLOT		0xF124
 #define UV_PAGE_IN			0xF128
 #define UV_PAGE_OUT			0xF12C
 
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index b46042f1aa8f..fe45be9ee63b 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -70,6 +70,13 @@ static inline int uv_register_mem_slot(u64 lpid, u64 start_gpa, u64 size,
 	return ucall(UV_REGISTER_MEM_SLOT, retbuf, lpid, start_gpa,
 		     size, flags, slotid);
 }
+
+static inline int uv_unregister_mem_slot(u64 lpid, u64 slotid)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_UNREGISTER_MEM_SLOT, retbuf, lpid, slotid);
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index b8f801d00ad4..7cbb5edaed01 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -77,6 +77,7 @@
 #include <asm/hw_breakpoint.h>
 #include <asm/kvm_host.h>
 #include <asm/kvm_book3s_hmm.h>
+#include <asm/ultravisor.h>
 
 #include "book3s.h"
 
@@ -4504,6 +4505,24 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
 	if (change == KVM_MR_FLAGS_ONLY && kvm_is_radix(kvm) &&
 	    ((new->flags ^ old->flags) & KVM_MEM_LOG_DIRTY_PAGES))
 		kvmppc_radix_flush_memslot(kvm, old);
+	/*
+	 * If UV hasn't yet called H_SVM_INIT_START, don't register memslots.
+	 */
+	if (!kvm->arch.secure_guest)
+		return;
+
+	/*
+	 * TODO: Handle KVM_MR_MOVE
+	 */
+	if (change == KVM_MR_CREATE) {
+		uv_register_mem_slot(kvm->arch.lpid,
+					   new->base_gfn << PAGE_SHIFT,
+					   new->npages * PAGE_SIZE,
+					   0,
+					   new->id);
+	} else if (change == KVM_MR_DELETE) {
+		uv_unregister_mem_slot(kvm->arch.lpid, old->id);
+	}
 }
 
 /*
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC PATCH v5 5/7] kvmppc: Radix changes for secure guest
@ 2019-07-09 10:25   ` Bharata B Rao
  0 siblings, 0 replies; 58+ messages in thread
From: Bharata B Rao @ 2019-07-09 10:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Bharata B Rao

- After the guest becomes secure, when HV handles a page fault for a page
  belonging to the SVM, send that page to UV via UV_PAGE_IN (see the
  sketch after this list).
- Whenever a page is unmapped on the HV side, inform UV via UV_PAGE_INVAL.
- Ensure that the routines which walk the guest's secondary page tables
  don't do so for a secure VM. For a secure guest, the active secondary
  page tables are in secure memory and the secondary page tables in HV
  are freed when the guest becomes secure.
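
The sketch referred to above, condensed from the hunks below (error
handling and the remaining early exits trimmed):

/* Radix page fault: for a secure guest, HV no longer maps the page
 * itself and simply pushes it to the Ultravisor. */
if (kvmppc_is_guest_secure(kvm))
	return kvmppc_send_page_to_uv(kvm, gpa & PAGE_MASK);

/* Unmap: HV has no PTE of its own to clear, just invalidate in the UV. */
if (kvmppc_is_guest_secure(kvm)) {
	uv_page_inval(kvm->arch.lpid, gpa, PAGE_SIZE);
	return 0;
}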

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/kvm_host.h       | 12 ++++++++++++
 arch/powerpc/include/asm/ultravisor-api.h |  1 +
 arch/powerpc/include/asm/ultravisor.h     |  7 +++++++
 arch/powerpc/kvm/book3s_64_mmu_radix.c    | 22 ++++++++++++++++++++++
 arch/powerpc/kvm/book3s_hv_hmm.c          | 20 ++++++++++++++++++++
 5 files changed, 62 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 0c49c3401c63..dcbf7480cb10 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -865,6 +865,8 @@ static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 #ifdef CONFIG_PPC_UV
 extern int kvmppc_hmm_init(void);
 extern void kvmppc_hmm_free(void);
+extern bool kvmppc_is_guest_secure(struct kvm *kvm);
+extern int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gpa);
 #else
 static inline int kvmppc_hmm_init(void)
 {
@@ -872,6 +874,16 @@ static inline int kvmppc_hmm_init(void)
 }
 
 static inline void kvmppc_hmm_free(void) {}
+
+static inline bool kvmppc_is_guest_secure(struct kvm *kvm)
+{
+	return false;
+}
+
+static inline int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gpa)
+{
+	return -EFAULT;
+}
 #endif /* CONFIG_PPC_UV */
 
 #endif /* __POWERPC_KVM_HOST_H__ */
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index d6d6eb2e6e6b..9f5510b55892 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -24,5 +24,6 @@
 #define UV_UNREGISTER_MEM_SLOT		0xF124
 #define UV_PAGE_IN			0xF128
 #define UV_PAGE_OUT			0xF12C
+#define UV_PAGE_INVAL			0xF138
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index fe45be9ee63b..f4f674794b35 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -77,6 +77,13 @@ static inline int uv_unregister_mem_slot(u64 lpid, u64 slotid)
 
 	return ucall(UV_UNREGISTER_MEM_SLOT, retbuf, lpid, slotid);
 }
+
+static inline int uv_page_inval(u64 lpid, u64 gpa, u64 page_shift)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_PAGE_INVAL, retbuf, lpid, gpa, page_shift);
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index f55ef071883f..c454600c454f 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -21,6 +21,8 @@
 #include <asm/pgtable.h>
 #include <asm/pgalloc.h>
 #include <asm/pte-walk.h>
+#include <asm/ultravisor.h>
+#include <asm/kvm_host.h>
 
 /*
  * Supported radix tree geometry.
@@ -923,6 +925,9 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	if (!(dsisr & DSISR_PRTABLE_FAULT))
 		gpa |= ea & 0xfff;
 
+	if (kvmppc_is_guest_secure(kvm))
+		return kvmppc_send_page_to_uv(kvm, gpa & PAGE_MASK);
+
 	/* Get the corresponding memslot */
 	memslot = gfn_to_memslot(kvm, gfn);
 
@@ -980,6 +985,11 @@ int kvm_unmap_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	unsigned long gpa = gfn << PAGE_SHIFT;
 	unsigned int shift;
 
+	if (kvmppc_is_guest_secure(kvm)) {
+		uv_page_inval(kvm->arch.lpid, gpa, PAGE_SIZE);
+		return 0;
+	}
+
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep))
 		kvmppc_unmap_pte(kvm, ptep, gpa, shift, memslot,
@@ -997,6 +1007,9 @@ int kvm_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	int ref = 0;
 	unsigned long old, *rmapp;
 
+	if (kvmppc_is_guest_secure(kvm))
+		return ref;
+
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && pte_young(*ptep)) {
 		old = kvmppc_radix_update_pte(kvm, ptep, _PAGE_ACCESSED, 0,
@@ -1021,6 +1034,9 @@ int kvm_test_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	unsigned int shift;
 	int ref = 0;
 
+	if (kvmppc_is_guest_secure(kvm))
+		return ref;
+
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && pte_young(*ptep))
 		ref = 1;
@@ -1038,6 +1054,9 @@ static int kvm_radix_test_clear_dirty(struct kvm *kvm,
 	int ret = 0;
 	unsigned long old, *rmapp;
 
+	if (kvmppc_is_guest_secure(kvm))
+		return ret;
+
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && pte_dirty(*ptep)) {
 		ret = 1;
@@ -1090,6 +1109,9 @@ void kvmppc_radix_flush_memslot(struct kvm *kvm,
 	unsigned long gpa;
 	unsigned int shift;
 
+	if (kvmppc_is_guest_secure(kvm))
+		return;
+
 	gpa = memslot->base_gfn << PAGE_SHIFT;
 	spin_lock(&kvm->mmu_lock);
 	for (n = memslot->npages; n; --n) {
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index 55bab9c4e60a..9e6c88de456f 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -62,6 +62,11 @@ struct kvmppc_hmm_migrate_args {
 	unsigned long page_shift;
 };
 
+bool kvmppc_is_guest_secure(struct kvm *kvm)
+{
+	return !!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE);
+}
+
 unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
 {
 	struct kvm_memslots *slots;
@@ -494,6 +499,21 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
 	return ret;
 }
 
+int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gpa)
+{
+	unsigned long pfn;
+	int ret;
+
+	pfn = gfn_to_pfn(kvm, gpa >> PAGE_SHIFT);
+	if (is_error_noslot_pfn(pfn))
+		return -EFAULT;
+
+	ret = uv_page_in(kvm->arch.lpid, pfn << PAGE_SHIFT, gpa, 0, PAGE_SHIFT);
+	kvm_release_pfn_clean(pfn);
+
+	return (ret == U_SUCCESS) ? RESUME_GUEST : -EFAULT;
+}
+
 static u64 kvmppc_get_secmem_size(void)
 {
 	struct device_node *np;
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC PATCH v5 6/7] kvmppc: Support reset of secure guest
@ 2019-07-09 10:25   ` Bharata B Rao
  0 siblings, 0 replies; 58+ messages in thread
From: Bharata B Rao @ 2019-07-09 10:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Bharata B Rao

Add support for reset of secure guest via a new ioctl KVM_PPC_SVM_OFF.
This ioctl will be issued by QEMU during reset and includes the
following steps:

- Ask UV to terminate the guest via UV_SVM_TERMINATE ucall
- Unpin the VPA pages so that they can be migrated back to secure
  side when guest becomes secure again. This is required because
  pinned pages can't be migrated.
- Reinitialize the guest's partition scoped page tables. These are
  freed when the guest becomes secure (H_SVM_INIT_DONE).
- Release all HMM pages of the secure guest.

After these steps, the guest is ready to issue the UV_ESM call once
again to switch to secure mode.
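
For context, a hypothetical QEMU-side caller (not part of this series)
would invoke the new ioctl roughly as below, assuming kernel headers
with this patch applied:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Issue KVM_PPC_SVM_OFF on the VM fd while handling machine reset.
 * Returns 0 for normal guests too, since the ioctl is a no-op there. */
static int svm_off_on_reset(int vm_fd)
{
	return ioctl(vm_fd, KVM_PPC_SVM_OFF);
}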

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
	[Implementation of uv_svm_terminate() and its call from
	guest shutdown path]
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
	[Unpinning of VPA pages]
---
 Documentation/virtual/kvm/api.txt         | 19 ++++++
 arch/powerpc/include/asm/kvm_book3s_hmm.h |  7 +++
 arch/powerpc/include/asm/kvm_ppc.h        |  2 +
 arch/powerpc/include/asm/ultravisor-api.h |  1 +
 arch/powerpc/include/asm/ultravisor.h     |  7 +++
 arch/powerpc/kvm/book3s_hv.c              | 70 +++++++++++++++++++++++
 arch/powerpc/kvm/book3s_hv_hmm.c          | 60 +++++++++++++++++++
 arch/powerpc/kvm/powerpc.c                | 12 ++++
 include/uapi/linux/kvm.h                  |  1 +
 tools/include/uapi/linux/kvm.h            |  1 +
 10 files changed, 180 insertions(+)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index ba6c42c576dd..c89c24ad86ed 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -4065,6 +4065,25 @@ KVM_ARM_VCPU_FINALIZE call.
 See KVM_ARM_VCPU_INIT for details of vcpu features that require finalization
 using this ioctl.
 
+4.120 KVM_PPC_SVM_OFF
+
+Capability: basic
+Architectures: powerpc
+Type: vm ioctl
+Parameters: none
+Returns: 0 on successful completion,
+Errors:
+  EINVAL:    if ultravisor failed to terminate the secure guest
+  ENOMEM:    if hypervisor failed to allocate new radix page tables for guest
+
+This ioctl is used to turn off the secure mode of the guest or transition
+the guest from secure mode to normal mode. This is invoked when the guest
+is reset. This has no effect if called for a normal guest.
+
+This ioctl issues an ultravisor call to terminate the secure guest,
+unpins the VPA pages, reinitializes the guest's partition scoped page
+tables and releases all the HMM pages associated with this guest.
+
 5. The kvm_run structure
 ------------------------
 
diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h b/arch/powerpc/include/asm/kvm_book3s_hmm.h
index 8c7aacabb2e0..cd2b0f927766 100644
--- a/arch/powerpc/include/asm/kvm_book3s_hmm.h
+++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
@@ -13,6 +13,8 @@ extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
 					  unsigned long page_shift);
 extern unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
 extern unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
+extern void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm,
+		struct kvm_memslots *slots);
 #else
 static inline unsigned long
 kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gra,
@@ -37,5 +39,10 @@ static inline unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
 {
 	return H_UNSUPPORTED;
 }
+
+static inline void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm,
+		struct kvm_memslots *slots)
+{
+}
 #endif /* CONFIG_PPC_UV */
 #endif /* __POWERPC_KVM_PPC_HMM_H__ */
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index bc892380e6cd..d80ece28d65d 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -188,6 +188,7 @@ extern void kvm_spapr_tce_release_iommu_group(struct kvm *kvm,
 extern int kvmppc_switch_mmu_to_hpt(struct kvm *kvm);
 extern int kvmppc_switch_mmu_to_radix(struct kvm *kvm);
 extern void kvmppc_setup_partition_table(struct kvm *kvm);
+extern int kvmppc_reinit_partition_table(struct kvm *kvm);
 
 extern long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
 				struct kvm_create_spapr_tce_64 *args);
@@ -332,6 +333,7 @@ struct kvmppc_ops {
 			       int size);
 	int (*store_to_eaddr)(struct kvm_vcpu *vcpu, ulong *eaddr, void *ptr,
 			      int size);
+	int (*svm_off)(struct kvm *kvm);
 };
 
 extern struct kvmppc_ops *kvmppc_hv_ops;
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 9f5510b55892..c8180427fa01 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -25,5 +25,6 @@
 #define UV_PAGE_IN			0xF128
 #define UV_PAGE_OUT			0xF12C
 #define UV_PAGE_INVAL			0xF138
+#define UV_SVM_TERMINATE		0xF13C
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index f4f674794b35..a5e4516c8ddb 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -84,6 +84,13 @@ static inline int uv_page_inval(u64 lpid, u64 gpa, u64 page_shift)
 
 	return ucall(UV_PAGE_INVAL, retbuf, lpid, gpa, page_shift);
 }
+
+static inline int uv_svm_terminate(u64 lpid)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_SVM_TERMINATE, retbuf, lpid);
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 7cbb5edaed01..41b0fff199d3 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2437,6 +2437,15 @@ static void unpin_vpa(struct kvm *kvm, struct kvmppc_vpa *vpa)
 					vpa->dirty);
 }
 
+static void unpin_vpa_reset(struct kvm *kvm, struct kvmppc_vpa *vpa)
+{
+	unpin_vpa(kvm, vpa);
+	vpa->gpa = 0;
+	vpa->pinned_addr = NULL;
+	vpa->dirty = false;
+	vpa->update_pending = 0;
+}
+
 static void kvmppc_core_vcpu_free_hv(struct kvm_vcpu *vcpu)
 {
 	spin_lock(&vcpu->arch.vpa_update_lock);
@@ -4577,6 +4586,22 @@ void kvmppc_setup_partition_table(struct kvm *kvm)
 	kvmhv_set_ptbl_entry(kvm->arch.lpid, dw0, dw1);
 }
 
+/*
+ * Called from KVM_PPC_SVM_OFF ioctl at guest reset time when secure
+ * guest is converted back to normal guest.
+ */
+int kvmppc_reinit_partition_table(struct kvm *kvm)
+{
+	int ret;
+
+	ret = kvmppc_init_vm_radix(kvm);
+	if (ret)
+		return ret;
+
+	kvmppc_setup_partition_table(kvm);
+	return 0;
+}
+
 /*
  * Set up HPT (hashed page table) and RMA (real-mode area).
  * Must be called with kvm->lock held.
@@ -4956,6 +4981,7 @@ static void kvmppc_core_destroy_vm_hv(struct kvm *kvm)
 		if (nesting_enabled(kvm))
 			kvmhv_release_all_nested(kvm);
 		kvm->arch.process_table = 0;
+		uv_svm_terminate(kvm->arch.lpid);
 		kvmhv_set_ptbl_entry(kvm->arch.lpid, 0, 0);
 	}
 	kvmppc_free_lpid(kvm->arch.lpid);
@@ -5397,6 +5423,49 @@ static int kvmhv_store_to_eaddr(struct kvm_vcpu *vcpu, ulong *eaddr, void *ptr,
 	return rc;
 }
 
+/*
+ * IOCTL handler to turn off secure mode of guest
+ *
+ * - Issue ucall to terminate the guest on the UV side
+ * - Unpin the VPA pages (Enables these pages to be migrated back
+ *   when VM becomes secure again)
+ * - Recreate partition table as the guest is transitioning back to
+ *   normal mode
+ * - Release all HMM pages
+ */
+static int kvmhv_svm_off(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu;
+	int ret = 0;
+	int i;
+
+	if (kvmppc_is_guest_secure(kvm)) {
+		ret = uv_svm_terminate(kvm->arch.lpid);
+		if (ret != U_SUCCESS) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		kvm_for_each_vcpu(i, vcpu, kvm) {
+			spin_lock(&vcpu->arch.vpa_update_lock);
+			unpin_vpa_reset(kvm, &vcpu->arch.dtl);
+			unpin_vpa_reset(kvm, &vcpu->arch.slb_shadow);
+			unpin_vpa_reset(kvm, &vcpu->arch.vpa);
+			spin_unlock(&vcpu->arch.vpa_update_lock);
+		}
+
+		ret = kvmppc_reinit_partition_table(kvm);
+		if (ret)
+			goto out;
+		kvm->arch.secure_guest = 0;
+		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
+			kvmppc_hmm_free_memslot_pfns(kvm,
+						     __kvm_memslots(kvm, i));
+	}
+out:
+	return ret;
+}
+
 static struct kvmppc_ops kvm_ops_hv = {
 	.get_sregs = kvm_arch_vcpu_ioctl_get_sregs_hv,
 	.set_sregs = kvm_arch_vcpu_ioctl_set_sregs_hv,
@@ -5439,6 +5508,7 @@ static struct kvmppc_ops kvm_ops_hv = {
 	.enable_nested = kvmhv_enable_nested,
 	.load_from_eaddr = kvmhv_load_from_eaddr,
 	.store_to_eaddr = kvmhv_store_to_eaddr,
+	.svm_off = kvmhv_svm_off,
 };
 
 static int kvm_init_subcore_bitmap(void)
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index 9e6c88de456f..165926c069ba 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -38,6 +38,8 @@
 #include <linux/kvm_host.h>
 #include <linux/sched/mm.h>
 #include <asm/ultravisor.h>
+#include <asm/kvm_ppc.h>
+#include <asm/kvm_book3s.h>
 
 struct kvmppc_hmm_device {
 	struct hmm_device *device;
@@ -98,6 +100,12 @@ unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
 		return H_UNSUPPORTED;
 
 	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE;
+	if (kvm_is_radix(kvm)) {
+		pr_info("LPID %d went secure, freeing HV side radix pgtables\n",
+			kvm->arch.lpid);
+		kvmppc_free_radix(kvm);
+	}
+
 	return H_SUCCESS;
 }
 
@@ -113,6 +121,58 @@ static inline bool kvmppc_is_hmm_pfn(unsigned long pfn)
 	return !!(pfn & KVMPPC_PFN_HMM);
 }
 
+/*
+ * Drop the HMM pages that we maintain for the secure guest
+ *
+ * We mark these pages so that UV_PAGE_OUT is skipped when an HMM
+ * fault occurs on them. We then touch each page to force an HMM
+ * fault; the resulting fault migration replaces the HMM PTEs in the
+ * QEMU page tables with normal PTEs backed by newly allocated pages.
+ */
+static void kvmppc_hmm_drop_pages(struct kvm_memory_slot *free, struct kvm *kvm)
+{
+	int i;
+	struct kvmppc_hmm_page_pvt *pvt;
+	unsigned long pfn;
+
+	for (i = 0; i < free->npages; i++) {
+		unsigned long *rmap = &free->arch.rmap[i];
+		struct page *hmm_page;
+
+		if (kvmppc_is_hmm_pfn(*rmap)) {
+			hmm_page = pfn_to_page(*rmap & ~KVMPPC_PFN_HMM);
+			pvt = (struct kvmppc_hmm_page_pvt *)
+				hmm_devmem_page_get_drvdata(hmm_page);
+			pvt->skip_page_out = true;
+
+			pfn = gfn_to_pfn(kvm, pvt->gpa >> PAGE_SHIFT);
+			if (is_error_noslot_pfn(pfn))
+				continue;
+			kvm_release_pfn_clean(pfn);
+		}
+	}
+}
+
+/*
+ * Called from the KVM_PPC_SVM_OFF ioctl when a secure guest is reset
+ *
+ * UV has already cleaned up the guest; here we release any HMM pages
+ * that we maintain.
+ */
+void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm, struct kvm_memslots *slots)
+{
+	struct kvm_memory_slot *memslot;
+	int srcu_idx;
+
+	if (!slots)
+		return;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	kvm_for_each_memslot(memslot, slots)
+		kvmppc_hmm_drop_pages(memslot, kvm);
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+}
+
 /*
  * Get a free HMM PFN from the pool
  *
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 3393b166817a..0c5fae004adf 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -42,6 +42,8 @@
 #include <asm/hvcall.h>
 #include <asm/plpar_wrappers.h>
 #endif
+#include <asm/ultravisor.h>
+#include <asm/kvm_host.h>
 
 #include "timing.h"
 #include "irq.h"
@@ -2423,6 +2425,16 @@ long kvm_arch_vm_ioctl(struct file *filp,
 			r = -EFAULT;
 		break;
 	}
+	case KVM_PPC_SVM_OFF: {
+		struct kvm *kvm = filp->private_data;
+
+		r = 0;
+		if (!kvm->arch.kvm_ops->svm_off)
+			goto out;
+
+		r = kvm->arch.kvm_ops->svm_off(kvm);
+		break;
+	}
 	default: {
 		struct kvm *kvm = filp->private_data;
 		r = kvm->arch.kvm_ops->arch_vm_ioctl(filp, ioctl, arg);
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 2fe12b40d503..c1dbdd428025 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1327,6 +1327,7 @@ struct kvm_s390_ucas_mapping {
 #define KVM_PPC_GET_RMMU_INFO	  _IOW(KVMIO,  0xb0, struct kvm_ppc_rmmu_info)
 /* Available with KVM_CAP_PPC_GET_CPU_CHAR */
 #define KVM_PPC_GET_CPU_CHAR	  _IOR(KVMIO,  0xb1, struct kvm_ppc_cpu_char)
+#define KVM_PPC_SVM_OFF		  _IO(KVMIO,  0xb2)
 
 /* ioctl for vm fd */
 #define KVM_CREATE_DEVICE	  _IOWR(KVMIO,  0xe0, struct kvm_create_device)
diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index 6d4ea4b6c922..03c89f4a01b2 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -1319,6 +1319,7 @@ struct kvm_s390_ucas_mapping {
 #define KVM_PPC_GET_RMMU_INFO	  _IOW(KVMIO,  0xb0, struct kvm_ppc_rmmu_info)
 /* Available with KVM_CAP_PPC_GET_CPU_CHAR */
 #define KVM_PPC_GET_CPU_CHAR	  _IOR(KVMIO,  0xb1, struct kvm_ppc_cpu_char)
+#define KVM_PPC_SVM_OFF		  _IO(KVMIO,  0xb2)
 
 /* ioctl for vm fd */
 #define KVM_CREATE_DEVICE	  _IOWR(KVMIO,  0xe0, struct kvm_create_device)
-- 
2.21.0
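
For illustration, here is a minimal userspace sketch (not part of this
series) of how a VMM could drive the new KVM_PPC_SVM_OFF vm ioctl at guest
reset. The vm_fd name and the error handling are assumptions made for this
example; per the handler added to kvm_arch_vm_ioctl() above, the ioctl is
issued on the VM file descriptor and takes no argument.

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Illustrative only: ask KVM to convert a (formerly) secure guest back
 * to a normal guest at reset time. Returns 0 on success, -1 with errno
 * set on failure.
 */
static int svm_off_on_reset(int vm_fd)
{
	if (ioctl(vm_fd, KVM_PPC_SVM_OFF, 0) < 0) {
		perror("KVM_PPC_SVM_OFF");
		return -1;
	}
	return 0;
}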

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v5 7/7] KVM: PPC: Ultravisor: Add PPC_UV config option
@ 2019-07-09 10:25   ` Bharata B Rao
  0 siblings, 0 replies; 58+ messages in thread
From: Bharata B Rao @ 2019-07-09 10:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: kvm-ppc, linux-mm, paulus, aneesh.kumar, jglisse, linuxram,
	sukadev, cclaudio, Anshuman Khandual, Bharata B Rao

From: Anshuman Khandual <khandual@linux.vnet.ibm.com>

CONFIG_PPC_UV adds support for ultravisor.

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
[ Update config help and commit message ]
Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
---
 arch/powerpc/Kconfig | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index f0e5b38d52e8..20c6c213d2be 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -440,6 +440,26 @@ config PPC_TRANSACTIONAL_MEM
        ---help---
          Support user-mode Transactional Memory on POWERPC.
 
+config PPC_UV
+	bool "Ultravisor support"
+	depends on KVM_BOOK3S_HV_POSSIBLE
+	select HMM_MIRROR
+	select HMM
+	select ZONE_DEVICE
+	select MIGRATE_VMA_HELPER
+	select DEV_PAGEMAP_OPS
+	select DEVICE_PRIVATE
+	select MEMORY_HOTPLUG
+	select MEMORY_HOTREMOVE
+	default n
+	help
+	  This option paravirtualizes the kernel to run on POWER platforms that
+	  support the Protected Execution Facility (PEF). On such platforms,
+	  the ultravisor firmware runs at a privilege level above the
+	  hypervisor.
+
+	  If unsure, say "N".
+
 config LD_HEAD_STUB_CATCH
 	bool "Reserve 256 bytes to cope with linker stubs in HEAD text" if EXPERT
 	depends on PPC64
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 1/7] kvmppc: HMM backend driver to manage pages of secure guest
  2019-07-09 10:25   ` Bharata B Rao
@ 2019-07-09 18:55     ` janani
  -1 siblings, 0 replies; 58+ messages in thread
From: janani @ 2019-07-09 18:55 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linuxppc-dev, linuxram, cclaudio, kvm-ppc, linux-mm, jglisse,
	aneesh.kumar, paulus, sukadev, Linuxppc-dev

On 2019-07-09 05:25, Bharata B Rao wrote:
> HMM driver for KVM PPC to manage page transitions of
> secure guest via H_SVM_PAGE_IN and H_SVM_PAGE_OUT hcalls.
> 
> H_SVM_PAGE_IN: Move the content of a normal page to secure page
> H_SVM_PAGE_OUT: Move the content of a secure page to normal page
> 
> Private ZONE_DEVICE memory equal to the amount of secure memory
> available in the platform for running secure guests is created
> via a HMM device. The movement of pages between normal and secure
> memory is done by ->alloc_and_copy() callback routine of migrate_vma().
> 
> Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
  Reviewed-by: Janani Janakiraman <janani@linux.ibm.com>
> ---
>  arch/powerpc/include/asm/hvcall.h         |   4 +
>  arch/powerpc/include/asm/kvm_book3s_hmm.h |  29 ++
>  arch/powerpc/include/asm/kvm_host.h       |  12 +
>  arch/powerpc/include/asm/ultravisor-api.h |   2 +
>  arch/powerpc/include/asm/ultravisor.h     |  17 +
>  arch/powerpc/kvm/Makefile                 |   3 +
>  arch/powerpc/kvm/book3s_hv.c              |  19 +
>  arch/powerpc/kvm/book3s_hv_hmm.c          | 482 ++++++++++++++++++++++
>  8 files changed, 568 insertions(+)
>  create mode 100644 arch/powerpc/include/asm/kvm_book3s_hmm.h
>  create mode 100644 arch/powerpc/kvm/book3s_hv_hmm.c
> 
> diff --git a/arch/powerpc/include/asm/hvcall.h
> b/arch/powerpc/include/asm/hvcall.h
> index 463c63a9fcf1..2f6b952deb0f 100644
> --- a/arch/powerpc/include/asm/hvcall.h
> +++ b/arch/powerpc/include/asm/hvcall.h
> @@ -337,6 +337,10 @@
>  #define H_TLB_INVALIDATE	0xF808
>  #define H_COPY_TOFROM_GUEST	0xF80C
> 
> +/* Platform-specific hcalls used by the Ultravisor */
> +#define H_SVM_PAGE_IN		0xEF00
> +#define H_SVM_PAGE_OUT		0xEF04
> +
>  /* Values for 2nd argument to H_SET_MODE */
>  #define H_SET_MODE_RESOURCE_SET_CIABR		1
>  #define H_SET_MODE_RESOURCE_SET_DAWR		2
> diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h
> b/arch/powerpc/include/asm/kvm_book3s_hmm.h
> new file mode 100644
> index 000000000000..21f3de5f2acb
> --- /dev/null
> +++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
> @@ -0,0 +1,29 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __POWERPC_KVM_PPC_HMM_H__
> +#define __POWERPC_KVM_PPC_HMM_H__
> +
> +#ifdef CONFIG_PPC_UV
> +extern unsigned long kvmppc_h_svm_page_in(struct kvm *kvm,
> +					  unsigned long gra,
> +					  unsigned long flags,
> +					  unsigned long page_shift);
> +extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
> +					  unsigned long gra,
> +					  unsigned long flags,
> +					  unsigned long page_shift);
> +#else
> +static inline unsigned long
> +kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gra,
> +		     unsigned long flags, unsigned long page_shift)
> +{
> +	return H_UNSUPPORTED;
> +}
> +
> +static inline unsigned long
> +kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gra,
> +		      unsigned long flags, unsigned long page_shift)
> +{
> +	return H_UNSUPPORTED;
> +}
> +#endif /* CONFIG_PPC_UV */
> +#endif /* __POWERPC_KVM_PPC_HMM_H__ */
> diff --git a/arch/powerpc/include/asm/kvm_host.h
> b/arch/powerpc/include/asm/kvm_host.h
> index 184becb62ea4..ac1a101beb07 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -858,4 +858,16 @@ static inline void kvm_arch_vcpu_blocking(struct
> kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) 
> {}
> 
> +#ifdef CONFIG_PPC_UV
> +extern int kvmppc_hmm_init(void);
> +extern void kvmppc_hmm_free(void);
> +#else
> +static inline int kvmppc_hmm_init(void)
> +{
> +	return 0;
> +}
> +
> +static inline void kvmppc_hmm_free(void) {}
> +#endif /* CONFIG_PPC_UV */
> +
>  #endif /* __POWERPC_KVM_HOST_H__ */
> diff --git a/arch/powerpc/include/asm/ultravisor-api.h
> b/arch/powerpc/include/asm/ultravisor-api.h
> index 7c4d0b4ced12..f1c5800ac705 100644
> --- a/arch/powerpc/include/asm/ultravisor-api.h
> +++ b/arch/powerpc/include/asm/ultravisor-api.h
> @@ -20,5 +20,7 @@
>  /* opcodes */
>  #define UV_WRITE_PATE			0xF104
>  #define UV_RETURN			0xF11C
> +#define UV_PAGE_IN			0xF128
> +#define UV_PAGE_OUT			0xF12C
> 
>  #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
> diff --git a/arch/powerpc/include/asm/ultravisor.h
> b/arch/powerpc/include/asm/ultravisor.h
> index 996c1efd6c6d..16f8e0e8ec3f 100644
> --- a/arch/powerpc/include/asm/ultravisor.h
> +++ b/arch/powerpc/include/asm/ultravisor.h
> @@ -44,6 +44,23 @@ static inline int uv_register_pate(u64 lpid, u64
> dw0, u64 dw1)
>  	return ucall(UV_WRITE_PATE, retbuf, lpid, dw0, dw1);
>  }
> 
> +static inline int uv_page_in(u64 lpid, u64 src_ra, u64 dst_gpa, u64 
> flags,
> +			     u64 page_shift)
> +{
> +	unsigned long retbuf[UCALL_BUFSIZE];
> +
> +	return ucall(UV_PAGE_IN, retbuf, lpid, src_ra, dst_gpa, flags,
> +		     page_shift);
> +}
> +
> +static inline int uv_page_out(u64 lpid, u64 dst_ra, u64 src_gpa, u64 
> flags,
> +			      u64 page_shift)
> +{
> +	unsigned long retbuf[UCALL_BUFSIZE];
> +
> +	return ucall(UV_PAGE_OUT, retbuf, lpid, dst_ra, src_gpa, flags,
> +		     page_shift);
> +}
>  #endif /* !__ASSEMBLY__ */
> 
>  #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
> diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
> index 4c67cc79de7c..d1a79fb5f806 100644
> --- a/arch/powerpc/kvm/Makefile
> +++ b/arch/powerpc/kvm/Makefile
> @@ -71,6 +71,9 @@ kvm-hv-y += \
>  	book3s_64_mmu_radix.o \
>  	book3s_hv_nested.o
> 
> +kvm-hv-$(CONFIG_PPC_UV) += \
> +	book3s_hv_hmm.o
> +
>  kvm-hv-$(CONFIG_PPC_TRANSACTIONAL_MEM) += \
>  	book3s_hv_tm.o
> 
> diff --git a/arch/powerpc/kvm/book3s_hv.c 
> b/arch/powerpc/kvm/book3s_hv.c
> index a104743291a9..8ee66aa0da58 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -75,6 +75,8 @@
>  #include <asm/xics.h>
>  #include <asm/xive.h>
>  #include <asm/hw_breakpoint.h>
> +#include <asm/kvm_host.h>
> +#include <asm/kvm_book3s_hmm.h>
> 
>  #include "book3s.h"
> 
> @@ -1083,6 +1085,18 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu 
> *vcpu)
>  					 kvmppc_get_gpr(vcpu, 5),
>  					 kvmppc_get_gpr(vcpu, 6));
>  		break;
> +	case H_SVM_PAGE_IN:
> +		ret = kvmppc_h_svm_page_in(vcpu->kvm,
> +					   kvmppc_get_gpr(vcpu, 4),
> +					   kvmppc_get_gpr(vcpu, 5),
> +					   kvmppc_get_gpr(vcpu, 6));
> +		break;
> +	case H_SVM_PAGE_OUT:
> +		ret = kvmppc_h_svm_page_out(vcpu->kvm,
> +					    kvmppc_get_gpr(vcpu, 4),
> +					    kvmppc_get_gpr(vcpu, 5),
> +					    kvmppc_get_gpr(vcpu, 6));
> +		break;
>  	default:
>  		return RESUME_HOST;
>  	}
> @@ -5501,11 +5515,16 @@ static int kvmppc_book3s_init_hv(void)
>  			no_mixing_hpt_and_radix = true;
>  	}
> 
> +	r = kvmppc_hmm_init();
> +	if (r < 0)
> +		pr_err("KVM-HV: kvmppc_hmm_init failed %d\n", r);
> +
>  	return r;
>  }
> 
>  static void kvmppc_book3s_exit_hv(void)
>  {
> +	kvmppc_hmm_free();
>  	kvmppc_free_host_rm_ops();
>  	if (kvmppc_radix_possible())
>  		kvmppc_radix_exit();
> diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c 
> b/arch/powerpc/kvm/book3s_hv_hmm.c
> new file mode 100644
> index 000000000000..cd34323888b6
> --- /dev/null
> +++ b/arch/powerpc/kvm/book3s_hv_hmm.c
> @@ -0,0 +1,482 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * HMM driver to manage page migration between normal and secure
> + * memory.
> + *
> + * Based on Jérôme Glisse's HMM dummy driver.
> + *
> + * Copyright 2018 Bharata B Rao, IBM Corp. <bharata@linux.ibm.com>
> + */
> +
> +/*
> + * A pseries guest can be run as a secure guest on Ultravisor-enabled
> + * POWER platforms. On such platforms, this driver will be used to 
> manage
> + * the movement of guest pages between the normal memory managed by
> + * hypervisor (HV) and secure memory managed by Ultravisor (UV).
> + *
> + * Private ZONE_DEVICE memory equal to the amount of secure memory
> + * available in the platform for running secure guests is created
> + * via a HMM device. The movement of pages between normal and secure
> + * memory is done by ->alloc_and_copy() callback routine of 
> migrate_vma().
> + *
> + * The page-in or page-out requests from UV will come to HV as hcalls 
> and
> + * HV will call back into UV via uvcalls to satisfy these page 
> requests.
> + *
> + * For each page that gets moved into secure memory, a HMM PFN is used
> + * on the HV side and HMM migration PTE corresponding to that PFN 
> would be
> + * populated in the QEMU page tables. HMM PFNs are stored in the rmap
> + * array. Whenever a guest page becomes secure, HMM PFN allocated for
> + * the same will be populated in the corresponding slot in the rmap
> + * array. The overloading of rmap array's usage which otherwise is
> + * used primarily by HPT guests means that this feature (secure
> + * guest on PEF platforms) is available only for Radix MMU guests.
> + * Also the same rmap array is used differently by nested HPT guests.
> + * Hence a secure guest can't have nested guests.
> + */
> +
> +#include <linux/hmm.h>
> +#include <linux/kvm_host.h>
> +#include <linux/sched/mm.h>
> +#include <asm/ultravisor.h>
> +
> +struct kvmppc_hmm_device {
> +	struct hmm_device *device;
> +	struct hmm_devmem *devmem;
> +	unsigned long *pfn_bitmap;
> +};
> +
> +static struct kvmppc_hmm_device kvmppc_hmm;
> +spinlock_t kvmppc_hmm_lock;
> +
> +struct kvmppc_hmm_page_pvt {
> +	unsigned long *rmap;
> +	unsigned int lpid;
> +	unsigned long gpa;
> +};
> +
> +struct kvmppc_hmm_migrate_args {
> +	unsigned long *rmap;
> +	unsigned int lpid;
> +	unsigned long gpa;
> +	unsigned long page_shift;
> +};
> +
> +/*
> + * Bits 60:56 in the rmap entry will be used to identify the
> + * different uses/functions of rmap. This definition will move
> + * to a proper header when all other functions are defined.
> + */
> +#define KVMPPC_PFN_HMM		(0x2ULL << 56)
> +
> +static inline bool kvmppc_is_hmm_pfn(unsigned long pfn)
> +{
> +	return !!(pfn & KVMPPC_PFN_HMM);
> +}
> +
> +/*
> + * Get a free HMM PFN from the pool
> + *
> + * Called when a normal page is moved to secure memory (UV_PAGE_IN). 
> HMM
> + * PFN will be used to keep track of the secure page on HV side.
> + *
> + * @rmap here is the slot in the rmap array that corresponds to @gpa.
> + * Thus a non-zero rmap entry indicates that the corresponding guest
> + * page has become secure and is not mapped on the HV side.
> + *
> + * NOTE: In this and subsequent functions, we pass around and access
> + * individual elements of kvm_memory_slot->arch.rmap[] without any
> + * protection. Should we use lock_rmap() here?
> + */
> +static struct page *kvmppc_hmm_get_page(unsigned long *rmap,
> +					unsigned long gpa, unsigned int lpid)
> +{
> +	struct page *dpage = NULL;
> +	unsigned long bit, hmm_pfn;
> +	unsigned long nr_pfns = kvmppc_hmm.devmem->pfn_last -
> +				kvmppc_hmm.devmem->pfn_first;
> +	unsigned long flags;
> +	struct kvmppc_hmm_page_pvt *pvt;
> +
> +	if (kvmppc_is_hmm_pfn(*rmap))
> +		return NULL;
> +
> +	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
> +	bit = find_first_zero_bit(kvmppc_hmm.pfn_bitmap, nr_pfns);
> +	if (bit >= nr_pfns)
> +		goto out;
> +
> +	bitmap_set(kvmppc_hmm.pfn_bitmap, bit, 1);
> +	hmm_pfn = bit + kvmppc_hmm.devmem->pfn_first;
> +	dpage = pfn_to_page(hmm_pfn);
> +
> +	if (!trylock_page(dpage))
> +		goto out_clear;
> +
> +	*rmap = hmm_pfn | KVMPPC_PFN_HMM;
> +	pvt = kzalloc(sizeof(*pvt), GFP_ATOMIC);
> +	if (!pvt)
> +		goto out_unlock;
> +	pvt->rmap = rmap;
> +	pvt->gpa = gpa;
> +	pvt->lpid = lpid;
> +	hmm_devmem_page_set_drvdata(dpage, (unsigned long)pvt);
> +	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
> +
> +	get_page(dpage);
> +	return dpage;
> +
> +out_unlock:
> +	unlock_page(dpage);
> +out_clear:
> +	bitmap_clear(kvmppc_hmm.pfn_bitmap,
> +		     hmm_pfn - kvmppc_hmm.devmem->pfn_first, 1);
> +out:
> +	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
> +	return NULL;
> +}
> +
> +/*
> + * Release the HMM PFN back to the pool
> + *
> + * Called when secure page becomes a normal page during UV_PAGE_OUT.
> + */
> +static void kvmppc_hmm_put_page(struct page *page)
> +{
> +	unsigned long pfn = page_to_pfn(page);
> +	unsigned long flags;
> +	struct kvmppc_hmm_page_pvt *pvt;
> +
> +	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
> +	pvt = (struct kvmppc_hmm_page_pvt 
> *)hmm_devmem_page_get_drvdata(page);
> +	hmm_devmem_page_set_drvdata(page, 0);
> +
> +	bitmap_clear(kvmppc_hmm.pfn_bitmap,
> +		     pfn - kvmppc_hmm.devmem->pfn_first, 1);
> +	*(pvt->rmap) = 0;
> +	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
> +	kfree(pvt);
> +}
> +
> +/*
> + * migrate_vma() callback to move page from normal memory to secure 
> memory.
> + *
> + * We don't capture the return value of uv_page_in() here because when
> + * UV asks for a page and then fails to copy it over, we don't care.
> + */
> +static void
> +kvmppc_hmm_migrate_alloc_and_copy(struct vm_area_struct *vma,
> +				  const unsigned long *src_pfn,
> +				  unsigned long *dst_pfn,
> +				  unsigned long start,
> +				  unsigned long end,
> +				  void *private)
> +{
> +	struct kvmppc_hmm_migrate_args *args = private;
> +	struct page *spage = migrate_pfn_to_page(*src_pfn);
> +	unsigned long pfn = *src_pfn >> MIGRATE_PFN_SHIFT;
> +	struct page *dpage;
> +
> +	*dst_pfn = 0;
> +	if (!(*src_pfn & MIGRATE_PFN_MIGRATE))
> +		return;
> +
> +	dpage = kvmppc_hmm_get_page(args->rmap, args->gpa, args->lpid);
> +	if (!dpage)
> +		return;
> +
> +	if (spage)
> +		uv_page_in(args->lpid, pfn << args->page_shift,
> +			   args->gpa, 0, args->page_shift);
> +
> +	*dst_pfn = migrate_pfn(page_to_pfn(dpage)) |
> +		    MIGRATE_PFN_DEVICE | MIGRATE_PFN_LOCKED;
> +}
> +
> +/*
> + * This migrate_vma() callback is typically used to update device
> + * page tables after successful migration. We have nothing to do here.
> + *
> + * Also as we don't care if UV successfully copied over the page in
> + * kvmppc_hmm_migrate_alloc_and_copy(), we don't bother to check
> + * dst_pfn for any errors here.
> + */
> +static void
> +kvmppc_hmm_migrate_finalize_and_map(struct vm_area_struct *vma,
> +				    const unsigned long *src_pfn,
> +				    const unsigned long *dst_pfn,
> +				    unsigned long start,
> +				    unsigned long end,
> +				    void *private)
> +{
> +}
> +
> +static const struct migrate_vma_ops kvmppc_hmm_migrate_ops = {
> +	.alloc_and_copy = kvmppc_hmm_migrate_alloc_and_copy,
> +	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
> +};
> +
> +/*
> + * Move page from normal memory to secure memory.
> + */
> +unsigned long
> +kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
> +		     unsigned long flags, unsigned long page_shift)
> +{
> +	unsigned long addr, end;
> +	unsigned long src_pfn, dst_pfn;
> +	struct kvmppc_hmm_migrate_args args;
> +	struct vm_area_struct *vma;
> +	int srcu_idx;
> +	unsigned long gfn = gpa >> page_shift;
> +	struct kvm_memory_slot *slot;
> +	unsigned long *rmap;
> +	int ret = H_SUCCESS;
> +
> +	if (page_shift != PAGE_SHIFT)
> +		return H_P3;
> +
> +	if (flags)
> +		return H_P2;
> +
> +	down_read(&kvm->mm->mmap_sem);
> +	srcu_idx = srcu_read_lock(&kvm->srcu);
> +	slot = gfn_to_memslot(kvm, gfn);
> +	rmap = &slot->arch.rmap[gfn - slot->base_gfn];
> +	addr = gfn_to_hva(kvm, gpa >> page_shift);
> +	if (kvm_is_error_hva(addr)) {
> +		ret = H_PARAMETER;
> +		goto out;
> +	}
> +
> +	end = addr + (1UL << page_shift);
> +	vma = find_vma_intersection(kvm->mm, addr, end);
> +	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
> +		ret = H_PARAMETER;
> +		goto out;
> +	}
> +
> +	args.rmap = rmap;
> +	args.lpid = kvm->arch.lpid;
> +	args.gpa = gpa;
> +	args.page_shift = page_shift;
> +	ret = migrate_vma(&kvmppc_hmm_migrate_ops, vma, addr, end,
> +			  &src_pfn, &dst_pfn, &args);
> +	if (ret < 0)
> +		ret = H_PARAMETER;
> +out:
> +	srcu_read_unlock(&kvm->srcu, srcu_idx);
> +	up_read(&kvm->mm->mmap_sem);
> +	return ret;
> +}
> +
> +/*
> + * We drop the HMM PFN here, provision a new page and populate
> + * it in the QEMU page tables.
> + */
> +static void
> +kvmppc_hmm_fault_migrate_alloc_and_copy(struct vm_area_struct *vma,
> +					const unsigned long *src_pfn,
> +					unsigned long *dst_pfn,
> +					unsigned long start,
> +					unsigned long end,
> +					void *private)
> +{
> +	struct page *dpage, *spage;
> +	struct kvmppc_hmm_page_pvt *pvt;
> +	unsigned long pfn;
> +	int ret = U_SUCCESS;
> +
> +	*dst_pfn = MIGRATE_PFN_ERROR;
> +	spage = migrate_pfn_to_page(*src_pfn);
> +	if (!spage || !(*src_pfn & MIGRATE_PFN_MIGRATE))
> +		return;
> +	if (!is_zone_device_page(spage))
> +		return;
> +	dpage = hmm_vma_alloc_locked_page(vma, start);
> +	if (!dpage)
> +		return;
> +	pvt = (struct kvmppc_hmm_page_pvt *)
> +	       hmm_devmem_page_get_drvdata(spage);
> +
> +	pfn = page_to_pfn(dpage);
> +	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
> +			  pvt->gpa, 0, PAGE_SHIFT);
> +	if (ret == U_SUCCESS)
> +		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
> +}
> +
> +/*
> + * We have nothing to do here.
> + */
> +static void
> +kvmppc_hmm_fault_migrate_finalize_and_map(struct vm_area_struct *vma,
> +					  const unsigned long *src_pfn,
> +					  const unsigned long *dst_pfn,
> +					  unsigned long start,
> +					  unsigned long end,
> +					  void *private)
> +{
> +}
> +
> +static const struct migrate_vma_ops kvmppc_hmm_fault_migrate_ops = {
> +	.alloc_and_copy = kvmppc_hmm_fault_migrate_alloc_and_copy,
> +	.finalize_and_map = kvmppc_hmm_fault_migrate_finalize_and_map,
> +};
> +
> +/*
> + * Fault handler callback when HV touches any page that has been
> + * moved to secure memory, we ask UV to give back the page by
> + * issuing a UV_PAGE_OUT uvcall.
> + */
> +static vm_fault_t kvmppc_hmm_devmem_fault(struct hmm_devmem *devmem,
> +					  struct vm_area_struct *vma,
> +					  unsigned long addr,
> +					  const struct page *page,
> +					  unsigned int flags,
> +					  pmd_t *pmdp)
> +{
> +	unsigned long end = addr + PAGE_SIZE;
> +	unsigned long src_pfn, dst_pfn = 0;
> +
> +	if (migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
> +			&src_pfn, &dst_pfn, NULL))
> +		return VM_FAULT_SIGBUS;
> +	if (dst_pfn == MIGRATE_PFN_ERROR)
> +		return VM_FAULT_SIGBUS;
> +	return 0;
> +}
> +
> +static void kvmppc_hmm_devmem_free(struct hmm_devmem *devmem,
> +				   struct page *page)
> +{
> +	kvmppc_hmm_put_page(page);
> +}
> +
> +static const struct hmm_devmem_ops kvmppc_hmm_devmem_ops = {
> +	.free = kvmppc_hmm_devmem_free,
> +	.fault = kvmppc_hmm_devmem_fault,
> +};
> +
> +/*
> + * Move page from secure memory to normal memory.
> + */
> +unsigned long
> +kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
> +		      unsigned long flags, unsigned long page_shift)
> +{
> +	unsigned long addr, end;
> +	struct vm_area_struct *vma;
> +	unsigned long src_pfn, dst_pfn = 0;
> +	int srcu_idx;
> +	int ret = H_SUCCESS;
> +
> +	if (page_shift != PAGE_SHIFT)
> +		return H_P3;
> +
> +	if (flags)
> +		return H_P2;
> +
> +	down_read(&kvm->mm->mmap_sem);
> +	srcu_idx = srcu_read_lock(&kvm->srcu);
> +	addr = gfn_to_hva(kvm, gpa >> page_shift);
> +	if (kvm_is_error_hva(addr)) {
> +		ret = H_PARAMETER;
> +		goto out;
> +	}
> +
> +	end = addr + (1UL << page_shift);
> +	vma = find_vma_intersection(kvm->mm, addr, end);
> +	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
> +		ret = H_PARAMETER;
> +		goto out;
> +	}
> +	ret = migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
> +			  &src_pfn, &dst_pfn, NULL);
> +	if (ret < 0)
> +		ret = H_PARAMETER;
> +out:
> +	srcu_read_unlock(&kvm->srcu, srcu_idx);
> +	up_read(&kvm->mm->mmap_sem);
> +	return ret;
> +}
> +
> +static u64 kvmppc_get_secmem_size(void)
> +{
> +	struct device_node *np;
> +	int i, len;
> +	const __be32 *prop;
> +	u64 size = 0;
> +
> +	np = of_find_node_by_path("/ibm,ultravisor/ibm,uv-firmware");
> +	if (!np)
> +		goto out;
> +
> +	prop = of_get_property(np, "secure-memory-ranges", &len);
> +	if (!prop)
> +		goto out_put;
> +
> +	for (i = 0; i < len / (sizeof(*prop) * 4); i++)
> +		size += of_read_number(prop + (i * 4) + 2, 2);
> +
> +out_put:
> +	of_node_put(np);
> +out:
> +	return size;
> +}
> +
> +static int kvmppc_hmm_pages_init(void)
> +{
> +	unsigned long nr_pfns = kvmppc_hmm.devmem->pfn_last -
> +				kvmppc_hmm.devmem->pfn_first;
> +
> +	kvmppc_hmm.pfn_bitmap = kcalloc(BITS_TO_LONGS(nr_pfns),
> +					 sizeof(unsigned long), GFP_KERNEL);
> +	if (!kvmppc_hmm.pfn_bitmap)
> +		return -ENOMEM;
> +
> +	spin_lock_init(&kvmppc_hmm_lock);
> +
> +	return 0;
> +}
> +
> +int kvmppc_hmm_init(void)
> +{
> +	int ret = 0;
> +	unsigned long size;
> +
> +	size = kvmppc_get_secmem_size();
> +	if (!size) {
> +		ret = -ENODEV;
> +		goto out;
> +	}
> +
> +	kvmppc_hmm.device = hmm_device_new(NULL);
> +	if (IS_ERR(kvmppc_hmm.device)) {
> +		ret = PTR_ERR(kvmppc_hmm.device);
> +		goto out;
> +	}
> +
> +	kvmppc_hmm.devmem = hmm_devmem_add(&kvmppc_hmm_devmem_ops,
> +					   &kvmppc_hmm.device->device, size);
> +	if (IS_ERR(kvmppc_hmm.devmem)) {
> +		ret = PTR_ERR(kvmppc_hmm.devmem);
> +		goto out_device;
> +	}
> +	ret = kvmppc_hmm_pages_init();
> +	if (ret < 0)
> +		goto out_device;
> +
> +	pr_info("KVMPPC-HMM: Secure Memory size 0x%lx\n", size);
> +	return ret;
> +
> +out_device:
> +	hmm_device_put(kvmppc_hmm.device);
> +out:
> +	return ret;
> +}
> +
> +void kvmppc_hmm_free(void)
> +{
> +	kfree(kvmppc_hmm.pfn_bitmap);
> +	hmm_device_put(kvmppc_hmm.device);
> +}


^ permalink raw reply	[flat|nested] 58+ messages in thread
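
As a reference for the device-tree accounting in kvmppc_get_secmem_size()
quoted above, here is a small standalone sketch. The cell layout (two
address cells followed by two size cells per range) is inferred from the
loop over "secure-memory-ranges" and, like the helper name and the
assumption that the cells are already in CPU byte order, is only an
assumption of this example; the kernel code reads big-endian cells with
of_read_number() instead.

#include <stdint.h>

/*
 * Illustrative only: sum the size cells of a "secure-memory-ranges"-style
 * property. Each range is four 32-bit cells: <addr_hi addr_lo size_hi
 * size_lo>; len_bytes is the property length in bytes.
 */
static uint64_t sum_secure_ranges(const uint32_t *prop, int len_bytes)
{
	uint64_t size = 0;
	int i;

	for (i = 0; i < len_bytes / 16; i++)	/* 16 bytes per range */
		size += ((uint64_t)prop[i * 4 + 2] << 32) | prop[i * 4 + 3];
	return size;
}

So a property describing two 1 GB ranges yields a 2 GB total, which is the
size of the private ZONE_DEVICE region the driver registers for the secure
memory pool.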


* Re: [PATCH v5 1/7] kvmppc: HMM backend driver to manage pages of secure guest
@ 2019-07-09 18:55     ` janani
  0 siblings, 0 replies; 58+ messages in thread
From: janani @ 2019-07-09 18:55 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linuxppc-dev, linuxram, cclaudio, kvm-ppc, linux-mm, jglisse,
	aneesh.kumar, paulus, sukadev, Linuxppc-dev

On 2019-07-09 05:25, Bharata B Rao wrote:
> HMM driver for KVM PPC to manage page transitions of
> secure guest via H_SVM_PAGE_IN and H_SVM_PAGE_OUT hcalls.
> 
> H_SVM_PAGE_IN: Move the content of a normal page to secure page
> H_SVM_PAGE_OUT: Move the content of a secure page to normal page
> 
> Private ZONE_DEVICE memory equal to the amount of secure memory
> available in the platform for running secure guests is created
> via a HMM device. The movement of pages between normal and secure
> memory is done by ->alloc_and_copy() callback routine of migrate_vma().
> 
> Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
  Reviewed-by: Janani Janakiraman <janani@linux.ibm.com>
> ---
>  arch/powerpc/include/asm/hvcall.h         |   4 +
>  arch/powerpc/include/asm/kvm_book3s_hmm.h |  29 ++
>  arch/powerpc/include/asm/kvm_host.h       |  12 +
>  arch/powerpc/include/asm/ultravisor-api.h |   2 +
>  arch/powerpc/include/asm/ultravisor.h     |  17 +
>  arch/powerpc/kvm/Makefile                 |   3 +
>  arch/powerpc/kvm/book3s_hv.c              |  19 +
>  arch/powerpc/kvm/book3s_hv_hmm.c          | 482 ++++++++++++++++++++++
>  8 files changed, 568 insertions(+)
>  create mode 100644 arch/powerpc/include/asm/kvm_book3s_hmm.h
>  create mode 100644 arch/powerpc/kvm/book3s_hv_hmm.c
> 
> diff --git a/arch/powerpc/include/asm/hvcall.h
> b/arch/powerpc/include/asm/hvcall.h
> index 463c63a9fcf1..2f6b952deb0f 100644
> --- a/arch/powerpc/include/asm/hvcall.h
> +++ b/arch/powerpc/include/asm/hvcall.h
> @@ -337,6 +337,10 @@
>  #define H_TLB_INVALIDATE	0xF808
>  #define H_COPY_TOFROM_GUEST	0xF80C
> 
> +/* Platform-specific hcalls used by the Ultravisor */
> +#define H_SVM_PAGE_IN		0xEF00
> +#define H_SVM_PAGE_OUT		0xEF04
> +
>  /* Values for 2nd argument to H_SET_MODE */
>  #define H_SET_MODE_RESOURCE_SET_CIABR		1
>  #define H_SET_MODE_RESOURCE_SET_DAWR		2
> diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h
> b/arch/powerpc/include/asm/kvm_book3s_hmm.h
> new file mode 100644
> index 000000000000..21f3de5f2acb
> --- /dev/null
> +++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
> @@ -0,0 +1,29 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __POWERPC_KVM_PPC_HMM_H__
> +#define __POWERPC_KVM_PPC_HMM_H__
> +
> +#ifdef CONFIG_PPC_UV
> +extern unsigned long kvmppc_h_svm_page_in(struct kvm *kvm,
> +					  unsigned long gra,
> +					  unsigned long flags,
> +					  unsigned long page_shift);
> +extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
> +					  unsigned long gra,
> +					  unsigned long flags,
> +					  unsigned long page_shift);
> +#else
> +static inline unsigned long
> +kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gra,
> +		     unsigned long flags, unsigned long page_shift)
> +{
> +	return H_UNSUPPORTED;
> +}
> +
> +static inline unsigned long
> +kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gra,
> +		      unsigned long flags, unsigned long page_shift)
> +{
> +	return H_UNSUPPORTED;
> +}
> +#endif /* CONFIG_PPC_UV */
> +#endif /* __POWERPC_KVM_PPC_HMM_H__ */
> diff --git a/arch/powerpc/include/asm/kvm_host.h
> b/arch/powerpc/include/asm/kvm_host.h
> index 184becb62ea4..ac1a101beb07 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -858,4 +858,16 @@ static inline void kvm_arch_vcpu_blocking(struct
> kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) 
> {}
> 
> +#ifdef CONFIG_PPC_UV
> +extern int kvmppc_hmm_init(void);
> +extern void kvmppc_hmm_free(void);
> +#else
> +static inline int kvmppc_hmm_init(void)
> +{
> +	return 0;
> +}
> +
> +static inline void kvmppc_hmm_free(void) {}
> +#endif /* CONFIG_PPC_UV */
> +
>  #endif /* __POWERPC_KVM_HOST_H__ */
> diff --git a/arch/powerpc/include/asm/ultravisor-api.h
> b/arch/powerpc/include/asm/ultravisor-api.h
> index 7c4d0b4ced12..f1c5800ac705 100644
> --- a/arch/powerpc/include/asm/ultravisor-api.h
> +++ b/arch/powerpc/include/asm/ultravisor-api.h
> @@ -20,5 +20,7 @@
>  /* opcodes */
>  #define UV_WRITE_PATE			0xF104
>  #define UV_RETURN			0xF11C
> +#define UV_PAGE_IN			0xF128
> +#define UV_PAGE_OUT			0xF12C
> 
>  #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
> diff --git a/arch/powerpc/include/asm/ultravisor.h
> b/arch/powerpc/include/asm/ultravisor.h
> index 996c1efd6c6d..16f8e0e8ec3f 100644
> --- a/arch/powerpc/include/asm/ultravisor.h
> +++ b/arch/powerpc/include/asm/ultravisor.h
> @@ -44,6 +44,23 @@ static inline int uv_register_pate(u64 lpid, u64
> dw0, u64 dw1)
>  	return ucall(UV_WRITE_PATE, retbuf, lpid, dw0, dw1);
>  }
> 
> +static inline int uv_page_in(u64 lpid, u64 src_ra, u64 dst_gpa, u64 
> flags,
> +			     u64 page_shift)
> +{
> +	unsigned long retbuf[UCALL_BUFSIZE];
> +
> +	return ucall(UV_PAGE_IN, retbuf, lpid, src_ra, dst_gpa, flags,
> +		     page_shift);
> +}
> +
> +static inline int uv_page_out(u64 lpid, u64 dst_ra, u64 src_gpa, u64 
> flags,
> +			      u64 page_shift)
> +{
> +	unsigned long retbuf[UCALL_BUFSIZE];
> +
> +	return ucall(UV_PAGE_OUT, retbuf, lpid, dst_ra, src_gpa, flags,
> +		     page_shift);
> +}
>  #endif /* !__ASSEMBLY__ */
> 
>  #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
> diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
> index 4c67cc79de7c..d1a79fb5f806 100644
> --- a/arch/powerpc/kvm/Makefile
> +++ b/arch/powerpc/kvm/Makefile
> @@ -71,6 +71,9 @@ kvm-hv-y += \
>  	book3s_64_mmu_radix.o \
>  	book3s_hv_nested.o
> 
> +kvm-hv-$(CONFIG_PPC_UV) += \
> +	book3s_hv_hmm.o
> +
>  kvm-hv-$(CONFIG_PPC_TRANSACTIONAL_MEM) += \
>  	book3s_hv_tm.o
> 
> diff --git a/arch/powerpc/kvm/book3s_hv.c 
> b/arch/powerpc/kvm/book3s_hv.c
> index a104743291a9..8ee66aa0da58 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -75,6 +75,8 @@
>  #include <asm/xics.h>
>  #include <asm/xive.h>
>  #include <asm/hw_breakpoint.h>
> +#include <asm/kvm_host.h>
> +#include <asm/kvm_book3s_hmm.h>
> 
>  #include "book3s.h"
> 
> @@ -1083,6 +1085,18 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
>  					 kvmppc_get_gpr(vcpu, 5),
>  					 kvmppc_get_gpr(vcpu, 6));
>  		break;
> +	case H_SVM_PAGE_IN:
> +		ret = kvmppc_h_svm_page_in(vcpu->kvm,
> +					   kvmppc_get_gpr(vcpu, 4),
> +					   kvmppc_get_gpr(vcpu, 5),
> +					   kvmppc_get_gpr(vcpu, 6));
> +		break;
> +	case H_SVM_PAGE_OUT:
> +		ret = kvmppc_h_svm_page_out(vcpu->kvm,
> +					    kvmppc_get_gpr(vcpu, 4),
> +					    kvmppc_get_gpr(vcpu, 5),
> +					    kvmppc_get_gpr(vcpu, 6));
> +		break;
>  	default:
>  		return RESUME_HOST;
>  	}
> @@ -5501,11 +5515,16 @@ static int kvmppc_book3s_init_hv(void)
>  			no_mixing_hpt_and_radix = true;
>  	}
> 
> +	r = kvmppc_hmm_init();
> +	if (r < 0)
> +		pr_err("KVM-HV: kvmppc_hmm_init failed %d\n", r);
> +
>  	return r;
>  }
> 
>  static void kvmppc_book3s_exit_hv(void)
>  {
> +	kvmppc_hmm_free();
>  	kvmppc_free_host_rm_ops();
>  	if (kvmppc_radix_possible())
>  		kvmppc_radix_exit();
> diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
> new file mode 100644
> index 000000000000..cd34323888b6
> --- /dev/null
> +++ b/arch/powerpc/kvm/book3s_hv_hmm.c
> @@ -0,0 +1,482 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * HMM driver to manage page migration between normal and secure
> + * memory.
> + *
> + * Based on Jérôme Glisse's HMM dummy driver.
> + *
> + * Copyright 2018 Bharata B Rao, IBM Corp. <bharata@linux.ibm.com>
> + */
> +
> +/*
> + * A pseries guest can be run as a secure guest on Ultravisor-enabled
> + * POWER platforms. On such platforms, this driver will be used to manage
> + * the movement of guest pages between the normal memory managed by
> + * hypervisor (HV) and secure memory managed by Ultravisor (UV).
> + *
> + * Private ZONE_DEVICE memory equal to the amount of secure memory
> + * available in the platform for running secure guests is created
> + * via a HMM device. The movement of pages between normal and secure
> + * memory is done by ->alloc_and_copy() callback routine of migrate_vma().
> + *
> + * The page-in or page-out requests from UV will come to HV as hcalls and
> + * HV will call back into UV via uvcalls to satisfy these page requests.
> + *
> + * For each page that gets moved into secure memory, an HMM PFN is used
> + * on the HV side and an HMM migration PTE corresponding to that PFN is
> + * populated in the QEMU page tables. HMM PFNs are stored in the rmap
> + * array: whenever a guest page becomes secure, the HMM PFN allocated
> + * for it is recorded in the corresponding slot of the rmap array.
> + * Since this overloads the rmap array, which is otherwise used
> + * primarily by HPT guests, this feature (secure guests on PEF
> + * platforms) is available only for Radix MMU guests. Also, nested HPT
> + * guests use the same rmap array differently, hence a secure guest
> + * can't have nested guests.
> + */
> +
> +#include <linux/hmm.h>
> +#include <linux/kvm_host.h>
> +#include <linux/sched/mm.h>
> +#include <asm/ultravisor.h>
> +
> +struct kvmppc_hmm_device {
> +	struct hmm_device *device;
> +	struct hmm_devmem *devmem;
> +	unsigned long *pfn_bitmap;
> +};
> +
> +static struct kvmppc_hmm_device kvmppc_hmm;
> +spinlock_t kvmppc_hmm_lock;
> +
> +struct kvmppc_hmm_page_pvt {
> +	unsigned long *rmap;
> +	unsigned int lpid;
> +	unsigned long gpa;
> +};
> +
> +struct kvmppc_hmm_migrate_args {
> +	unsigned long *rmap;
> +	unsigned int lpid;
> +	unsigned long gpa;
> +	unsigned long page_shift;
> +};
> +
> +/*
> + * Bits 60:56 in the rmap entry will be used to identify the
> + * different uses/functions of rmap. This definition will move
> + * to a proper header when all other functions are defined.
> + */
> +#define KVMPPC_PFN_HMM		(0x2ULL << 56)
> +
> +static inline bool kvmppc_is_hmm_pfn(unsigned long pfn)
> +{
> +	return !!(pfn & KVMPPC_PFN_HMM);
> +}
> +
> +/*
> + * Get a free HMM PFN from the pool
> + *
> + * Called when a normal page is moved to secure memory (UV_PAGE_IN). HMM
> + * PFN will be used to keep track of the secure page on HV side.
> + *
> + * @rmap here is the slot in the rmap array that corresponds to @gpa.
> + * Thus a non-zero rmap entry indicates that the corresponding guest
> + * page has become secure and is no longer mapped on the HV side.
> + *
> + * NOTE: In this and subsequent functions, we pass around and access
> + * individual elements of kvm_memory_slot->arch.rmap[] without any
> + * protection. Should we use lock_rmap() here?
> + */
> +static struct page *kvmppc_hmm_get_page(unsigned long *rmap,
> +					unsigned long gpa, unsigned int lpid)
> +{
> +	struct page *dpage = NULL;
> +	unsigned long bit, hmm_pfn;
> +	unsigned long nr_pfns = kvmppc_hmm.devmem->pfn_last -
> +				kvmppc_hmm.devmem->pfn_first;
> +	unsigned long flags;
> +	struct kvmppc_hmm_page_pvt *pvt;
> +
> +	if (kvmppc_is_hmm_pfn(*rmap))
> +		return NULL;
> +
> +	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
> +	bit = find_first_zero_bit(kvmppc_hmm.pfn_bitmap, nr_pfns);
> +	if (bit >= nr_pfns)
> +		goto out;
> +
> +	bitmap_set(kvmppc_hmm.pfn_bitmap, bit, 1);
> +	hmm_pfn = bit + kvmppc_hmm.devmem->pfn_first;
> +	dpage = pfn_to_page(hmm_pfn);
> +
> +	if (!trylock_page(dpage))
> +		goto out_clear;
> +
> +	*rmap = hmm_pfn | KVMPPC_PFN_HMM;
> +	pvt = kzalloc(sizeof(*pvt), GFP_ATOMIC);
> +	if (!pvt)
> +		goto out_unlock;
> +	pvt->rmap = rmap;
> +	pvt->gpa = gpa;
> +	pvt->lpid = lpid;
> +	hmm_devmem_page_set_drvdata(dpage, (unsigned long)pvt);
> +	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
> +
> +	get_page(dpage);
> +	return dpage;
> +
> +out_unlock:
> +	unlock_page(dpage);
> +out_clear:
> +	bitmap_clear(kvmppc_hmm.pfn_bitmap,
> +		     hmm_pfn - kvmppc_hmm.devmem->pfn_first, 1);
> +out:
> +	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
> +	return NULL;
> +}
> +
> +/*
> + * Release the HMM PFN back to the pool
> + *
> + * Called when secure page becomes a normal page during UV_PAGE_OUT.
> + */
> +static void kvmppc_hmm_put_page(struct page *page)
> +{
> +	unsigned long pfn = page_to_pfn(page);
> +	unsigned long flags;
> +	struct kvmppc_hmm_page_pvt *pvt;
> +
> +	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
> +	pvt = (struct kvmppc_hmm_page_pvt *)hmm_devmem_page_get_drvdata(page);
> +	hmm_devmem_page_set_drvdata(page, 0);
> +
> +	bitmap_clear(kvmppc_hmm.pfn_bitmap,
> +		     pfn - kvmppc_hmm.devmem->pfn_first, 1);
> +	*(pvt->rmap) = 0;
> +	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
> +	kfree(pvt);
> +}
> +
> +/*
> + * migrate_vma() callback to move page from normal memory to secure memory.
> + *
> + * We don't capture the return value of uv_page_in() here because when
> + * UV asks for a page and then fails to copy it over, we don't care.
> + */
> +static void
> +kvmppc_hmm_migrate_alloc_and_copy(struct vm_area_struct *vma,
> +				  const unsigned long *src_pfn,
> +				  unsigned long *dst_pfn,
> +				  unsigned long start,
> +				  unsigned long end,
> +				  void *private)
> +{
> +	struct kvmppc_hmm_migrate_args *args = private;
> +	struct page *spage = migrate_pfn_to_page(*src_pfn);
> +	unsigned long pfn = *src_pfn >> MIGRATE_PFN_SHIFT;
> +	struct page *dpage;
> +
> +	*dst_pfn = 0;
> +	if (!(*src_pfn & MIGRATE_PFN_MIGRATE))
> +		return;
> +
> +	dpage = kvmppc_hmm_get_page(args->rmap, args->gpa, args->lpid);
> +	if (!dpage)
> +		return;
> +
> +	if (spage)
> +		uv_page_in(args->lpid, pfn << args->page_shift,
> +			   args->gpa, 0, args->page_shift);
> +
> +	*dst_pfn = migrate_pfn(page_to_pfn(dpage)) |
> +		    MIGRATE_PFN_DEVICE | MIGRATE_PFN_LOCKED;
> +}
> +
> +/*
> + * This migrate_vma() callback is typically used to update device
> + * page tables after successful migration. We have nothing to do here.
> + *
> + * Also as we don't care if UV successfully copied over the page in
> + * kvmppc_hmm_migrate_alloc_and_copy(), we don't bother to check
> + * dst_pfn for any errors here.
> + */
> +static void
> +kvmppc_hmm_migrate_finalize_and_map(struct vm_area_struct *vma,
> +				    const unsigned long *src_pfn,
> +				    const unsigned long *dst_pfn,
> +				    unsigned long start,
> +				    unsigned long end,
> +				    void *private)
> +{
> +}
> +
> +static const struct migrate_vma_ops kvmppc_hmm_migrate_ops = {
> +	.alloc_and_copy = kvmppc_hmm_migrate_alloc_and_copy,
> +	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
> +};
> +
> +/*
> + * Move page from normal memory to secure memory.
> + */
> +unsigned long
> +kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
> +		     unsigned long flags, unsigned long page_shift)
> +{
> +	unsigned long addr, end;
> +	unsigned long src_pfn, dst_pfn;
> +	struct kvmppc_hmm_migrate_args args;
> +	struct vm_area_struct *vma;
> +	int srcu_idx;
> +	unsigned long gfn = gpa >> page_shift;
> +	struct kvm_memory_slot *slot;
> +	unsigned long *rmap;
> +	int ret = H_SUCCESS;
> +
> +	if (page_shift != PAGE_SHIFT)
> +		return H_P3;
> +
> +	if (flags)
> +		return H_P2;
> +
> +	down_read(&kvm->mm->mmap_sem);
> +	srcu_idx = srcu_read_lock(&kvm->srcu);
> +	slot = gfn_to_memslot(kvm, gfn);
> +	rmap = &slot->arch.rmap[gfn - slot->base_gfn];
> +	addr = gfn_to_hva(kvm, gpa >> page_shift);
> +	if (kvm_is_error_hva(addr)) {
> +		ret = H_PARAMETER;
> +		goto out;
> +	}
> +
> +	end = addr + (1UL << page_shift);
> +	vma = find_vma_intersection(kvm->mm, addr, end);
> +	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
> +		ret = H_PARAMETER;
> +		goto out;
> +	}
> +
> +	args.rmap = rmap;
> +	args.lpid = kvm->arch.lpid;
> +	args.gpa = gpa;
> +	args.page_shift = page_shift;
> +	ret = migrate_vma(&kvmppc_hmm_migrate_ops, vma, addr, end,
> +			  &src_pfn, &dst_pfn, &args);
> +	if (ret < 0)
> +		ret = H_PARAMETER;
> +out:
> +	srcu_read_unlock(&kvm->srcu, srcu_idx);
> +	up_read(&kvm->mm->mmap_sem);
> +	return ret;
> +}
> +
> +/*
> + * We drop the HMM PFN here, provision a new page and populate
> + * the same in QEMU page tables.
> + */
> +static void
> +kvmppc_hmm_fault_migrate_alloc_and_copy(struct vm_area_struct *vma,
> +					const unsigned long *src_pfn,
> +					unsigned long *dst_pfn,
> +					unsigned long start,
> +					unsigned long end,
> +					void *private)
> +{
> +	struct page *dpage, *spage;
> +	struct kvmppc_hmm_page_pvt *pvt;
> +	unsigned long pfn;
> +	int ret = U_SUCCESS;
> +
> +	*dst_pfn = MIGRATE_PFN_ERROR;
> +	spage = migrate_pfn_to_page(*src_pfn);
> +	if (!spage || !(*src_pfn & MIGRATE_PFN_MIGRATE))
> +		return;
> +	if (!is_zone_device_page(spage))
> +		return;
> +	dpage = hmm_vma_alloc_locked_page(vma, start);
> +	if (!dpage)
> +		return;
> +	pvt = (struct kvmppc_hmm_page_pvt *)
> +	       hmm_devmem_page_get_drvdata(spage);
> +
> +	pfn = page_to_pfn(dpage);
> +	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
> +			  pvt->gpa, 0, PAGE_SHIFT);
> +	if (ret == U_SUCCESS)
> +		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
> +}
> +
> +/*
> + * We have nothing to do here.
> + */
> +static void
> +kvmppc_hmm_fault_migrate_finalize_and_map(struct vm_area_struct *vma,
> +					  const unsigned long *src_pfn,
> +					  const unsigned long *dst_pfn,
> +					  unsigned long start,
> +					  unsigned long end,
> +					  void *private)
> +{
> +}
> +
> +static const struct migrate_vma_ops kvmppc_hmm_fault_migrate_ops = {
> +	.alloc_and_copy = kvmppc_hmm_fault_migrate_alloc_and_copy,
> +	.finalize_and_map = kvmppc_hmm_fault_migrate_finalize_and_map,
> +};
> +
> +/*
> + * Fault handler callback. When HV touches any page that has been
> + * moved to secure memory, we ask UV to give back the page by
> + * issuing a UV_PAGE_OUT uvcall.
> + */
> +static vm_fault_t kvmppc_hmm_devmem_fault(struct hmm_devmem *devmem,
> +					  struct vm_area_struct *vma,
> +					  unsigned long addr,
> +					  const struct page *page,
> +					  unsigned int flags,
> +					  pmd_t *pmdp)
> +{
> +	unsigned long end = addr + PAGE_SIZE;
> +	unsigned long src_pfn, dst_pfn = 0;
> +
> +	if (migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
> +			&src_pfn, &dst_pfn, NULL))
> +		return VM_FAULT_SIGBUS;
> +	if (dst_pfn == MIGRATE_PFN_ERROR)
> +		return VM_FAULT_SIGBUS;
> +	return 0;
> +}
> +
> +static void kvmppc_hmm_devmem_free(struct hmm_devmem *devmem,
> +				   struct page *page)
> +{
> +	kvmppc_hmm_put_page(page);
> +}
> +
> +static const struct hmm_devmem_ops kvmppc_hmm_devmem_ops = {
> +	.free = kvmppc_hmm_devmem_free,
> +	.fault = kvmppc_hmm_devmem_fault,
> +};
> +
> +/*
> + * Move page from secure memory to normal memory.
> + */
> +unsigned long
> +kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
> +		      unsigned long flags, unsigned long page_shift)
> +{
> +	unsigned long addr, end;
> +	struct vm_area_struct *vma;
> +	unsigned long src_pfn, dst_pfn = 0;
> +	int srcu_idx;
> +	int ret = H_SUCCESS;
> +
> +	if (page_shift != PAGE_SHIFT)
> +		return H_P3;
> +
> +	if (flags)
> +		return H_P2;
> +
> +	down_read(&kvm->mm->mmap_sem);
> +	srcu_idx = srcu_read_lock(&kvm->srcu);
> +	addr = gfn_to_hva(kvm, gpa >> page_shift);
> +	if (kvm_is_error_hva(addr)) {
> +		ret = H_PARAMETER;
> +		goto out;
> +	}
> +
> +	end = addr + (1UL << page_shift);
> +	vma = find_vma_intersection(kvm->mm, addr, end);
> +	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
> +		ret = H_PARAMETER;
> +		goto out;
> +	}
> +	ret = migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
> +			  &src_pfn, &dst_pfn, NULL);
> +	if (ret < 0)
> +		ret = H_PARAMETER;
> +out:
> +	srcu_read_unlock(&kvm->srcu, srcu_idx);
> +	up_read(&kvm->mm->mmap_sem);
> +	return ret;
> +}
> +
> +static u64 kvmppc_get_secmem_size(void)
> +{
> +	struct device_node *np;
> +	int i, len;
> +	const __be32 *prop;
> +	u64 size = 0;
> +
> +	np = of_find_node_by_path("/ibm,ultravisor/ibm,uv-firmware");
> +	if (!np)
> +		goto out;
> +
> +	prop = of_get_property(np, "secure-memory-ranges", &len);
> +	if (!prop)
> +		goto out_put;
> +
> +	for (i = 0; i < len / (sizeof(*prop) * 4); i++)
> +		size += of_read_number(prop + (i * 4) + 2, 2);
> +
> +out_put:
> +	of_node_put(np);
> +out:
> +	return size;
> +}
> +
> +static int kvmppc_hmm_pages_init(void)
> +{
> +	unsigned long nr_pfns = kvmppc_hmm.devmem->pfn_last -
> +				kvmppc_hmm.devmem->pfn_first;
> +
> +	kvmppc_hmm.pfn_bitmap = kcalloc(BITS_TO_LONGS(nr_pfns),
> +					 sizeof(unsigned long), GFP_KERNEL);
> +	if (!kvmppc_hmm.pfn_bitmap)
> +		return -ENOMEM;
> +
> +	spin_lock_init(&kvmppc_hmm_lock);
> +
> +	return 0;
> +}
> +
> +int kvmppc_hmm_init(void)
> +{
> +	int ret = 0;
> +	unsigned long size;
> +
> +	size = kvmppc_get_secmem_size();
> +	if (!size) {
> +		ret = -ENODEV;
> +		goto out;
> +	}
> +
> +	kvmppc_hmm.device = hmm_device_new(NULL);
> +	if (IS_ERR(kvmppc_hmm.device)) {
> +		ret = PTR_ERR(kvmppc_hmm.device);
> +		goto out;
> +	}
> +
> +	kvmppc_hmm.devmem = hmm_devmem_add(&kvmppc_hmm_devmem_ops,
> +					   &kvmppc_hmm.device->device, size);
> +	if (IS_ERR(kvmppc_hmm.devmem)) {
> +		ret = PTR_ERR(kvmppc_hmm.devmem);
> +		goto out_device;
> +	}
> +	ret = kvmppc_hmm_pages_init();
> +	if (ret < 0)
> +		goto out_device;
> +
> +	pr_info("KVMPPC-HMM: Secure Memory size 0x%lx\n", size);
> +	return ret;
> +
> +out_device:
> +	hmm_device_put(kvmppc_hmm.device);
> +out:
> +	return ret;
> +}
> +
> +void kvmppc_hmm_free(void)
> +{
> +	kfree(kvmppc_hmm.pfn_bitmap);
> +	hmm_device_put(kvmppc_hmm.device);
> +}
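To make the rmap tagging used by this driver concrete, a minimal standalone sketch of the encoding could look like the following. Only KVMPPC_PFN_HMM is taken from the patch; the helper names are invented purely for illustration and do not exist in the code above.

#include <stdbool.h>

/* Same tag as the driver: bits 60:56 of an rmap entry, field value 0x2. */
#define KVMPPC_PFN_HMM		(0x2ULL << 56)

/* Tag an HMM device PFN before storing it in a memslot's rmap slot. */
static inline unsigned long hmm_rmap_encode(unsigned long hmm_pfn)
{
	return hmm_pfn | KVMPPC_PFN_HMM;
}

/* Recover the raw HMM PFN from a tagged rmap entry. */
static inline unsigned long hmm_rmap_decode(unsigned long rmap)
{
	return rmap & ~KVMPPC_PFN_HMM;
}

/* True if the rmap slot currently tracks a secure (HMM-backed) page. */
static inline bool hmm_rmap_is_hmm(unsigned long rmap)
{
	return !!(rmap & KVMPPC_PFN_HMM);
}

The driver itself uses the same arithmetic: kvmppc_hmm_get_page() stores "hmm_pfn | KVMPPC_PFN_HMM" into the rmap slot, and kvmppc_is_hmm_pfn() tests the tag bit.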

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 2/7] kvmppc: Shared pages support for secure guests
  2019-07-09 10:25   ` Bharata B Rao
  (?)
@ 2019-07-09 19:35     ` janani
  -1 siblings, 0 replies; 58+ messages in thread
From: janani @ 2019-07-09 19:35 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linuxppc-dev, linuxram, cclaudio, kvm-ppc, linux-mm, jglisse,
	aneesh.kumar, paulus, sukadev, Linuxppc-dev

On 2019-07-09 05:25, Bharata B Rao wrote:
> A secure guest will share some of its pages with the hypervisor (e.g. virtio
> bounce buffers etc.). Support shared pages in the HMM driver.
> 
> Once a secure page is converted to shared page, HMM driver will stop
> tracking that page.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
  Reviewed-by: Janani Janakiraman <janani@linux.ibm.com>
> ---
>  arch/powerpc/include/asm/hvcall.h |  3 ++
>  arch/powerpc/kvm/book3s_hv_hmm.c  | 66 +++++++++++++++++++++++++++++--
>  2 files changed, 66 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/hvcall.h
> b/arch/powerpc/include/asm/hvcall.h
> index 2f6b952deb0f..05b8536f6653 100644
> --- a/arch/powerpc/include/asm/hvcall.h
> +++ b/arch/powerpc/include/asm/hvcall.h
> @@ -337,6 +337,9 @@
>  #define H_TLB_INVALIDATE	0xF808
>  #define H_COPY_TOFROM_GUEST	0xF80C
> 
> +/* Flags for H_SVM_PAGE_IN */
> +#define H_PAGE_IN_SHARED        0x1
> +
>  /* Platform-specific hcalls used by the Ultravisor */
>  #define H_SVM_PAGE_IN		0xEF00
>  #define H_SVM_PAGE_OUT		0xEF04
> diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c 
> b/arch/powerpc/kvm/book3s_hv_hmm.c
> index cd34323888b6..36562b382e70 100644
> --- a/arch/powerpc/kvm/book3s_hv_hmm.c
> +++ b/arch/powerpc/kvm/book3s_hv_hmm.c
> @@ -52,6 +52,7 @@ struct kvmppc_hmm_page_pvt {
>  	unsigned long *rmap;
>  	unsigned int lpid;
>  	unsigned long gpa;
> +	bool skip_page_out;
>  };
> 
>  struct kvmppc_hmm_migrate_args {
> @@ -215,6 +216,53 @@ static const struct migrate_vma_ops
> kvmppc_hmm_migrate_ops = {
>  	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
>  };
> 
> +/*
> + * Shares the page with HV, thus making it a normal page.
> + *
> + * - If the page is already secure, then provision a new page and share
> + * - If the page is a normal page, share the existing page
> + *
> + * In the former case, uses the HMM fault handler to release the HMM page.
> + */
> +static unsigned long
> +kvmppc_share_page(struct kvm *kvm, unsigned long gpa, unsigned long page_shift)
> +{
> +
> +	int ret;
> +	struct page *hmm_page;
> +	struct kvmppc_hmm_page_pvt *pvt;
> +	unsigned long pfn;
> +	unsigned long *rmap;
> +	struct kvm_memory_slot *slot;
> +	unsigned long gfn = gpa >> page_shift;
> +	int srcu_idx;
> +
> +	srcu_idx = srcu_read_lock(&kvm->srcu);
> +	slot = gfn_to_memslot(kvm, gfn);
> +	if (!slot) {
> +		srcu_read_unlock(&kvm->srcu, srcu_idx);
> +		return H_PARAMETER;
> +	}
> +	rmap = &slot->arch.rmap[gfn - slot->base_gfn];
> +	srcu_read_unlock(&kvm->srcu, srcu_idx);
> +
> +	if (kvmppc_is_hmm_pfn(*rmap)) {
> +		hmm_page = pfn_to_page(*rmap & ~KVMPPC_PFN_HMM);
> +		pvt = (struct kvmppc_hmm_page_pvt *)
> +			hmm_devmem_page_get_drvdata(hmm_page);
> +		pvt->skip_page_out = true;
> +	}
> +
> +	pfn = gfn_to_pfn(kvm, gpa >> page_shift);
> +	if (is_error_noslot_pfn(pfn))
> +		return H_PARAMETER;
> +
> +	ret = uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0, page_shift);
> +	kvm_release_pfn_clean(pfn);
> +
> +	return (ret == U_SUCCESS) ? H_SUCCESS : H_PARAMETER;
> +}
> +
>  /*
>   * Move page from normal memory to secure memory.
>   */
> @@ -235,9 +283,12 @@ kvmppc_h_svm_page_in(struct kvm *kvm, unsigned 
> long gpa,
>  	if (page_shift != PAGE_SHIFT)
>  		return H_P3;
> 
> -	if (flags)
> +	if (flags & ~H_PAGE_IN_SHARED)
>  		return H_P2;
> 
> +	if (flags & H_PAGE_IN_SHARED)
> +		return kvmppc_share_page(kvm, gpa, page_shift);
> +
>  	down_read(&kvm->mm->mmap_sem);
>  	srcu_idx = srcu_read_lock(&kvm->srcu);
>  	slot = gfn_to_memslot(kvm, gfn);
> @@ -299,8 +350,17 @@ kvmppc_hmm_fault_migrate_alloc_and_copy(struct
> vm_area_struct *vma,
>  	       hmm_devmem_page_get_drvdata(spage);
> 
>  	pfn = page_to_pfn(dpage);
> -	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
> -			  pvt->gpa, 0, PAGE_SHIFT);
> +
> +	/*
> +	 * This same alloc_and_copy() callback is used in two cases:
> +	 * - When HV touches a secure page, for which we do page-out
> +	 * - When a secure page is converted to shared page, we touch
> +	 *   the page to essentially discard the HMM page. In this case we
> +	 *   skip page-out.
> +	 */
> +	if (!pvt->skip_page_out)
> +		ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
> +				  pvt->gpa, 0, PAGE_SHIFT);
>  	if (ret == U_SUCCESS)
>  		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
>  }
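For reference, the argument checks this patch adds to the H_SVM_PAGE_IN path can be summarised by the small standalone sketch below. H_PAGE_IN_SHARED is from the patch; the function and parameter names here are made up for illustration only and do not appear in kvmppc_h_svm_page_in().

#include <stdbool.h>

#define H_PAGE_IN_SHARED	0x1UL	/* only flag accepted after this patch */

/*
 * Mirror of the validation in kvmppc_h_svm_page_in(): reject any page
 * size other than the base page size (handler returns H_P3) and any
 * unknown flag bit (handler returns H_P2); otherwise report whether the
 * caller asked for the shared-page path or the normal page-in path.
 */
static bool svm_page_in_args_ok(unsigned long flags, unsigned long page_shift,
				unsigned long base_page_shift, bool *want_share)
{
	if (page_shift != base_page_shift)
		return false;		/* handler would return H_P3 */
	if (flags & ~H_PAGE_IN_SHARED)
		return false;		/* handler would return H_P2 */
	*want_share = !!(flags & H_PAGE_IN_SHARED);
	return true;
}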


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 3/7] kvmppc: H_SVM_INIT_START and H_SVM_INIT_DONE hcalls
  2019-07-09 10:25   ` Bharata B Rao
  (?)
@ 2019-07-09 19:42     ` janani
  -1 siblings, 0 replies; 58+ messages in thread
From: janani @ 2019-07-09 19:42 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linuxppc-dev, linuxram, cclaudio, kvm-ppc, linux-mm, jglisse,
	aneesh.kumar, paulus, sukadev, Linuxppc-dev

On 2019-07-09 05:25, Bharata B Rao wrote:
> H_SVM_INIT_START: Initiate securing a VM
> H_SVM_INIT_DONE: Conclude securing a VM
> 
> As part of H_SVM_INIT_START, register all existing memslots with
> the UV. H_SVM_INIT_DONE call by UV informs HV that transition of
> the guest to secure mode is complete.
> 
> These two states (transition to secure mode STARTED and transition
> to secure mode COMPLETED) are recorded in kvm->arch.secure_guest.
> Setting these states will cause the assembly code that enters the
> guest to call the UV_RETURN ucall instead of trying to enter the
> guest directly.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
> Acked-by: Paul Mackerras <paulus@ozlabs.org>
  Reviewed-by: Janani Janakiraman <janani@linux.ibm.com>
> ---
>  arch/powerpc/include/asm/hvcall.h         |  2 ++
>  arch/powerpc/include/asm/kvm_book3s_hmm.h | 12 ++++++++
>  arch/powerpc/include/asm/kvm_host.h       |  4 +++
>  arch/powerpc/include/asm/ultravisor-api.h |  1 +
>  arch/powerpc/include/asm/ultravisor.h     |  9 ++++++
>  arch/powerpc/kvm/book3s_hv.c              |  7 +++++
>  arch/powerpc/kvm/book3s_hv_hmm.c          | 34 +++++++++++++++++++++++
>  7 files changed, 69 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/hvcall.h
> b/arch/powerpc/include/asm/hvcall.h
> index 05b8536f6653..fa7695928e30 100644
> --- a/arch/powerpc/include/asm/hvcall.h
> +++ b/arch/powerpc/include/asm/hvcall.h
> @@ -343,6 +343,8 @@
>  /* Platform-specific hcalls used by the Ultravisor */
>  #define H_SVM_PAGE_IN		0xEF00
>  #define H_SVM_PAGE_OUT		0xEF04
> +#define H_SVM_INIT_START	0xEF08
> +#define H_SVM_INIT_DONE		0xEF0C
> 
>  /* Values for 2nd argument to H_SET_MODE */
>  #define H_SET_MODE_RESOURCE_SET_CIABR		1
> diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h
> b/arch/powerpc/include/asm/kvm_book3s_hmm.h
> index 21f3de5f2acb..8c7aacabb2e0 100644
> --- a/arch/powerpc/include/asm/kvm_book3s_hmm.h
> +++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
> @@ -11,6 +11,8 @@ extern unsigned long kvmppc_h_svm_page_out(struct kvm 
> *kvm,
>  					  unsigned long gra,
>  					  unsigned long flags,
>  					  unsigned long page_shift);
> +extern unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
> +extern unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
>  #else
>  static inline unsigned long
>  kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gra,
> @@ -25,5 +27,15 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long 
> gra,
>  {
>  	return H_UNSUPPORTED;
>  }
> +
> +static inline unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
> +{
> +	return H_UNSUPPORTED;
> +}
> +
> +static inline unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
> +{
> +	return H_UNSUPPORTED;
> +}
>  #endif /* CONFIG_PPC_UV */
>  #endif /* __POWERPC_KVM_PPC_HMM_H__ */
> diff --git a/arch/powerpc/include/asm/kvm_host.h
> b/arch/powerpc/include/asm/kvm_host.h
> index ac1a101beb07..0c49c3401c63 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -272,6 +272,10 @@ struct kvm_hpt_info {
> 
>  struct kvm_resize_hpt;
> 
> +/* Flag values for kvm_arch.secure_guest */
> +#define KVMPPC_SECURE_INIT_START	0x1 /* H_SVM_INIT_START has been called */
> +#define KVMPPC_SECURE_INIT_DONE		0x2 /* H_SVM_INIT_DONE completed */
> +
>  struct kvm_arch {
>  	unsigned int lpid;
>  	unsigned int smt_mode;		/* # vcpus per virtual core */
> diff --git a/arch/powerpc/include/asm/ultravisor-api.h
> b/arch/powerpc/include/asm/ultravisor-api.h
> index f1c5800ac705..07b7d638e7af 100644
> --- a/arch/powerpc/include/asm/ultravisor-api.h
> +++ b/arch/powerpc/include/asm/ultravisor-api.h
> @@ -20,6 +20,7 @@
>  /* opcodes */
>  #define UV_WRITE_PATE			0xF104
>  #define UV_RETURN			0xF11C
> +#define UV_REGISTER_MEM_SLOT		0xF120
>  #define UV_PAGE_IN			0xF128
>  #define UV_PAGE_OUT			0xF12C
> 
> diff --git a/arch/powerpc/include/asm/ultravisor.h
> b/arch/powerpc/include/asm/ultravisor.h
> index 16f8e0e8ec3f..b46042f1aa8f 100644
> --- a/arch/powerpc/include/asm/ultravisor.h
> +++ b/arch/powerpc/include/asm/ultravisor.h
> @@ -61,6 +61,15 @@ static inline int uv_page_out(u64 lpid, u64 dst_ra,
> u64 src_gpa, u64 flags,
>  	return ucall(UV_PAGE_OUT, retbuf, lpid, dst_ra, src_gpa, flags,
>  		     page_shift);
>  }
> +
> +static inline int uv_register_mem_slot(u64 lpid, u64 start_gpa, u64 size,
> +				       u64 flags, u64 slotid)
> +{
> +	unsigned long retbuf[UCALL_BUFSIZE];
> +
> +	return ucall(UV_REGISTER_MEM_SLOT, retbuf, lpid, start_gpa,
> +		     size, flags, slotid);
> +}
>  #endif /* !__ASSEMBLY__ */
> 
>  #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
> diff --git a/arch/powerpc/kvm/book3s_hv.c 
> b/arch/powerpc/kvm/book3s_hv.c
> index 8ee66aa0da58..b8f801d00ad4 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -1097,6 +1097,13 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu 
> *vcpu)
>  					    kvmppc_get_gpr(vcpu, 5),
>  					    kvmppc_get_gpr(vcpu, 6));
>  		break;
> +	case H_SVM_INIT_START:
> +		ret = kvmppc_h_svm_init_start(vcpu->kvm);
> +		break;
> +	case H_SVM_INIT_DONE:
> +		ret = kvmppc_h_svm_init_done(vcpu->kvm);
> +		break;
> +
>  	default:
>  		return RESUME_HOST;
>  	}
> diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c 
> b/arch/powerpc/kvm/book3s_hv_hmm.c
> index 36562b382e70..55bab9c4e60a 100644
> --- a/arch/powerpc/kvm/book3s_hv_hmm.c
> +++ b/arch/powerpc/kvm/book3s_hv_hmm.c
> @@ -62,6 +62,40 @@ struct kvmppc_hmm_migrate_args {
>  	unsigned long page_shift;
>  };
> 
> +unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
> +{
> +	struct kvm_memslots *slots;
> +	struct kvm_memory_slot *memslot;
> +	int ret = H_SUCCESS;
> +	int srcu_idx;
> +
> +	srcu_idx = srcu_read_lock(&kvm->srcu);
> +	slots = kvm_memslots(kvm);
> +	kvm_for_each_memslot(memslot, slots) {
> +		ret = uv_register_mem_slot(kvm->arch.lpid,
> +					   memslot->base_gfn << PAGE_SHIFT,
> +					   memslot->npages * PAGE_SIZE,
> +					   0, memslot->id);
> +		if (ret < 0) {
> +			ret = H_PARAMETER;
> +			goto out;
> +		}
> +	}
> +	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_START;
> +out:
> +	srcu_read_unlock(&kvm->srcu, srcu_idx);
> +	return ret;
> +}
> +
> +unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
> +{
> +	if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START))
> +		return H_UNSUPPORTED;
> +
> +	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE;
> +	return H_SUCCESS;
> +}
> +
>  /*
>   * Bits 60:56 in the rmap entry will be used to identify the
>   * different uses/functions of rmap. This definition will move
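The two kvm->arch.secure_guest bits introduced here can be read as a tiny two-step state machine. The sketch below is illustrative only: the #define values match the patch, but the helper names do not exist in it.

#include <stdbool.h>

#define KVMPPC_SECURE_INIT_START	0x1	/* H_SVM_INIT_START has been called */
#define KVMPPC_SECURE_INIT_DONE		0x2	/* H_SVM_INIT_DONE completed */

/* The guest has begun its transition to secure mode. */
static inline bool svm_init_started(unsigned long secure_guest)
{
	return !!(secure_guest & KVMPPC_SECURE_INIT_START);
}

/* The transition is complete; per the commit message, guest entry now goes
 * through the UV_RETURN ucall instead of entering the guest directly. */
static inline bool svm_init_done(unsigned long secure_guest)
{
	return !!(secure_guest & KVMPPC_SECURE_INIT_DONE);
}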


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 4/7] kvmppc: Handle memory plug/unplug to secure VM
  2019-07-09 10:25   ` Bharata B Rao
  (?)
@ 2019-07-09 19:43     ` janani
  -1 siblings, 0 replies; 58+ messages in thread
From: janani @ 2019-07-09 19:43 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linuxppc-dev, linuxram, cclaudio, kvm-ppc, linux-mm, jglisse,
	aneesh.kumar, paulus, sukadev, Linuxppc-dev

On 2019-07-09 05:25, Bharata B Rao wrote:
> Register the new memslot with UV during plug and unregister
> the memslot during unplug.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
> Acked-by: Paul Mackerras <paulus@ozlabs.org>
  Reviewed-by: Janani Janakiraman <janani@linux.ibm.com>
> ---
>  arch/powerpc/include/asm/ultravisor-api.h |  1 +
>  arch/powerpc/include/asm/ultravisor.h     |  7 +++++++
>  arch/powerpc/kvm/book3s_hv.c              | 19 +++++++++++++++++++
>  3 files changed, 27 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/ultravisor-api.h
> b/arch/powerpc/include/asm/ultravisor-api.h
> index 07b7d638e7af..d6d6eb2e6e6b 100644
> --- a/arch/powerpc/include/asm/ultravisor-api.h
> +++ b/arch/powerpc/include/asm/ultravisor-api.h
> @@ -21,6 +21,7 @@
>  #define UV_WRITE_PATE			0xF104
>  #define UV_RETURN			0xF11C
>  #define UV_REGISTER_MEM_SLOT		0xF120
> +#define UV_UNREGISTER_MEM_SLOT		0xF124
>  #define UV_PAGE_IN			0xF128
>  #define UV_PAGE_OUT			0xF12C
> 
> diff --git a/arch/powerpc/include/asm/ultravisor.h
> b/arch/powerpc/include/asm/ultravisor.h
> index b46042f1aa8f..fe45be9ee63b 100644
> --- a/arch/powerpc/include/asm/ultravisor.h
> +++ b/arch/powerpc/include/asm/ultravisor.h
> @@ -70,6 +70,13 @@ static inline int uv_register_mem_slot(u64 lpid,
> u64 start_gpa, u64 size,
>  	return ucall(UV_REGISTER_MEM_SLOT, retbuf, lpid, start_gpa,
>  		     size, flags, slotid);
>  }
> +
> +static inline int uv_unregister_mem_slot(u64 lpid, u64 slotid)
> +{
> +	unsigned long retbuf[UCALL_BUFSIZE];
> +
> +	return ucall(UV_UNREGISTER_MEM_SLOT, retbuf, lpid, slotid);
> +}
>  #endif /* !__ASSEMBLY__ */
> 
>  #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
> diff --git a/arch/powerpc/kvm/book3s_hv.c 
> b/arch/powerpc/kvm/book3s_hv.c
> index b8f801d00ad4..7cbb5edaed01 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -77,6 +77,7 @@
>  #include <asm/hw_breakpoint.h>
>  #include <asm/kvm_host.h>
>  #include <asm/kvm_book3s_hmm.h>
> +#include <asm/ultravisor.h>
> 
>  #include "book3s.h"
> 
> @@ -4504,6 +4505,24 @@ static void
> kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
>  	if (change == KVM_MR_FLAGS_ONLY && kvm_is_radix(kvm) &&
>  	    ((new->flags ^ old->flags) & KVM_MEM_LOG_DIRTY_PAGES))
>  		kvmppc_radix_flush_memslot(kvm, old);
> +	/*
> +	 * If UV hasn't yet called H_SVM_INIT_START, don't register memslots.
> +	 */
> +	if (!kvm->arch.secure_guest)
> +		return;
> +
> +	/*
> +	 * TODO: Handle KVM_MR_MOVE
> +	 */
> +	if (change == KVM_MR_CREATE) {
> +		uv_register_mem_slot(kvm->arch.lpid,
> +					   new->base_gfn << PAGE_SHIFT,
> +					   new->npages * PAGE_SIZE,
> +					   0,
> +					   new->id);
> +	} else if (change == KVM_MR_DELETE) {
> +		uv_unregister_mem_slot(kvm->arch.lpid, old->id);
> +	}
>  }
> 
>  /*
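The GPA range handed to the Ultravisor for a memslot, both here and in the H_SVM_INIT_START loop of the previous patch, is derived from the memslot fields in the same way. A minimal sketch of that arithmetic, with names invented for illustration (only the shift and size computation come from the patches):

struct uv_slot_range {
	unsigned long start_gpa;	/* memslot->base_gfn << PAGE_SHIFT */
	unsigned long size;		/* memslot->npages * PAGE_SIZE */
};

/* page_shift stands in for PAGE_SHIFT, so npages << page_shift equals
 * npages * PAGE_SIZE when the base page size is used. */
static struct uv_slot_range slot_to_uv_range(unsigned long base_gfn,
					     unsigned long npages,
					     unsigned int page_shift)
{
	struct uv_slot_range r;

	r.start_gpa = base_gfn << page_shift;
	r.size = npages << page_shift;
	return r;
}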


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC PATCH v5 5/7] kvmppc: Radix changes for secure guest
  2019-07-09 10:25   ` Bharata B Rao
  (?)
@ 2019-07-09 19:53     ` janani
  -1 siblings, 0 replies; 58+ messages in thread
From: janani @ 2019-07-09 19:53 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linuxppc-dev, linuxram, cclaudio, kvm-ppc, linux-mm, jglisse,
	aneesh.kumar, paulus, sukadev, Linuxppc-dev

On 2019-07-09 05:25, Bharata B Rao wrote:
> - After the guest becomes secure, when we handle a page fault of a page
>   belonging to SVM in HV, send that page to UV via UV_PAGE_IN.
> - Whenever a page is unmapped on the HV side, inform UV via 
> UV_PAGE_INVAL.
> - Ensure all those routines that walk the secondary page tables of
>   the guest don't do so in case of a secure VM. For a secure guest, the
>   active secondary page tables are in secure memory and the secondary
>   page tables in HV are freed when the guest becomes secure.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
  Reviewed-by: Janani Janakiraman <janani@linux.ibm.com>
> ---
>  arch/powerpc/include/asm/kvm_host.h       | 12 ++++++++++++
>  arch/powerpc/include/asm/ultravisor-api.h |  1 +
>  arch/powerpc/include/asm/ultravisor.h     |  7 +++++++
>  arch/powerpc/kvm/book3s_64_mmu_radix.c    | 22 ++++++++++++++++++++++
>  arch/powerpc/kvm/book3s_hv_hmm.c          | 20 ++++++++++++++++++++
>  5 files changed, 62 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/kvm_host.h
> b/arch/powerpc/include/asm/kvm_host.h
> index 0c49c3401c63..dcbf7480cb10 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -865,6 +865,8 @@ static inline void
> kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>  #ifdef CONFIG_PPC_UV
>  extern int kvmppc_hmm_init(void);
>  extern void kvmppc_hmm_free(void);
> +extern bool kvmppc_is_guest_secure(struct kvm *kvm);
> +extern int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gpa);
>  #else
>  static inline int kvmppc_hmm_init(void)
>  {
> @@ -872,6 +874,16 @@ static inline int kvmppc_hmm_init(void)
>  }
> 
>  static inline void kvmppc_hmm_free(void) {}
> +
> +static inline bool kvmppc_is_guest_secure(struct kvm *kvm)
> +{
> +	return false;
> +}
> +
> +static inline int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned 
> long gpa)
> +{
> +	return -EFAULT;
> +}
>  #endif /* CONFIG_PPC_UV */
> 
>  #endif /* __POWERPC_KVM_HOST_H__ */
> diff --git a/arch/powerpc/include/asm/ultravisor-api.h
> b/arch/powerpc/include/asm/ultravisor-api.h
> index d6d6eb2e6e6b..9f5510b55892 100644
> --- a/arch/powerpc/include/asm/ultravisor-api.h
> +++ b/arch/powerpc/include/asm/ultravisor-api.h
> @@ -24,5 +24,6 @@
>  #define UV_UNREGISTER_MEM_SLOT		0xF124
>  #define UV_PAGE_IN			0xF128
>  #define UV_PAGE_OUT			0xF12C
> +#define UV_PAGE_INVAL			0xF138
> 
>  #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
> diff --git a/arch/powerpc/include/asm/ultravisor.h
> b/arch/powerpc/include/asm/ultravisor.h
> index fe45be9ee63b..f4f674794b35 100644
> --- a/arch/powerpc/include/asm/ultravisor.h
> +++ b/arch/powerpc/include/asm/ultravisor.h
> @@ -77,6 +77,13 @@ static inline int uv_unregister_mem_slot(u64 lpid,
> u64 slotid)
> 
>  	return ucall(UV_UNREGISTER_MEM_SLOT, retbuf, lpid, slotid);
>  }
> +
> +static inline int uv_page_inval(u64 lpid, u64 gpa, u64 page_shift)
> +{
> +	unsigned long retbuf[UCALL_BUFSIZE];
> +
> +	return ucall(UV_PAGE_INVAL, retbuf, lpid, gpa, page_shift);
> +}
>  #endif /* !__ASSEMBLY__ */
> 
>  #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c
> b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> index f55ef071883f..c454600c454f 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> @@ -21,6 +21,8 @@
>  #include <asm/pgtable.h>
>  #include <asm/pgalloc.h>
>  #include <asm/pte-walk.h>
> +#include <asm/ultravisor.h>
> +#include <asm/kvm_host.h>
> 
>  /*
>   * Supported radix tree geometry.
> @@ -923,6 +925,9 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run
> *run, struct kvm_vcpu *vcpu,
>  	if (!(dsisr & DSISR_PRTABLE_FAULT))
>  		gpa |= ea & 0xfff;
> 
> +	if (kvmppc_is_guest_secure(kvm))
> +		return kvmppc_send_page_to_uv(kvm, gpa & PAGE_MASK);
> +
>  	/* Get the corresponding memslot */
>  	memslot = gfn_to_memslot(kvm, gfn);
> 
> @@ -980,6 +985,11 @@ int kvm_unmap_radix(struct kvm *kvm, struct
> kvm_memory_slot *memslot,
>  	unsigned long gpa = gfn << PAGE_SHIFT;
>  	unsigned int shift;
> 
> +	if (kvmppc_is_guest_secure(kvm)) {
> +		uv_page_inval(kvm->arch.lpid, gpa, PAGE_SHIFT);
> +		return 0;
> +	}
> +
>  	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
>  	if (ptep && pte_present(*ptep))
>  		kvmppc_unmap_pte(kvm, ptep, gpa, shift, memslot,
> @@ -997,6 +1007,9 @@ int kvm_age_radix(struct kvm *kvm, struct
> kvm_memory_slot *memslot,
>  	int ref = 0;
>  	unsigned long old, *rmapp;
> 
> +	if (kvmppc_is_guest_secure(kvm))
> +		return ref;
> +
>  	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
>  	if (ptep && pte_present(*ptep) && pte_young(*ptep)) {
>  		old = kvmppc_radix_update_pte(kvm, ptep, _PAGE_ACCESSED, 0,
> @@ -1021,6 +1034,9 @@ int kvm_test_age_radix(struct kvm *kvm, struct
> kvm_memory_slot *memslot,
>  	unsigned int shift;
>  	int ref = 0;
> 
> +	if (kvmppc_is_guest_secure(kvm))
> +		return ref;
> +
>  	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
>  	if (ptep && pte_present(*ptep) && pte_young(*ptep))
>  		ref = 1;
> @@ -1038,6 +1054,9 @@ static int kvm_radix_test_clear_dirty(struct kvm 
> *kvm,
>  	int ret = 0;
>  	unsigned long old, *rmapp;
> 
> +	if (kvmppc_is_guest_secure(kvm))
> +		return ret;
> +
>  	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
>  	if (ptep && pte_present(*ptep) && pte_dirty(*ptep)) {
>  		ret = 1;
> @@ -1090,6 +1109,9 @@ void kvmppc_radix_flush_memslot(struct kvm *kvm,
>  	unsigned long gpa;
>  	unsigned int shift;
> 
> +	if (kvmppc_is_guest_secure(kvm))
> +		return;
> +
>  	gpa = memslot->base_gfn << PAGE_SHIFT;
>  	spin_lock(&kvm->mmu_lock);
>  	for (n = memslot->npages; n; --n) {
> diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c 
> b/arch/powerpc/kvm/book3s_hv_hmm.c
> index 55bab9c4e60a..9e6c88de456f 100644
> --- a/arch/powerpc/kvm/book3s_hv_hmm.c
> +++ b/arch/powerpc/kvm/book3s_hv_hmm.c
> @@ -62,6 +62,11 @@ struct kvmppc_hmm_migrate_args {
>  	unsigned long page_shift;
>  };
> 
> +bool kvmppc_is_guest_secure(struct kvm *kvm)
> +{
> +	return !!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE);
> +}
> +
>  unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
>  {
>  	struct kvm_memslots *slots;
> @@ -494,6 +499,21 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned 
> long gpa,
>  	return ret;
>  }
> 
> +int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gpa)
> +{
> +	unsigned long pfn;
> +	int ret;
> +
> +	pfn = gfn_to_pfn(kvm, gpa >> PAGE_SHIFT);
> +	if (is_error_noslot_pfn(pfn))
> +		return -EFAULT;
> +
> +	ret = uv_page_in(kvm->arch.lpid, pfn << PAGE_SHIFT, gpa, 0, 
> PAGE_SHIFT);
> +	kvm_release_pfn_clean(pfn);
> +
> +	return (ret == U_SUCCESS) ? RESUME_GUEST : -EFAULT;
> +}
> +
>  static u64 kvmppc_get_secmem_size(void)
>  {
>  	struct device_node *np;


^ permalink raw reply	[flat|nested] 58+ messages in thread
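
A quick reading aid for the fault path above, since the patch moves between
three related quantities: the guest physical address (gpa) taken from the
fault, the guest frame number (gfn), and the host page frame number (pfn)
that backs it. Nothing below is new API, it is just the arithmetic that
kvmppc_send_page_to_uv() in the hunk above relies on, spelled out:

    gfn = gpa >> PAGE_SHIFT;            /* guest frame number */
    pfn = gfn_to_pfn(kvm, gfn);         /* host pfn currently backing that gfn */
    /* UV_PAGE_IN takes the host real address, the guest address,
     * flags (0 here) and the page size expressed as a shift. */
    uv_page_in(kvm->arch.lpid, pfn << PAGE_SHIFT, gpa, 0, PAGE_SHIFT);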

* Re: [PATCH v5 7/7] KVM: PPC: Ultravisor: Add PPC_UV config option
  2019-07-09 10:25   ` Bharata B Rao
  (?)
@ 2019-07-10 13:24     ` janani
  -1 siblings, 0 replies; 58+ messages in thread
From: janani @ 2019-07-10 13:24 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linuxppc-dev, linuxram, cclaudio, kvm-ppc, linux-mm, jglisse,
	aneesh.kumar, paulus, sukadev, Anshuman Khandual, Linuxppc-dev

On 2019-07-09 05:25, Bharata B Rao wrote:
> From: Anshuman Khandual <khandual@linux.vnet.ibm.com>
> 
> CONFIG_PPC_UV adds support for ultravisor.
> 
> Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
> Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> [ Update config help and commit message ]
> Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
  Reviewed-by: Janani Janakiraman <janani@linux.ibm.com>
> ---
>  arch/powerpc/Kconfig | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index f0e5b38d52e8..20c6c213d2be 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -440,6 +440,26 @@ config PPC_TRANSACTIONAL_MEM
>         ---help---
>           Support user-mode Transactional Memory on POWERPC.
> 
> +config PPC_UV
> +	bool "Ultravisor support"
> +	depends on KVM_BOOK3S_HV_POSSIBLE
> +	select HMM_MIRROR
> +	select HMM
> +	select ZONE_DEVICE
> +	select MIGRATE_VMA_HELPER
> +	select DEV_PAGEMAP_OPS
> +	select DEVICE_PRIVATE
> +	select MEMORY_HOTPLUG
> +	select MEMORY_HOTREMOVE
> +	default n
> +	help
> +	  This option paravirtualizes the kernel to run on POWER platforms that
> +	  support the Protected Execution Facility (PEF). On such platforms,
> +	  the ultravisor firmware runs at a privilege level above the
> +	  hypervisor.
> +
> +	  If unsure, say "N".
> +
>  config LD_HEAD_STUB_CATCH
>  	bool "Reserve 256 bytes to cope with linker stubs in HEAD text" if 
> EXPERT
>  	depends on PPC64


^ permalink raw reply	[flat|nested] 58+ messages in thread
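
For anyone wanting to build a kernel with this series applied, a minimal
config fragment might look like the following. This is only a sketch: the
select statements above mean CONFIG_PPC_UV pulls in the HMM/ZONE_DEVICE
pieces itself, and CONFIG_KVM_BOOK3S_64_HV is assumed to satisfy the
KVM_BOOK3S_HV_POSSIBLE dependency.

    CONFIG_PPC64=y
    CONFIG_VIRTUALIZATION=y
    CONFIG_KVM_BOOK3S_64=y
    CONFIG_KVM_BOOK3S_64_HV=y
    CONFIG_PPC_UV=y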

* Re: [RFC PATCH v5 6/7] kvmppc: Support reset of secure guest
  2019-07-09 10:25   ` Bharata B Rao
  (?)
@ 2019-07-10 13:30     ` janani
  -1 siblings, 0 replies; 58+ messages in thread
From: janani @ 2019-07-10 13:30 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linuxppc-dev, linuxram, cclaudio, kvm-ppc, linux-mm, jglisse,
	aneesh.kumar, paulus, sukadev, Linuxppc-dev

On 2019-07-09 05:25, Bharata B Rao wrote:
> Add support for reset of secure guest via a new ioctl KVM_PPC_SVM_OFF.
> This ioctl will be issued by QEMU during reset and includes the
> following steps:
> 
> - Ask UV to terminate the guest via UV_SVM_TERMINATE ucall
> - Unpin the VPA pages so that they can be migrated back to secure
>   side when guest becomes secure again. This is required because
>   pinned pages can't be migrated.
> - Reinitialize guest's partition scoped page tables. These are
>   freed when the guest becomes secure (H_SVM_INIT_DONE)
> - Release all HMM pages of the secure guest.
> 
> After these steps, guest is ready to issue UV_ESM call once again
> to switch to secure mode.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
> 	[Implementation of uv_svm_terminate() and its call from
> 	guest shutdown path]
> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> 	[Unpinning of VPA pages]
  Reviewed-by: Janani Janakiraman <janani@linux.ibm.com>
> ---
>  Documentation/virtual/kvm/api.txt         | 19 ++++++
>  arch/powerpc/include/asm/kvm_book3s_hmm.h |  7 +++
>  arch/powerpc/include/asm/kvm_ppc.h        |  2 +
>  arch/powerpc/include/asm/ultravisor-api.h |  1 +
>  arch/powerpc/include/asm/ultravisor.h     |  7 +++
>  arch/powerpc/kvm/book3s_hv.c              | 70 +++++++++++++++++++++++
>  arch/powerpc/kvm/book3s_hv_hmm.c          | 60 +++++++++++++++++++
>  arch/powerpc/kvm/powerpc.c                | 12 ++++
>  include/uapi/linux/kvm.h                  |  1 +
>  tools/include/uapi/linux/kvm.h            |  1 +
>  10 files changed, 180 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/api.txt
> b/Documentation/virtual/kvm/api.txt
> index ba6c42c576dd..c89c24ad86ed 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -4065,6 +4065,25 @@ KVM_ARM_VCPU_FINALIZE call.
>  See KVM_ARM_VCPU_INIT for details of vcpu features that require 
> finalization
>  using this ioctl.
> 
> +4.120 KVM_PPC_SVM_OFF
> +
> +Capability: basic
> +Architectures: powerpc
> +Type: vm ioctl
> +Parameters: none
> +Returns: 0 on successful completion,
> +Errors:
> +  EINVAL:    if ultravisor failed to terminate the secure guest
> +  ENOMEM:    if hypervisor failed to allocate new radix page tables 
> for guest
> +
> +This ioctl is used to turn off the secure mode of the guest or 
> transition
> +the guest from secure mode to normal mode. This is invoked when the 
> guest
> +is reset. This has no effect if called for a normal guest.
> +
> +This ioctl issues an ultravisor call to terminate the secure guest,
> +unpins the VPA pages, reinitializes the guest's partition scoped page
> +tables and releases all the HMM pages that are associated with this
> guest.
> +
>  5. The kvm_run structure
>  ------------------------
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h
> b/arch/powerpc/include/asm/kvm_book3s_hmm.h
> index 8c7aacabb2e0..cd2b0f927766 100644
> --- a/arch/powerpc/include/asm/kvm_book3s_hmm.h
> +++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
> @@ -13,6 +13,8 @@ extern unsigned long kvmppc_h_svm_page_out(struct kvm 
> *kvm,
>  					  unsigned long page_shift);
>  extern unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
>  extern unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
> +extern void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm,
> +		struct kvm_memslots *slots);
>  #else
>  static inline unsigned long
>  kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gra,
> @@ -37,5 +39,10 @@ static inline unsigned long
> kvmppc_h_svm_init_done(struct kvm *kvm)
>  {
>  	return H_UNSUPPORTED;
>  }
> +
> +static inline void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm,
> +		struct kvm_memslots *slots)
> +{
> +}
>  #endif /* CONFIG_PPC_UV */
>  #endif /* __POWERPC_KVM_PPC_HMM_H__ */
> diff --git a/arch/powerpc/include/asm/kvm_ppc.h
> b/arch/powerpc/include/asm/kvm_ppc.h
> index bc892380e6cd..d80ece28d65d 100644
> --- a/arch/powerpc/include/asm/kvm_ppc.h
> +++ b/arch/powerpc/include/asm/kvm_ppc.h
> @@ -188,6 +188,7 @@ extern void
> kvm_spapr_tce_release_iommu_group(struct kvm *kvm,
>  extern int kvmppc_switch_mmu_to_hpt(struct kvm *kvm);
>  extern int kvmppc_switch_mmu_to_radix(struct kvm *kvm);
>  extern void kvmppc_setup_partition_table(struct kvm *kvm);
> +extern int kvmppc_reinit_partition_table(struct kvm *kvm);
> 
>  extern long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
>  				struct kvm_create_spapr_tce_64 *args);
> @@ -332,6 +333,7 @@ struct kvmppc_ops {
>  			       int size);
>  	int (*store_to_eaddr)(struct kvm_vcpu *vcpu, ulong *eaddr, void *ptr,
>  			      int size);
> +	int (*svm_off)(struct kvm *kvm);
>  };
> 
>  extern struct kvmppc_ops *kvmppc_hv_ops;
> diff --git a/arch/powerpc/include/asm/ultravisor-api.h
> b/arch/powerpc/include/asm/ultravisor-api.h
> index 9f5510b55892..c8180427fa01 100644
> --- a/arch/powerpc/include/asm/ultravisor-api.h
> +++ b/arch/powerpc/include/asm/ultravisor-api.h
> @@ -25,5 +25,6 @@
>  #define UV_PAGE_IN			0xF128
>  #define UV_PAGE_OUT			0xF12C
>  #define UV_PAGE_INVAL			0xF138
> +#define UV_SVM_TERMINATE		0xF13C
> 
>  #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
> diff --git a/arch/powerpc/include/asm/ultravisor.h
> b/arch/powerpc/include/asm/ultravisor.h
> index f4f674794b35..a5e4516c8ddb 100644
> --- a/arch/powerpc/include/asm/ultravisor.h
> +++ b/arch/powerpc/include/asm/ultravisor.h
> @@ -84,6 +84,13 @@ static inline int uv_page_inval(u64 lpid, u64 gpa,
> u64 page_shift)
> 
>  	return ucall(UV_PAGE_INVAL, retbuf, lpid, gpa, page_shift);
>  }
> +
> +static inline int uv_svm_terminate(u64 lpid)
> +{
> +	unsigned long retbuf[UCALL_BUFSIZE];
> +
> +	return ucall(UV_SVM_TERMINATE, retbuf, lpid);
> +}
>  #endif /* !__ASSEMBLY__ */
> 
>  #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
> diff --git a/arch/powerpc/kvm/book3s_hv.c 
> b/arch/powerpc/kvm/book3s_hv.c
> index 7cbb5edaed01..41b0fff199d3 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -2437,6 +2437,15 @@ static void unpin_vpa(struct kvm *kvm, struct
> kvmppc_vpa *vpa)
>  					vpa->dirty);
>  }
> 
> +static void unpin_vpa_reset(struct kvm *kvm, struct kvmppc_vpa *vpa)
> +{
> +	unpin_vpa(kvm, vpa);
> +	vpa->gpa = 0;
> +	vpa->pinned_addr = NULL;
> +	vpa->dirty = false;
> +	vpa->update_pending = 0;
> +}
> +
>  static void kvmppc_core_vcpu_free_hv(struct kvm_vcpu *vcpu)
>  {
>  	spin_lock(&vcpu->arch.vpa_update_lock);
> @@ -4577,6 +4586,22 @@ void kvmppc_setup_partition_table(struct kvm 
> *kvm)
>  	kvmhv_set_ptbl_entry(kvm->arch.lpid, dw0, dw1);
>  }
> 
> +/*
> + * Called from KVM_PPC_SVM_OFF ioctl at guest reset time when secure
> + * guest is converted back to normal guest.
> + */
> +int kvmppc_reinit_partition_table(struct kvm *kvm)
> +{
> +	int ret;
> +
> +	ret = kvmppc_init_vm_radix(kvm);
> +	if (ret)
> +		return ret;
> +
> +	kvmppc_setup_partition_table(kvm);
> +	return 0;
> +}
> +
>  /*
>   * Set up HPT (hashed page table) and RMA (real-mode area).
>   * Must be called with kvm->lock held.
> @@ -4956,6 +4981,7 @@ static void kvmppc_core_destroy_vm_hv(struct kvm 
> *kvm)
>  		if (nesting_enabled(kvm))
>  			kvmhv_release_all_nested(kvm);
>  		kvm->arch.process_table = 0;
> +		uv_svm_terminate(kvm->arch.lpid);
>  		kvmhv_set_ptbl_entry(kvm->arch.lpid, 0, 0);
>  	}
>  	kvmppc_free_lpid(kvm->arch.lpid);
> @@ -5397,6 +5423,49 @@ static int kvmhv_store_to_eaddr(struct kvm_vcpu
> *vcpu, ulong *eaddr, void *ptr,
>  	return rc;
>  }
> 
> +/*
> + *  IOCTL handler to turn off secure mode of guest
> + *
> + * - Issue ucall to terminate the guest on the UV side
> + * - Unpin the VPA pages (Enables these pages to be migrated back
> + *   when VM becomes secure again)
> + * - Recreate partition table as the guest is transitioning back to
> + *   normal mode
> + * - Release all HMM pages
> + */
> +static int kvmhv_svm_off(struct kvm *kvm)
> +{
> +	struct kvm_vcpu *vcpu;
> +	int ret = 0;
> +	int i;
> +
> +	if (kvmppc_is_guest_secure(kvm)) {
> +		ret = uv_svm_terminate(kvm->arch.lpid);
> +		if (ret != U_SUCCESS) {
> +			ret = -EINVAL;
> +			goto out;
> +		}
> +
> +		kvm_for_each_vcpu(i, vcpu, kvm) {
> +			spin_lock(&vcpu->arch.vpa_update_lock);
> +			unpin_vpa_reset(kvm, &vcpu->arch.dtl);
> +			unpin_vpa_reset(kvm, &vcpu->arch.slb_shadow);
> +			unpin_vpa_reset(kvm, &vcpu->arch.vpa);
> +			spin_unlock(&vcpu->arch.vpa_update_lock);
> +		}
> +
> +		ret = kvmppc_reinit_partition_table(kvm);
> +		if (ret)
> +			goto out;
> +		kvm->arch.secure_guest = 0;
> +		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
> +			kvmppc_hmm_free_memslot_pfns(kvm,
> +			__kvm_memslots(kvm, i));
> +	}
> +out:
> +	return ret;
> +}
> +
>  static struct kvmppc_ops kvm_ops_hv = {
>  	.get_sregs = kvm_arch_vcpu_ioctl_get_sregs_hv,
>  	.set_sregs = kvm_arch_vcpu_ioctl_set_sregs_hv,
> @@ -5439,6 +5508,7 @@ static struct kvmppc_ops kvm_ops_hv = {
>  	.enable_nested = kvmhv_enable_nested,
>  	.load_from_eaddr = kvmhv_load_from_eaddr,
>  	.store_to_eaddr = kvmhv_store_to_eaddr,
> +	.svm_off = kvmhv_svm_off,
>  };
> 
>  static int kvm_init_subcore_bitmap(void)
> diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c 
> b/arch/powerpc/kvm/book3s_hv_hmm.c
> index 9e6c88de456f..165926c069ba 100644
> --- a/arch/powerpc/kvm/book3s_hv_hmm.c
> +++ b/arch/powerpc/kvm/book3s_hv_hmm.c
> @@ -38,6 +38,8 @@
>  #include <linux/kvm_host.h>
>  #include <linux/sched/mm.h>
>  #include <asm/ultravisor.h>
> +#include <asm/kvm_ppc.h>
> +#include <asm/kvm_book3s.h>
> 
>  struct kvmppc_hmm_device {
>  	struct hmm_device *device;
> @@ -98,6 +100,12 @@ unsigned long kvmppc_h_svm_init_done(struct kvm 
> *kvm)
>  		return H_UNSUPPORTED;
> 
>  	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE;
> +	if (kvm_is_radix(kvm)) {
> +		pr_info("LPID %d went secure, freeing HV side radix pgtables\n",
> +			kvm->arch.lpid);
> +		kvmppc_free_radix(kvm);
> +	}
> +
>  	return H_SUCCESS;
>  }
> 
> @@ -113,6 +121,58 @@ static inline bool kvmppc_is_hmm_pfn(unsigned long 
> pfn)
>  	return !!(pfn & KVMPPC_PFN_HMM);
>  }
> 
> +/*
> + * Drop HMM pages that we maintain for the secure guest
> + *
> + * We mark the pages to be skipped from UV_PAGE_OUT when there is HMM
> + * fault on these pages. Next we get these pages, force HMM fault,
> + * do fault migration to replace the HMM PTEs from QEMU page tables
> + * with normal PTEs from newly allocated pages.
> + */
> +static void kvmppc_hmm_drop_pages(struct kvm_memory_slot *free,
> struct kvm *kvm)
> +{
> +	int i;
> +	struct kvmppc_hmm_page_pvt *pvt;
> +	unsigned long pfn;
> +
> +	for (i = 0; i < free->npages; i++) {
> +		unsigned long *rmap = &free->arch.rmap[i];
> +		struct page *hmm_page;
> +
> +		if (kvmppc_is_hmm_pfn(*rmap)) {
> +			hmm_page = pfn_to_page(*rmap & ~KVMPPC_PFN_HMM);
> +			pvt = (struct kvmppc_hmm_page_pvt *)
> +				hmm_devmem_page_get_drvdata(hmm_page);
> +			pvt->skip_page_out = true;
> +
> +			pfn = gfn_to_pfn(kvm, pvt->gpa >> PAGE_SHIFT);
> +			if (is_error_noslot_pfn(pfn))
> +				continue;
> +			kvm_release_pfn_clean(pfn);
> +		}
> +	}
> +}
> +
> +/*
> + * Called from KVM_PPC_SVM_OFF ioctl when secure guest is reset
> + *
> + * UV has already cleaned up the guest, we release any HMM pages that
> + * we maintain
> + */
> +void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm, struct kvm_memslots 
> *slots)
> +{
> +	struct kvm_memory_slot *memslot;
> +	int srcu_idx;
> +
> +	if (!slots)
> +		return;
> +
> +	srcu_idx = srcu_read_lock(&kvm->srcu);
> +	kvm_for_each_memslot(memslot, slots)
> +		kvmppc_hmm_drop_pages(memslot, kvm);
> +	srcu_read_unlock(&kvm->srcu, srcu_idx);
> +}
> +
>  /*
>   * Get a free HMM PFN from the pool
>   *
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 3393b166817a..0c5fae004adf 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -42,6 +42,8 @@
>  #include <asm/hvcall.h>
>  #include <asm/plpar_wrappers.h>
>  #endif
> +#include <asm/ultravisor.h>
> +#include <asm/kvm_host.h>
> 
>  #include "timing.h"
>  #include "irq.h"
> @@ -2423,6 +2425,16 @@ long kvm_arch_vm_ioctl(struct file *filp,
>  			r = -EFAULT;
>  		break;
>  	}
> +	case KVM_PPC_SVM_OFF: {
> +		struct kvm *kvm = filp->private_data;
> +
> +		r = 0;
> +		if (!kvm->arch.kvm_ops->svm_off)
> +			goto out;
> +
> +		r = kvm->arch.kvm_ops->svm_off(kvm);
> +		break;
> +	}
>  	default: {
>  		struct kvm *kvm = filp->private_data;
>  		r = kvm->arch.kvm_ops->arch_vm_ioctl(filp, ioctl, arg);
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 2fe12b40d503..c1dbdd428025 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1327,6 +1327,7 @@ struct kvm_s390_ucas_mapping {
>  #define KVM_PPC_GET_RMMU_INFO	  _IOW(KVMIO,  0xb0, struct 
> kvm_ppc_rmmu_info)
>  /* Available with KVM_CAP_PPC_GET_CPU_CHAR */
>  #define KVM_PPC_GET_CPU_CHAR	  _IOR(KVMIO,  0xb1, struct 
> kvm_ppc_cpu_char)
> +#define KVM_PPC_SVM_OFF		  _IO(KVMIO,  0xb2)
> 
>  /* ioctl for vm fd */
>  #define KVM_CREATE_DEVICE	  _IOWR(KVMIO,  0xe0, struct 
> kvm_create_device)
> diff --git a/tools/include/uapi/linux/kvm.h 
> b/tools/include/uapi/linux/kvm.h
> index 6d4ea4b6c922..03c89f4a01b2 100644
> --- a/tools/include/uapi/linux/kvm.h
> +++ b/tools/include/uapi/linux/kvm.h
> @@ -1319,6 +1319,7 @@ struct kvm_s390_ucas_mapping {
>  #define KVM_PPC_GET_RMMU_INFO	  _IOW(KVMIO,  0xb0, struct 
> kvm_ppc_rmmu_info)
>  /* Available with KVM_CAP_PPC_GET_CPU_CHAR */
>  #define KVM_PPC_GET_CPU_CHAR	  _IOR(KVMIO,  0xb1, struct 
> kvm_ppc_cpu_char)
> +#define KVM_PPC_SVM_OFF		  _IO(KVMIO,  0xb2)
> 
>  /* ioctl for vm fd */
>  #define KVM_CREATE_DEVICE	  _IOWR(KVMIO,  0xe0, struct 
> kvm_create_device)


^ permalink raw reply	[flat|nested] 58+ messages in thread
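
Since KVM_PPC_SVM_OFF carries no payload, the QEMU side of this boils down
to a single ioctl on the VM file descriptor at reset time. A minimal,
hypothetical userspace sketch (it assumes the series is applied so that
linux/kvm.h defines KVM_PPC_SVM_OFF, and that vm_fd was obtained earlier
via KVM_CREATE_VM):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Hypothetical helper called from the machine reset path. */
    static int svm_off(int vm_fd)
    {
            int ret = ioctl(vm_fd, KVM_PPC_SVM_OFF, 0);

            if (ret < 0)
                    perror("KVM_PPC_SVM_OFF"); /* EINVAL or ENOMEM per the api.txt text above */
            return ret;
    }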

* Re: [RFC PATCH v5 6/7] kvmppc: Support reset of secure guest
@ 2019-07-10 13:30     ` janani
  0 siblings, 0 replies; 58+ messages in thread
From: janani @ 2019-07-10 13:30 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linuxram, cclaudio, kvm-ppc, Linuxppc-dev, linux-mm, jglisse,
	aneesh.kumar, paulus, sukadev, linuxppc-dev

On 2019-07-09 05:25, Bharata B Rao wrote:
> Add support for reset of secure guest via a new ioctl KVM_PPC_SVM_OFF.
> This ioctl will be issued by QEMU during reset and includes the
> the following steps:
> 
> - Ask UV to terminate the guest via UV_SVM_TERMINATE ucall
> - Unpin the VPA pages so that they can be migrated back to secure
>   side when guest becomes secure again. This is required because
>   pinned pages can't be migrated.
> - Reinitialize guest's partitioned scoped page tables. These are
>   freed when guest become secure (H_SVM_INIT_DONE)
> - Release all HMM pages of the secure guest.
> 
> After these steps, guest is ready to issue UV_ESM call once again
> to switch to secure mode.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
> 	[Implementation of uv_svm_terminate() and its call from
> 	guest shutdown path]
> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> 	[Unpinning of VPA pages]
  Reviewed-by: Janani Janakiraman <janani@linux.ibm.com>
> ---
>  Documentation/virtual/kvm/api.txt         | 19 ++++++
>  arch/powerpc/include/asm/kvm_book3s_hmm.h |  7 +++
>  arch/powerpc/include/asm/kvm_ppc.h        |  2 +
>  arch/powerpc/include/asm/ultravisor-api.h |  1 +
>  arch/powerpc/include/asm/ultravisor.h     |  7 +++
>  arch/powerpc/kvm/book3s_hv.c              | 70 +++++++++++++++++++++++
>  arch/powerpc/kvm/book3s_hv_hmm.c          | 60 +++++++++++++++++++
>  arch/powerpc/kvm/powerpc.c                | 12 ++++
>  include/uapi/linux/kvm.h                  |  1 +
>  tools/include/uapi/linux/kvm.h            |  1 +
>  10 files changed, 180 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/api.txt
> b/Documentation/virtual/kvm/api.txt
> index ba6c42c576dd..c89c24ad86ed 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -4065,6 +4065,25 @@ KVM_ARM_VCPU_FINALIZE call.
>  See KVM_ARM_VCPU_INIT for details of vcpu features that require 
> finalization
>  using this ioctl.
> 
> +4.120 KVM_PPC_SVM_OFF
> +
> +Capability: basic
> +Architectures: powerpc
> +Type: vm ioctl
> +Parameters: none
> +Returns: 0 on successful completion,
> +Errors:
> +  EINVAL:    if ultravisor failed to terminate the secure guest
> +  ENOMEM:    if hypervisor failed to allocate new radix page tables 
> for guest
> +
> +This ioctl is used to turn off the secure mode of the guest or 
> transition
> +the guest from secure mode to normal mode. This is invoked when the 
> guest
> +is reset. This has no effect if called for a normal guest.
> +
> +This ioctl issues an ultravisor call to terminate the secure guest,
> +unpin the VPA pages, reinitialize guest's partition scoped page
> +tables and releases all the HMM pages that is associated with this 
> guest.
> +
>  5. The kvm_run structure
>  ------------------------
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h
> b/arch/powerpc/include/asm/kvm_book3s_hmm.h
> index 8c7aacabb2e0..cd2b0f927766 100644
> --- a/arch/powerpc/include/asm/kvm_book3s_hmm.h
> +++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
> @@ -13,6 +13,8 @@ extern unsigned long kvmppc_h_svm_page_out(struct kvm 
> *kvm,
>  					  unsigned long page_shift);
>  extern unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
>  extern unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
> +extern void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm,
> +		struct kvm_memslots *slots);
>  #else
>  static inline unsigned long
>  kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gra,
> @@ -37,5 +39,10 @@ static inline unsigned long
> kvmppc_h_svm_init_done(struct kvm *kvm)
>  {
>  	return H_UNSUPPORTED;
>  }
> +
> +static inline void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm,
> +		struct kvm_memslots *slots)
> +{
> +}
>  #endif /* CONFIG_PPC_UV */
>  #endif /* __POWERPC_KVM_PPC_HMM_H__ */
> diff --git a/arch/powerpc/include/asm/kvm_ppc.h
> b/arch/powerpc/include/asm/kvm_ppc.h
> index bc892380e6cd..d80ece28d65d 100644
> --- a/arch/powerpc/include/asm/kvm_ppc.h
> +++ b/arch/powerpc/include/asm/kvm_ppc.h
> @@ -188,6 +188,7 @@ extern void
> kvm_spapr_tce_release_iommu_group(struct kvm *kvm,
>  extern int kvmppc_switch_mmu_to_hpt(struct kvm *kvm);
>  extern int kvmppc_switch_mmu_to_radix(struct kvm *kvm);
>  extern void kvmppc_setup_partition_table(struct kvm *kvm);
> +extern int kvmppc_reinit_partition_table(struct kvm *kvm);
> 
>  extern long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
>  				struct kvm_create_spapr_tce_64 *args);
> @@ -332,6 +333,7 @@ struct kvmppc_ops {
>  			       int size);
>  	int (*store_to_eaddr)(struct kvm_vcpu *vcpu, ulong *eaddr, void *ptr,
>  			      int size);
> +	int (*svm_off)(struct kvm *kvm);
>  };
> 
>  extern struct kvmppc_ops *kvmppc_hv_ops;
> diff --git a/arch/powerpc/include/asm/ultravisor-api.h
> b/arch/powerpc/include/asm/ultravisor-api.h
> index 9f5510b55892..c8180427fa01 100644
> --- a/arch/powerpc/include/asm/ultravisor-api.h
> +++ b/arch/powerpc/include/asm/ultravisor-api.h
> @@ -25,5 +25,6 @@
>  #define UV_PAGE_IN			0xF128
>  #define UV_PAGE_OUT			0xF12C
>  #define UV_PAGE_INVAL			0xF138
> +#define UV_SVM_TERMINATE		0xF13C
> 
>  #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
> diff --git a/arch/powerpc/include/asm/ultravisor.h
> b/arch/powerpc/include/asm/ultravisor.h
> index f4f674794b35..a5e4516c8ddb 100644
> --- a/arch/powerpc/include/asm/ultravisor.h
> +++ b/arch/powerpc/include/asm/ultravisor.h
> @@ -84,6 +84,13 @@ static inline int uv_page_inval(u64 lpid, u64 gpa,
> u64 page_shift)
> 
>  	return ucall(UV_PAGE_INVAL, retbuf, lpid, gpa, page_shift);
>  }
> +
> +static inline int uv_svm_terminate(u64 lpid)
> +{
> +	unsigned long retbuf[UCALL_BUFSIZE];
> +
> +	return ucall(UV_SVM_TERMINATE, retbuf, lpid);
> +}
>  #endif /* !__ASSEMBLY__ */
> 
>  #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
> diff --git a/arch/powerpc/kvm/book3s_hv.c 
> b/arch/powerpc/kvm/book3s_hv.c
> index 7cbb5edaed01..41b0fff199d3 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -2437,6 +2437,15 @@ static void unpin_vpa(struct kvm *kvm, struct
> kvmppc_vpa *vpa)
>  					vpa->dirty);
>  }
> 
> +static void unpin_vpa_reset(struct kvm *kvm, struct kvmppc_vpa *vpa)
> +{
> +	unpin_vpa(kvm, vpa);
> +	vpa->gpa = 0;
> +	vpa->pinned_addr = NULL;
> +	vpa->dirty = false;
> +	vpa->update_pending = 0;
> +}
> +
>  static void kvmppc_core_vcpu_free_hv(struct kvm_vcpu *vcpu)
>  {
>  	spin_lock(&vcpu->arch.vpa_update_lock);
> @@ -4577,6 +4586,22 @@ void kvmppc_setup_partition_table(struct kvm 
> *kvm)
>  	kvmhv_set_ptbl_entry(kvm->arch.lpid, dw0, dw1);
>  }
> 
> +/*
> + * Called from KVM_PPC_SVM_OFF ioctl at guest reset time when secure
> + * guest is converted back to normal guest.
> + */
> +int kvmppc_reinit_partition_table(struct kvm *kvm)
> +{
> +	int ret;
> +
> +	ret = kvmppc_init_vm_radix(kvm);
> +	if (ret)
> +		return ret;
> +
> +	kvmppc_setup_partition_table(kvm);
> +	return 0;
> +}
> +
>  /*
>   * Set up HPT (hashed page table) and RMA (real-mode area).
>   * Must be called with kvm->lock held.
> @@ -4956,6 +4981,7 @@ static void kvmppc_core_destroy_vm_hv(struct kvm 
> *kvm)
>  		if (nesting_enabled(kvm))
>  			kvmhv_release_all_nested(kvm);
>  		kvm->arch.process_table = 0;
> +		uv_svm_terminate(kvm->arch.lpid);
>  		kvmhv_set_ptbl_entry(kvm->arch.lpid, 0, 0);
>  	}
>  	kvmppc_free_lpid(kvm->arch.lpid);
> @@ -5397,6 +5423,49 @@ static int kvmhv_store_to_eaddr(struct kvm_vcpu
> *vcpu, ulong *eaddr, void *ptr,
>  	return rc;
>  }
> 
> +/*
> + *  IOCTL handler to turn off secure mode of guest
> + *
> + * - Issue ucall to terminate the guest on the UV side
> + * - Unpin the VPA pages (Enables these pages to be migrated back
> + *   when VM becomes secure again)
> + * - Recreate partition table as the guest is transitioning back to
> + *   normal mode
> + * - Release all HMM pages
> + */
> +static int kvmhv_svm_off(struct kvm *kvm)
> +{
> +	struct kvm_vcpu *vcpu;
> +	int ret = 0;
> +	int i;
> +
> +	if (kvmppc_is_guest_secure(kvm)) {
> +		ret = uv_svm_terminate(kvm->arch.lpid);
> +		if (ret != U_SUCCESS) {
> +			ret = -EINVAL;
> +			goto out;
> +		}
> +
> +		kvm_for_each_vcpu(i, vcpu, kvm) {
> +			spin_lock(&vcpu->arch.vpa_update_lock);
> +			unpin_vpa_reset(kvm, &vcpu->arch.dtl);
> +			unpin_vpa_reset(kvm, &vcpu->arch.slb_shadow);
> +			unpin_vpa_reset(kvm, &vcpu->arch.vpa);
> +			spin_unlock(&vcpu->arch.vpa_update_lock);
> +		}
> +
> +		ret = kvmppc_reinit_partition_table(kvm);
> +		if (ret)
> +			goto out;
> +		kvm->arch.secure_guest = 0;
> +		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
> +			kvmppc_hmm_free_memslot_pfns(kvm,
> +			__kvm_memslots(kvm, i));
> +	}
> +out:
> +	return ret;
> +}
> +
>  static struct kvmppc_ops kvm_ops_hv = {
>  	.get_sregs = kvm_arch_vcpu_ioctl_get_sregs_hv,
>  	.set_sregs = kvm_arch_vcpu_ioctl_set_sregs_hv,
> @@ -5439,6 +5508,7 @@ static struct kvmppc_ops kvm_ops_hv = {
>  	.enable_nested = kvmhv_enable_nested,
>  	.load_from_eaddr = kvmhv_load_from_eaddr,
>  	.store_to_eaddr = kvmhv_store_to_eaddr,
> +	.svm_off = kvmhv_svm_off,
>  };
> 
>  static int kvm_init_subcore_bitmap(void)
> diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c 
> b/arch/powerpc/kvm/book3s_hv_hmm.c
> index 9e6c88de456f..165926c069ba 100644
> --- a/arch/powerpc/kvm/book3s_hv_hmm.c
> +++ b/arch/powerpc/kvm/book3s_hv_hmm.c
> @@ -38,6 +38,8 @@
>  #include <linux/kvm_host.h>
>  #include <linux/sched/mm.h>
>  #include <asm/ultravisor.h>
> +#include <asm/kvm_ppc.h>
> +#include <asm/kvm_book3s.h>
> 
>  struct kvmppc_hmm_device {
>  	struct hmm_device *device;
> @@ -98,6 +100,12 @@ unsigned long kvmppc_h_svm_init_done(struct kvm 
> *kvm)
>  		return H_UNSUPPORTED;
> 
>  	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE;
> +	if (kvm_is_radix(kvm)) {
> +		pr_info("LPID %d went secure, freeing HV side radix pgtables\n",
> +			kvm->arch.lpid);
> +		kvmppc_free_radix(kvm);
> +	}
> +
>  	return H_SUCCESS;
>  }
> 
> @@ -113,6 +121,58 @@ static inline bool kvmppc_is_hmm_pfn(unsigned long 
> pfn)
>  	return !!(pfn & KVMPPC_PFN_HMM);
>  }
> 
> +/*
> + * Drop HMM pages that we maintain for the secure guest
> + *
> + * We mark the pages to be skipped from UV_PAGE_OUT when there is HMM
> + * fault on these pages. Next we get these pages, force HMM fault,
> + * do fault migration to replace the HMM PTEs from QEMU page tables
> + * with normal PTEs from newly allocated pages.
> + */
> +static void kvmppc_hmm_drop_pages(struct kvm_memory_slot *free,
> struct kvm *kvm)
> +{
> +	int i;
> +	struct kvmppc_hmm_page_pvt *pvt;
> +	unsigned long pfn;
> +
> +	for (i = 0; i < free->npages; i++) {
> +		unsigned long *rmap = &free->arch.rmap[i];
> +		struct page *hmm_page;
> +
> +		if (kvmppc_is_hmm_pfn(*rmap)) {
> +			hmm_page = pfn_to_page(*rmap & ~KVMPPC_PFN_HMM);
> +			pvt = (struct kvmppc_hmm_page_pvt *)
> +				hmm_devmem_page_get_drvdata(hmm_page);
> +			pvt->skip_page_out = true;
> +
> +			pfn = gfn_to_pfn(kvm, pvt->gpa >> PAGE_SHIFT);
> +			if (is_error_noslot_pfn(pfn))
> +				continue;
> +			kvm_release_pfn_clean(pfn);
> +		}
> +	}
> +}
> +
> +/*
> + * Called from KVM_PPC_SVM_OFF ioctl when secure guest is reset
> + *
> + * UV has already cleaned up the guest, we release any HMM pages that
> + * we maintain
> + */
> +void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm, struct kvm_memslots 
> *slots)
> +{
> +	struct kvm_memory_slot *memslot;
> +	int srcu_idx;
> +
> +	if (!slots)
> +		return;
> +
> +	srcu_idx = srcu_read_lock(&kvm->srcu);
> +	kvm_for_each_memslot(memslot, slots)
> +		kvmppc_hmm_drop_pages(memslot, kvm);
> +	srcu_read_unlock(&kvm->srcu, srcu_idx);
> +}
> +
>  /*
>   * Get a free HMM PFN from the pool
>   *
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 3393b166817a..0c5fae004adf 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -42,6 +42,8 @@
>  #include <asm/hvcall.h>
>  #include <asm/plpar_wrappers.h>
>  #endif
> +#include <asm/ultravisor.h>
> +#include <asm/kvm_host.h>
> 
>  #include "timing.h"
>  #include "irq.h"
> @@ -2423,6 +2425,16 @@ long kvm_arch_vm_ioctl(struct file *filp,
>  			r = -EFAULT;
>  		break;
>  	}
> +	case KVM_PPC_SVM_OFF: {
> +		struct kvm *kvm = filp->private_data;
> +
> +		r = 0;
> +		if (!kvm->arch.kvm_ops->svm_off)
> +			goto out;
> +
> +		r = kvm->arch.kvm_ops->svm_off(kvm);
> +		break;
> +	}
>  	default: {
>  		struct kvm *kvm = filp->private_data;
>  		r = kvm->arch.kvm_ops->arch_vm_ioctl(filp, ioctl, arg);
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 2fe12b40d503..c1dbdd428025 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1327,6 +1327,7 @@ struct kvm_s390_ucas_mapping {
>  #define KVM_PPC_GET_RMMU_INFO	  _IOW(KVMIO,  0xb0, struct 
> kvm_ppc_rmmu_info)
>  /* Available with KVM_CAP_PPC_GET_CPU_CHAR */
>  #define KVM_PPC_GET_CPU_CHAR	  _IOR(KVMIO,  0xb1, struct 
> kvm_ppc_cpu_char)
> +#define KVM_PPC_SVM_OFF		  _IO(KVMIO,  0xb2)
> 
>  /* ioctl for vm fd */
>  #define KVM_CREATE_DEVICE	  _IOWR(KVMIO,  0xe0, struct 
> kvm_create_device)
> diff --git a/tools/include/uapi/linux/kvm.h 
> b/tools/include/uapi/linux/kvm.h
> index 6d4ea4b6c922..03c89f4a01b2 100644
> --- a/tools/include/uapi/linux/kvm.h
> +++ b/tools/include/uapi/linux/kvm.h
> @@ -1319,6 +1319,7 @@ struct kvm_s390_ucas_mapping {
>  #define KVM_PPC_GET_RMMU_INFO	  _IOW(KVMIO,  0xb0, struct 
> kvm_ppc_rmmu_info)
>  /* Available with KVM_CAP_PPC_GET_CPU_CHAR */
>  #define KVM_PPC_GET_CPU_CHAR	  _IOR(KVMIO,  0xb1, struct 
> kvm_ppc_cpu_char)
> +#define KVM_PPC_SVM_OFF		  _IO(KVMIO,  0xb2)
> 
>  /* ioctl for vm fd */
>  #define KVM_CREATE_DEVICE	  _IOWR(KVMIO,  0xe0, struct 
> kvm_create_device)

^ permalink raw reply	[flat|nested] 58+ messages in thread
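
A minimal userspace sketch of how a VMM such as QEMU might drive the new vm
ioctl at guest reset. It is not taken from the QEMU sources; the helper name
and the error handling are illustrative assumptions, and only KVM_PPC_SVM_OFF
itself comes from the patch above.

	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/*
	 * vm_fd is the fd returned by KVM_CREATE_VM.  KVM_PPC_SVM_OFF takes
	 * no argument and has no effect on a guest that never went secure.
	 */
	static int reset_secure_guest(int vm_fd)
	{
		if (ioctl(vm_fd, KVM_PPC_SVM_OFF) < 0) {
			/* EINVAL or ENOMEM as documented in api.txt */
			perror("KVM_PPC_SVM_OFF");
			return -1;
		}
		return 0;
	}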

* Re: [RFC PATCH v5 6/7] kvmppc: Support reset of secure guest
@ 2019-07-10 13:30     ` janani
  0 siblings, 0 replies; 58+ messages in thread
From: janani @ 2019-07-10 13:30 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: linuxppc-dev, linuxram, cclaudio, kvm-ppc, linux-mm, jglisse,
	aneesh.kumar, paulus, sukadev, Linuxppc-dev

On 2019-07-09 05:25, Bharata B Rao wrote:
> Add support for reset of secure guest via a new ioctl KVM_PPC_SVM_OFF.
> This ioctl will be issued by QEMU during reset and includes the
> following steps:
> 
> - Ask UV to terminate the guest via UV_SVM_TERMINATE ucall
> - Unpin the VPA pages so that they can be migrated back to secure
>   side when guest becomes secure again. This is required because
>   pinned pages can't be migrated.
> - Reinitialize the guest's partition-scoped page tables. These are
>   freed when the guest becomes secure (H_SVM_INIT_DONE).
> - Release all HMM pages of the secure guest.
> 
> After these steps, guest is ready to issue UV_ESM call once again
> to switch to secure mode.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
> 	[Implementation of uv_svm_terminate() and its call from
> 	guest shutdown path]
> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> 	[Unpinning of VPA pages]
  Reviewed-by: Janani Janakiraman <janani@linux.ibm.com>
> ---
>  Documentation/virtual/kvm/api.txt         | 19 ++++++
>  arch/powerpc/include/asm/kvm_book3s_hmm.h |  7 +++
>  arch/powerpc/include/asm/kvm_ppc.h        |  2 +
>  arch/powerpc/include/asm/ultravisor-api.h |  1 +
>  arch/powerpc/include/asm/ultravisor.h     |  7 +++
>  arch/powerpc/kvm/book3s_hv.c              | 70 +++++++++++++++++++++++
>  arch/powerpc/kvm/book3s_hv_hmm.c          | 60 +++++++++++++++++++
>  arch/powerpc/kvm/powerpc.c                | 12 ++++
>  include/uapi/linux/kvm.h                  |  1 +
>  tools/include/uapi/linux/kvm.h            |  1 +
>  10 files changed, 180 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/api.txt
> b/Documentation/virtual/kvm/api.txt
> index ba6c42c576dd..c89c24ad86ed 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -4065,6 +4065,25 @@ KVM_ARM_VCPU_FINALIZE call.
>  See KVM_ARM_VCPU_INIT for details of vcpu features that require 
> finalization
>  using this ioctl.
> 
> +4.120 KVM_PPC_SVM_OFF
> +
> +Capability: basic
> +Architectures: powerpc
> +Type: vm ioctl
> +Parameters: none
> +Returns: 0 on successful completion,
> +Errors:
> +  EINVAL:    if ultravisor failed to terminate the secure guest
> +  ENOMEM:    if hypervisor failed to allocate new radix page tables for guest
> +
> +This ioctl is used to turn off the secure mode of the guest or transition
> +the guest from secure mode to normal mode. This is invoked when the guest
> +is reset. This has no effect if called for a normal guest.
> +
> +This ioctl issues an ultravisor call to terminate the secure guest,
> +unpins the VPA pages, reinitializes the guest's partition-scoped page
> +tables and releases all the HMM pages that are associated with this
> +guest.
> +
>  5. The kvm_run structure
>  ------------------------
> 
> diff --git a/arch/powerpc/include/asm/kvm_book3s_hmm.h
> b/arch/powerpc/include/asm/kvm_book3s_hmm.h
> index 8c7aacabb2e0..cd2b0f927766 100644
> --- a/arch/powerpc/include/asm/kvm_book3s_hmm.h
> +++ b/arch/powerpc/include/asm/kvm_book3s_hmm.h
> @@ -13,6 +13,8 @@ extern unsigned long kvmppc_h_svm_page_out(struct kvm 
> *kvm,
>  					  unsigned long page_shift);
>  extern unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
>  extern unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
> +extern void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm,
> +		struct kvm_memslots *slots);
>  #else
>  static inline unsigned long
>  kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gra,
> @@ -37,5 +39,10 @@ static inline unsigned long
> kvmppc_h_svm_init_done(struct kvm *kvm)
>  {
>  	return H_UNSUPPORTED;
>  }
> +
> +static inline void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm,
> +		struct kvm_memslots *slots)
> +{
> +}
>  #endif /* CONFIG_PPC_UV */
>  #endif /* __POWERPC_KVM_PPC_HMM_H__ */
> diff --git a/arch/powerpc/include/asm/kvm_ppc.h
> b/arch/powerpc/include/asm/kvm_ppc.h
> index bc892380e6cd..d80ece28d65d 100644
> --- a/arch/powerpc/include/asm/kvm_ppc.h
> +++ b/arch/powerpc/include/asm/kvm_ppc.h
> @@ -188,6 +188,7 @@ extern void
> kvm_spapr_tce_release_iommu_group(struct kvm *kvm,
>  extern int kvmppc_switch_mmu_to_hpt(struct kvm *kvm);
>  extern int kvmppc_switch_mmu_to_radix(struct kvm *kvm);
>  extern void kvmppc_setup_partition_table(struct kvm *kvm);
> +extern int kvmppc_reinit_partition_table(struct kvm *kvm);
> 
>  extern long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
>  				struct kvm_create_spapr_tce_64 *args);
> @@ -332,6 +333,7 @@ struct kvmppc_ops {
>  			       int size);
>  	int (*store_to_eaddr)(struct kvm_vcpu *vcpu, ulong *eaddr, void *ptr,
>  			      int size);
> +	int (*svm_off)(struct kvm *kvm);
>  };
> 
>  extern struct kvmppc_ops *kvmppc_hv_ops;
> diff --git a/arch/powerpc/include/asm/ultravisor-api.h
> b/arch/powerpc/include/asm/ultravisor-api.h
> index 9f5510b55892..c8180427fa01 100644
> --- a/arch/powerpc/include/asm/ultravisor-api.h
> +++ b/arch/powerpc/include/asm/ultravisor-api.h
> @@ -25,5 +25,6 @@
>  #define UV_PAGE_IN			0xF128
>  #define UV_PAGE_OUT			0xF12C
>  #define UV_PAGE_INVAL			0xF138
> +#define UV_SVM_TERMINATE		0xF13C
> 
>  #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
> diff --git a/arch/powerpc/include/asm/ultravisor.h
> b/arch/powerpc/include/asm/ultravisor.h
> index f4f674794b35..a5e4516c8ddb 100644
> --- a/arch/powerpc/include/asm/ultravisor.h
> +++ b/arch/powerpc/include/asm/ultravisor.h
> @@ -84,6 +84,13 @@ static inline int uv_page_inval(u64 lpid, u64 gpa,
> u64 page_shift)
> 
>  	return ucall(UV_PAGE_INVAL, retbuf, lpid, gpa, page_shift);
>  }
> +
> +static inline int uv_svm_terminate(u64 lpid)
> +{
> +	unsigned long retbuf[UCALL_BUFSIZE];
> +
> +	return ucall(UV_SVM_TERMINATE, retbuf, lpid);
> +}
>  #endif /* !__ASSEMBLY__ */
> 
>  #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
> diff --git a/arch/powerpc/kvm/book3s_hv.c 
> b/arch/powerpc/kvm/book3s_hv.c
> index 7cbb5edaed01..41b0fff199d3 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -2437,6 +2437,15 @@ static void unpin_vpa(struct kvm *kvm, struct
> kvmppc_vpa *vpa)
>  					vpa->dirty);
>  }
> 
> +static void unpin_vpa_reset(struct kvm *kvm, struct kvmppc_vpa *vpa)
> +{
> +	unpin_vpa(kvm, vpa);
> +	vpa->gpa = 0;
> +	vpa->pinned_addr = NULL;
> +	vpa->dirty = false;
> +	vpa->update_pending = 0;
> +}
> +
>  static void kvmppc_core_vcpu_free_hv(struct kvm_vcpu *vcpu)
>  {
>  	spin_lock(&vcpu->arch.vpa_update_lock);
> @@ -4577,6 +4586,22 @@ void kvmppc_setup_partition_table(struct kvm 
> *kvm)
>  	kvmhv_set_ptbl_entry(kvm->arch.lpid, dw0, dw1);
>  }
> 
> +/*
> + * Called from KVM_PPC_SVM_OFF ioctl at guest reset time when secure
> + * guest is converted back to normal guest.
> + */
> +int kvmppc_reinit_partition_table(struct kvm *kvm)
> +{
> +	int ret;
> +
> +	ret = kvmppc_init_vm_radix(kvm);
> +	if (ret)
> +		return ret;
> +
> +	kvmppc_setup_partition_table(kvm);
> +	return 0;
> +}
> +
>  /*
>   * Set up HPT (hashed page table) and RMA (real-mode area).
>   * Must be called with kvm->lock held.
> @@ -4956,6 +4981,7 @@ static void kvmppc_core_destroy_vm_hv(struct kvm 
> *kvm)
>  		if (nesting_enabled(kvm))
>  			kvmhv_release_all_nested(kvm);
>  		kvm->arch.process_table = 0;
> +		uv_svm_terminate(kvm->arch.lpid);
>  		kvmhv_set_ptbl_entry(kvm->arch.lpid, 0, 0);
>  	}
>  	kvmppc_free_lpid(kvm->arch.lpid);
> @@ -5397,6 +5423,49 @@ static int kvmhv_store_to_eaddr(struct kvm_vcpu
> *vcpu, ulong *eaddr, void *ptr,
>  	return rc;
>  }
> 
> +/*
> + *  IOCTL handler to turn off secure mode of guest
> + *
> + * - Issue ucall to terminate the guest on the UV side
> + * - Unpin the VPA pages (Enables these pages to be migrated back
> + *   when VM becomes secure again)
> + * - Recreate partition table as the guest is transitioning back to
> + *   normal mode
> + * - Release all HMM pages
> + */
> +static int kvmhv_svm_off(struct kvm *kvm)
> +{
> +	struct kvm_vcpu *vcpu;
> +	int ret = 0;
> +	int i;
> +
> +	if (kvmppc_is_guest_secure(kvm)) {
> +		ret = uv_svm_terminate(kvm->arch.lpid);
> +		if (ret != U_SUCCESS) {
> +			ret = -EINVAL;
> +			goto out;
> +		}
> +
> +		kvm_for_each_vcpu(i, vcpu, kvm) {
> +			spin_lock(&vcpu->arch.vpa_update_lock);
> +			unpin_vpa_reset(kvm, &vcpu->arch.dtl);
> +			unpin_vpa_reset(kvm, &vcpu->arch.slb_shadow);
> +			unpin_vpa_reset(kvm, &vcpu->arch.vpa);
> +			spin_unlock(&vcpu->arch.vpa_update_lock);
> +		}
> +
> +		ret = kvmppc_reinit_partition_table(kvm);
> +		if (ret)
> +			goto out;
> +		kvm->arch.secure_guest = 0;
> +		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
> +			kvmppc_hmm_free_memslot_pfns(kvm,
> +			__kvm_memslots(kvm, i));
> +	}
> +out:
> +	return ret;
> +}
> +
>  static struct kvmppc_ops kvm_ops_hv = {
>  	.get_sregs = kvm_arch_vcpu_ioctl_get_sregs_hv,
>  	.set_sregs = kvm_arch_vcpu_ioctl_set_sregs_hv,
> @@ -5439,6 +5508,7 @@ static struct kvmppc_ops kvm_ops_hv = {
>  	.enable_nested = kvmhv_enable_nested,
>  	.load_from_eaddr = kvmhv_load_from_eaddr,
>  	.store_to_eaddr = kvmhv_store_to_eaddr,
> +	.svm_off = kvmhv_svm_off,
>  };
> 
>  static int kvm_init_subcore_bitmap(void)
> diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c 
> b/arch/powerpc/kvm/book3s_hv_hmm.c
> index 9e6c88de456f..165926c069ba 100644
> --- a/arch/powerpc/kvm/book3s_hv_hmm.c
> +++ b/arch/powerpc/kvm/book3s_hv_hmm.c
> @@ -38,6 +38,8 @@
>  #include <linux/kvm_host.h>
>  #include <linux/sched/mm.h>
>  #include <asm/ultravisor.h>
> +#include <asm/kvm_ppc.h>
> +#include <asm/kvm_book3s.h>
> 
>  struct kvmppc_hmm_device {
>  	struct hmm_device *device;
> @@ -98,6 +100,12 @@ unsigned long kvmppc_h_svm_init_done(struct kvm 
> *kvm)
>  		return H_UNSUPPORTED;
> 
>  	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE;
> +	if (kvm_is_radix(kvm)) {
> +		pr_info("LPID %d went secure, freeing HV side radix pgtables\n",
> +			kvm->arch.lpid);
> +		kvmppc_free_radix(kvm);
> +	}
> +
>  	return H_SUCCESS;
>  }
> 
> @@ -113,6 +121,58 @@ static inline bool kvmppc_is_hmm_pfn(unsigned long 
> pfn)
>  	return !!(pfn & KVMPPC_PFN_HMM);
>  }
> 
> +/*
> + * Drop HMM pages that we maintain for the secure guest
> + *
> + * We mark the pages to be skipped from UV_PAGE_OUT when there is an
> + * HMM fault on these pages. Next we get these pages, force an HMM
> + * fault and do fault migration to replace the HMM PTEs in the QEMU
> + * page tables with normal PTEs from newly allocated pages.
> + */
> +static void kvmppc_hmm_drop_pages(struct kvm_memory_slot *free,
> struct kvm *kvm)
> +{
> +	int i;
> +	struct kvmppc_hmm_page_pvt *pvt;
> +	unsigned long pfn;
> +
> +	for (i = 0; i < free->npages; i++) {
> +		unsigned long *rmap = &free->arch.rmap[i];
> +		struct page *hmm_page;
> +
> +		if (kvmppc_is_hmm_pfn(*rmap)) {
> +			hmm_page = pfn_to_page(*rmap & ~KVMPPC_PFN_HMM);
> +			pvt = (struct kvmppc_hmm_page_pvt *)
> +				hmm_devmem_page_get_drvdata(hmm_page);
> +			pvt->skip_page_out = true;
> +
> +			pfn = gfn_to_pfn(kvm, pvt->gpa >> PAGE_SHIFT);
> +			if (is_error_noslot_pfn(pfn))
> +				continue;
> +			kvm_release_pfn_clean(pfn);
> +		}
> +	}
> +}
> +
> +/*
> + * Called from KVM_PPC_SVM_OFF ioctl when secure guest is reset
> + *
> + * UV has already cleaned up the guest; here we release any HMM pages
> + * that we maintain.
> + */
> +void kvmppc_hmm_free_memslot_pfns(struct kvm *kvm, struct kvm_memslots 
> *slots)
> +{
> +	struct kvm_memory_slot *memslot;
> +	int srcu_idx;
> +
> +	if (!slots)
> +		return;
> +
> +	srcu_idx = srcu_read_lock(&kvm->srcu);
> +	kvm_for_each_memslot(memslot, slots)
> +		kvmppc_hmm_drop_pages(memslot, kvm);
> +	srcu_read_unlock(&kvm->srcu, srcu_idx);
> +}
> +
>  /*
>   * Get a free HMM PFN from the pool
>   *
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 3393b166817a..0c5fae004adf 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -42,6 +42,8 @@
>  #include <asm/hvcall.h>
>  #include <asm/plpar_wrappers.h>
>  #endif
> +#include <asm/ultravisor.h>
> +#include <asm/kvm_host.h>
> 
>  #include "timing.h"
>  #include "irq.h"
> @@ -2423,6 +2425,16 @@ long kvm_arch_vm_ioctl(struct file *filp,
>  			r = -EFAULT;
>  		break;
>  	}
> +	case KVM_PPC_SVM_OFF: {
> +		struct kvm *kvm = filp->private_data;
> +
> +		r = 0;
> +		if (!kvm->arch.kvm_ops->svm_off)
> +			goto out;
> +
> +		r = kvm->arch.kvm_ops->svm_off(kvm);
> +		break;
> +	}
>  	default: {
>  		struct kvm *kvm = filp->private_data;
>  		r = kvm->arch.kvm_ops->arch_vm_ioctl(filp, ioctl, arg);
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 2fe12b40d503..c1dbdd428025 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1327,6 +1327,7 @@ struct kvm_s390_ucas_mapping {
>  #define KVM_PPC_GET_RMMU_INFO	  _IOW(KVMIO,  0xb0, struct 
> kvm_ppc_rmmu_info)
>  /* Available with KVM_CAP_PPC_GET_CPU_CHAR */
>  #define KVM_PPC_GET_CPU_CHAR	  _IOR(KVMIO,  0xb1, struct 
> kvm_ppc_cpu_char)
> +#define KVM_PPC_SVM_OFF		  _IO(KVMIO,  0xb2)
> 
>  /* ioctl for vm fd */
>  #define KVM_CREATE_DEVICE	  _IOWR(KVMIO,  0xe0, struct 
> kvm_create_device)
> diff --git a/tools/include/uapi/linux/kvm.h 
> b/tools/include/uapi/linux/kvm.h
> index 6d4ea4b6c922..03c89f4a01b2 100644
> --- a/tools/include/uapi/linux/kvm.h
> +++ b/tools/include/uapi/linux/kvm.h
> @@ -1319,6 +1319,7 @@ struct kvm_s390_ucas_mapping {
>  #define KVM_PPC_GET_RMMU_INFO	  _IOW(KVMIO,  0xb0, struct 
> kvm_ppc_rmmu_info)
>  /* Available with KVM_CAP_PPC_GET_CPU_CHAR */
>  #define KVM_PPC_GET_CPU_CHAR	  _IOR(KVMIO,  0xb1, struct 
> kvm_ppc_cpu_char)
> +#define KVM_PPC_SVM_OFF		  _IO(KVMIO,  0xb2)
> 
>  /* ioctl for vm fd */
>  #define KVM_CREATE_DEVICE	  _IOWR(KVMIO,  0xe0, struct 
> kvm_create_device)

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 1/7] kvmppc: HMM backend driver to manage pages of secure guest
  2019-07-09 18:55     ` janani
@ 2019-07-10 13:47       ` Jason Gunthorpe
  -1 siblings, 0 replies; 58+ messages in thread
From: Jason Gunthorpe @ 2019-07-10 13:47 UTC (permalink / raw)
  To: janani
  Cc: Bharata B Rao, linuxppc-dev, linuxram, cclaudio, kvm-ppc,
	linux-mm, jglisse, aneesh.kumar, paulus, sukadev, Linuxppc-dev

On Tue, Jul 09, 2019 at 01:55:28PM -0500, janani wrote:

> > +int kvmppc_hmm_init(void)
> > +{
> > +	int ret = 0;
> > +	unsigned long size;
> > +
> > +	size = kvmppc_get_secmem_size();
> > +	if (!size) {
> > +		ret = -ENODEV;
> > +		goto out;
> > +	}
> > +
> > +	kvmppc_hmm.device = hmm_device_new(NULL);
> > +	if (IS_ERR(kvmppc_hmm.device)) {
> > +		ret = PTR_ERR(kvmppc_hmm.device);
> > +		goto out;
> > +	}
> > +
> > +	kvmppc_hmm.devmem = hmm_devmem_add(&kvmppc_hmm_devmem_ops,
> > +					   &kvmppc_hmm.device->device, size);
> > +	if (IS_ERR(kvmppc_hmm.devmem)) {
> > +		ret = PTR_ERR(kvmppc_hmm.devmem);
> > +		goto out_device;
> > +	}

This 'hmm_device' API family was recently deleted from hmm:

commit 07ec38917e68f0114b9c8aeeb1c584b5e73e4dd6
Author: Christoph Hellwig <hch@lst.de>
Date:   Wed Jun 26 14:27:01 2019 +0200

    mm: remove the struct hmm_device infrastructure
    
    This code is a trivial wrapper around device model helpers, which
    should have been integrated into the driver device model usage from
    the start.  Assuming it actually had users, which it never had since
    the code was added more than 1 1/2 years ago.

This patch should use the driver core directly instead.

Regards,
Jason


^ permalink raw reply	[flat|nested] 58+ messages in thread
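
As a rough illustration of the direction suggested here (and of the reworked
interfaces Christoph points to later in the thread), the
hmm_device_new()/hmm_devmem_add() pair could be replaced by filling in a
struct dev_pagemap and calling memremap_pages() directly. This is only a
sketch against the dev_pagemap API of that timeframe (~v5.3); the names
kvmppc_uvmem_pgmap, kvmppc_uvmem_ops and kvmppc_secmem_init() are
illustrative and do not come from any posted version of this driver.

	#include <linux/mm.h>
	#include <linux/ioport.h>
	#include <linux/memremap.h>

	static struct dev_pagemap kvmppc_uvmem_pgmap;

	static void kvmppc_uvmem_page_free(struct page *page)
	{
		/* return the backing device PFN to the driver's free pool */
	}

	static vm_fault_t kvmppc_uvmem_migrate_to_ram(struct vm_fault *vmf)
	{
		/* CPU fault on a device-private page: migrate it back to normal memory */
		return 0;
	}

	static const struct dev_pagemap_ops kvmppc_uvmem_ops = {
		.page_free	= kvmppc_uvmem_page_free,
		.migrate_to_ram	= kvmppc_uvmem_migrate_to_ram,
	};

	static int kvmppc_secmem_init(unsigned long size)
	{
		struct resource *res;
		void *addr;

		/* carve out a free physical range to back the device-private pages */
		res = request_free_mem_region(&iomem_resource, size, "kvmppc-secmem");
		if (IS_ERR(res))
			return PTR_ERR(res);

		kvmppc_uvmem_pgmap.type = MEMORY_DEVICE_PRIVATE;
		kvmppc_uvmem_pgmap.res = *res;
		kvmppc_uvmem_pgmap.ops = &kvmppc_uvmem_ops;

		addr = memremap_pages(&kvmppc_uvmem_pgmap, NUMA_NO_NODE);
		if (IS_ERR(addr)) {
			release_mem_region(res->start, resource_size(res));
			return PTR_ERR(addr);
		}
		return 0;
	}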

* Re: [PATCH v5 7/7] KVM: PPC: Ultravisor: Add PPC_UV config option
  2019-07-10 13:24     ` janani
@ 2019-07-10 14:15       ` Jason Gunthorpe
  -1 siblings, 0 replies; 58+ messages in thread
From: Jason Gunthorpe @ 2019-07-10 14:15 UTC (permalink / raw)
  To: janani
  Cc: Bharata B Rao, linuxppc-dev, linuxram, cclaudio, kvm-ppc,
	linux-mm, jglisse, aneesh.kumar, paulus, sukadev,
	Anshuman Khandual, Linuxppc-dev

On Wed, Jul 10, 2019 at 08:24:56AM -0500, janani wrote:
> On 2019-07-09 05:25, Bharata B Rao wrote:
> > From: Anshuman Khandual <khandual@linux.vnet.ibm.com>
> > 
> > CONFIG_PPC_UV adds support for ultravisor.
> > 
> > Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
> > Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
> > Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> > [ Update config help and commit message ]
> > Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
>  Reviewed-by: Janani Janakiraman <janani@linux.ibm.com>
> >  arch/powerpc/Kconfig | 20 ++++++++++++++++++++
> >  1 file changed, 20 insertions(+)
> > 
> > diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> > index f0e5b38d52e8..20c6c213d2be 100644
> > +++ b/arch/powerpc/Kconfig
> > @@ -440,6 +440,26 @@ config PPC_TRANSACTIONAL_MEM
> >           Support user-mode Transactional Memory on POWERPC.
> > 
> > +config PPC_UV
> > +	bool "Ultravisor support"
> > +	depends on KVM_BOOK3S_HV_POSSIBLE
> > +	select HMM_MIRROR
> > +	select HMM
> > +	select ZONE_DEVICE

These configs have also been changed lately, I didn't see any calls to
hmm_mirror in this patchset, so most likely the two HMM selects should
be dropped and all you'll need is ZONE_DEVICE..

Jason


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 1/7] kvmppc: HMM backend driver to manage pages of secure guest
  2019-07-10 13:47       ` Jason Gunthorpe
@ 2019-07-11  5:08         ` Bharata B Rao
  -1 siblings, 0 replies; 58+ messages in thread
From: Bharata B Rao @ 2019-07-11  5:08 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: janani, linuxppc-dev, linuxram, cclaudio, kvm-ppc, linux-mm,
	jglisse, aneesh.kumar, paulus, sukadev, Linuxppc-dev

On Wed, Jul 10, 2019 at 10:47:34AM -0300, Jason Gunthorpe wrote:
> On Tue, Jul 09, 2019 at 01:55:28PM -0500, janani wrote:
> 
> > > +int kvmppc_hmm_init(void)
> > > +{
> > > +	int ret = 0;
> > > +	unsigned long size;
> > > +
> > > +	size = kvmppc_get_secmem_size();
> > > +	if (!size) {
> > > +		ret = -ENODEV;
> > > +		goto out;
> > > +	}
> > > +
> > > +	kvmppc_hmm.device = hmm_device_new(NULL);
> > > +	if (IS_ERR(kvmppc_hmm.device)) {
> > > +		ret = PTR_ERR(kvmppc_hmm.device);
> > > +		goto out;
> > > +	}
> > > +
> > > +	kvmppc_hmm.devmem = hmm_devmem_add(&kvmppc_hmm_devmem_ops,
> > > +					   &kvmppc_hmm.device->device, size);
> > > +	if (IS_ERR(kvmppc_hmm.devmem)) {
> > > +		ret = PTR_ERR(kvmppc_hmm.devmem);
> > > +		goto out_device;
> > > +	}
> 
> This 'hmm_device' API family was recently deleted from hmm:

Hmmm... I still find it in upstream, guess it will be removed soon?

I find the below commit in mmotm.

> 
> commit 07ec38917e68f0114b9c8aeeb1c584b5e73e4dd6
> Author: Christoph Hellwig <hch@lst.de>
> Date:   Wed Jun 26 14:27:01 2019 +0200
> 
>     mm: remove the struct hmm_device infrastructure
>     
>     This code is a trivial wrapper around device model helpers, which
>     should have been integrated into the driver device model usage from
>     the start.  Assuming it actually had users, which it never had since
>     the code was added more than 1 1/2 years ago.
> 
> This patch should use the driver core directly instead.
> 
> Regards,
> Jason


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 1/7] kvmppc: HMM backend driver to manage pages of secure guest
  2019-07-11  5:08         ` Bharata B Rao
@ 2019-07-19  6:46           ` Christoph Hellwig
  -1 siblings, 0 replies; 58+ messages in thread
From: Christoph Hellwig @ 2019-07-19  6:46 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: Jason Gunthorpe, linuxram, cclaudio, kvm-ppc, Linuxppc-dev,
	linux-mm, jglisse, janani, aneesh.kumar, paulus, sukadev,
	linuxppc-dev

On Thu, Jul 11, 2019 at 10:38:48AM +0530, Bharata B Rao wrote:
> Hmmm... I still find it in upstream, guess it will be removed soon?
> 
> I find the below commit in mmotm.

Please take a look at the latest hmm code in mainline, there have
also been other significant changes as well.


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v5 1/7] kvmppc: HMM backend driver to manage pages of secure guest
  2019-07-19  6:46           ` Christoph Hellwig
@ 2019-07-19  8:26             ` Bharata B Rao
  -1 siblings, 0 replies; 58+ messages in thread
From: Bharata B Rao @ 2019-07-19  8:14 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jason Gunthorpe, linuxram, cclaudio, kvm-ppc, Linuxppc-dev,
	linux-mm, jglisse, janani, aneesh.kumar, paulus, sukadev,
	linuxppc-dev

On Thu, Jul 18, 2019 at 11:46:41PM -0700, Christoph Hellwig wrote:
> On Thu, Jul 11, 2019 at 10:38:48AM +0530, Bharata B Rao wrote:
> > Hmmm... I still find it in upstream, guess it will be removed soon?
> > 
> > I find the below commit in mmotm.
> 
> Please take a look at the latest hmm code in mainline, there have
> also been other significant changes as well.

Yes, my next version of this patchset will be based on those recent
HMM related changes.

Regards,
Bharata.


^ permalink raw reply	[flat|nested] 58+ messages in thread
