linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH 00/12] Secure Virtual Machine Enablement
@ 2019-05-21  4:49 Thiago Jung Bauermann
  2019-05-21  4:49 ` [PATCH 01/12] powerpc/pseries: Introduce option to build secure virtual machines Thiago Jung Bauermann
                   ` (12 more replies)
  0 siblings, 13 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-21  4:49 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras,
	Christoph Hellwig, Thiago Jung Bauermann

This series enables Secure Virtual Machines (SVMs) on powerpc. SVMs use the
Protected Execution Facility (PEF) and request to be migrated to secure
memory during prom_init(), so by default all of their memory is inaccessible
to the hypervisor. There is an Ultravisor call that the VM can use to
request certain pages to be made accessible to (or shared with) the
hypervisor.

The objective of these patches is to have the guest perform this request
for buffers that need to be accessed by the hypervisor such as the LPPACAs,
the SWIOTLB memory and the Debug Trace Log.
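
As an illustration of the pattern used throughout the series, sharing a guest
buffer with the hypervisor looks roughly like the sketch below. It is
illustrative only: "buf" and "nbytes" are hypothetical, and is_secure_guest()
and uv_share_page() are the helpers introduced by later patches in this series.

/*
 * Illustrative sketch, not part of the series itself: make a guest buffer
 * accessible to the hypervisor.
 */
static void share_with_hypervisor(void *buf, unsigned long nbytes)
{
	unsigned long npages = PAGE_ALIGN(nbytes) >> PAGE_SHIFT;

	if (is_secure_guest())
		uv_share_page(PHYS_PFN(__pa(buf)), npages);
}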

The patch set applies on top of Claudio Carvalho's "kvmppc: Paravirtualize KVM
to support ultravisor" series:

https://lore.kernel.org/linuxppc-dev/20190518142524.28528-1-cclaudio@linux.ibm.com/

I only need the following two patches from his series:

[RFC PATCH v2 02/10] KVM: PPC: Ultravisor: Introduce the MSR_S bit
[RFC PATCH v2 04/10] KVM: PPC: Ultravisor: Add generic ultravisor call handler

Patches 2 and 3 are posted as RFC because we are still finalizing the
details on how the ESM blob will be passed to the kernel. All other patches
are (hopefully) in upstreamable shape.

Unfortunately this series still doesn't enable the use of virtio devices in
the secure guest. This support depends on a discussion that is currently
ongoing with the virtio community:

https://lore.kernel.org/linuxppc-dev/87womn8inf.fsf@morokweng.localdomain/

This was the last time I posted this patch set:

https://lore.kernel.org/linuxppc-dev/20180824162535.22798-1-bauerman@linux.ibm.com/

At that time, it wasn't possible to launch a real secure guest because the
Ultravisor was still in very early development. Now there is a relatively
mature Ultravisor and I was able to test it using Claudio's patches in the
host kernel, booting normally using an initramfs for the root filesystem.

This is the command used to start up the guest with QEMU 4.0:

qemu-system-ppc64				\
	-nodefaults				\
	-cpu host				\
	-machine pseries,accel=kvm,kvm-type=HV,cap-htm=off,cap-cfpc=broken,cap-sbbc=broken,cap-ibs=broken \
	-display none				\
	-serial mon:stdio			\
	-smp 1					\
	-m 4G					\
	-kernel /home/bauermann/vmlinux		\
	-initrd /home/bauermann/fs_small.cpio	\
	-append 'debug'

Changelog since the RFC from August:

- Patch "powerpc/pseries: Introduce option to build secure virtual machines"
  - New patch.

- Patch "powerpc: Add support for adding an ESM blob to the zImage wrapper"
  - Patch from Benjamin Herrenschmidt, first posted here:
    https://lore.kernel.org/linuxppc-dev/20180531043417.25073-1-benh@kernel.crashing.org/
  - Made minor adjustments to some comments. Code is unchanged.

- Patch "powerpc/prom_init: Add the ESM call to prom_init"
  - New patch from Ram Pai and Michael Anderson.

- Patch "powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE"
  - New patch from Ram Pai.

- Patch "powerpc/pseries: Add and use LPPACA_SIZE constant"
  - Moved LPPACA_SIZE macro inside the CONFIG_PPC_PSERIES #ifdef.
  - Put sizeof() operand left of comparison operator in BUILD_BUG_ON()
    macro to appease a checkpatch warning.

- Patch "powerpc/pseries/svm: Use shared memory for LPPACA structures"
  - Moved definition of is_secure_guest() helper to this patch.
  - Changed shared_lppaca and shared_lppaca_size from globals to static
    variables inside alloc_shared_lppaca().
  - Changed shared_lppaca to hold virtual address instead of physical
    address.

- Patch "powerpc/pseries/svm: Use shared memory for Debug Trace Log (DTL)"
  - Add get_dtl_cache_ctor() macro. Suggested by Ram Pai.

- Patch "powerpc/pseries/svm: Export guest SVM status to user space via sysfs"
  - New patch from Ryan Grimm.

- Patch "powerpc/pseries/svm: Disable doorbells in SVM guests"
  - New patch from Sukadev Bhattiprolu.

- Patch "powerpc/pseries/iommu: Don't use dma_iommu_ops on secure guests"
  - New patch.

- Patch "powerpc/pseries/svm: Force SWIOTLB for secure guests"
  - New patch with code that was previously in other patches.

- Patch "powerpc/configs: Enable secure guest support in pseries and ppc64 defconfigs"
  - New patch from Ryan Grimm.

- Patch "powerpc/pseries/svm: Detect Secure Virtual Machine (SVM) platform"
  - Dropped this patch by moving its code to other patches.

- Patch "powerpc/svm: Select CONFIG_DMA_DIRECT_OPS and CONFIG_SWIOTLB"
  - No need to select CONFIG_DMA_DIRECT_OPS anymore. The CONFIG_SWIOTLB
    change was moved to another patch and this patch was dropped.

- Patch "powerpc/pseries/svm: Add memory conversion (shared/secure) helper functions"
  - Dropped patch since the helper functions were unnecessary wrappers
    around uv_share_page() and uv_unshare_page().

- Patch "powerpc/svm: Convert SWIOTLB buffers to shared memory"
  - Squashed into patch "powerpc/pseries/svm: Force SWIOTLB for secure
    guests"

- Patch "powerpc/svm: Don't release SWIOTLB buffers on secure guests"
  - Squashed into patch "powerpc/pseries/svm: Force SWIOTLB for secure
    guests"

- Patch "powerpc/svm: Use SWIOTLB DMA API for all virtio devices"
  - Dropped patch. Enablement of virtio will use a different approach.

- Patch "powerpc/svm: Force the use of bounce buffers"
  - Squashed into patch "powerpc/pseries/svm: Force SWIOTLB for secure
    guests"
  - Added comment explaining why it's necessary to force use of SWIOTLB.
    Suggested by Christoph Hellwig.

- Patch "powerpc/svm: Increase SWIOTLB buffer size"
  - Dropped patch.


Anshuman Khandual (3):
  powerpc/pseries/svm: Use shared memory for LPPACA structures
  powerpc/pseries/svm: Use shared memory for Debug Trace Log (DTL)
  powerpc/pseries/svm: Force SWIOTLB for secure guests

Benjamin Herrenschmidt (1):
  powerpc: Add support for adding an ESM blob to the zImage wrapper

Ram Pai (2):
  powerpc/prom_init: Add the ESM call to prom_init
  powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE

Ryan Grimm (2):
  powerpc/pseries/svm: Export guest SVM status to user space via sysfs
  powerpc/configs: Enable secure guest support in pseries and ppc64
    defconfigs

Sukadev Bhattiprolu (1):
  powerpc/pseries/svm: Disable doorbells in SVM guests

Thiago Jung Bauermann (3):
  powerpc/pseries: Introduce option to build secure virtual machines
  powerpc/pseries: Add and use LPPACA_SIZE constant
  powerpc/pseries/iommu: Don't use dma_iommu_ops on secure guests

 .../admin-guide/kernel-parameters.txt         |   5 +
 arch/powerpc/boot/main.c                      |  41 ++++++
 arch/powerpc/boot/ops.h                       |   2 +
 arch/powerpc/boot/wrapper                     |  24 +++-
 arch/powerpc/boot/zImage.lds.S                |   8 ++
 arch/powerpc/configs/ppc64_defconfig          |   1 +
 arch/powerpc/configs/pseries_defconfig        |   1 +
 arch/powerpc/include/asm/mem_encrypt.h        |  19 +++
 arch/powerpc/include/asm/svm.h                |  31 +++++
 arch/powerpc/include/asm/ultravisor-api.h     |   3 +
 arch/powerpc/include/asm/ultravisor.h         |  16 ++-
 arch/powerpc/kernel/Makefile                  |   4 +-
 arch/powerpc/kernel/paca.c                    |  52 +++++++-
 arch/powerpc/kernel/prom_init.c               | 124 ++++++++++++++++++
 arch/powerpc/kernel/sysfs.c                   |  29 ++++
 arch/powerpc/platforms/pseries/Kconfig        |  17 +++
 arch/powerpc/platforms/pseries/Makefile       |   1 +
 arch/powerpc/platforms/pseries/iommu.c        |   6 +-
 arch/powerpc/platforms/pseries/setup.c        |   5 +-
 arch/powerpc/platforms/pseries/smp.c          |   3 +-
 arch/powerpc/platforms/pseries/svm.c          |  85 ++++++++++++
 21 files changed, 464 insertions(+), 13 deletions(-)
 create mode 100644 arch/powerpc/include/asm/mem_encrypt.h
 create mode 100644 arch/powerpc/include/asm/svm.h
 create mode 100644 arch/powerpc/platforms/pseries/svm.c



* [PATCH 01/12] powerpc/pseries: Introduce option to build secure virtual machines
  2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
@ 2019-05-21  4:49 ` Thiago Jung Bauermann
  2019-05-21  4:49 ` [RFC PATCH 02/12] powerpc: Add support for adding an ESM blob to the zImage wrapper Thiago Jung Bauermann
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-21  4:49 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras,
	Christoph Hellwig, Thiago Jung Bauermann

Introduce CONFIG_PPC_SVM to control support for secure guests and include
Ultravisor-related helpers when it is selected.

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/powerpc/include/asm/ultravisor.h  |  2 +-
 arch/powerpc/kernel/Makefile           |  4 +++-
 arch/powerpc/platforms/pseries/Kconfig | 12 ++++++++++++
 3 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index 4ffec7a36acd..09e0a615d96f 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -28,7 +28,7 @@ extern int early_init_dt_scan_ultravisor(unsigned long node, const char *uname,
  * This call supports up to 6 arguments and 4 return arguments. Use
  * UCALL_BUFSIZE to size the return argument buffer.
  */
-#if defined(CONFIG_PPC_UV)
+#if defined(CONFIG_PPC_UV) || defined(CONFIG_PPC_SVM)
 long ucall(unsigned long opcode, unsigned long *retbuf, ...);
 #else
 static long ucall(unsigned long opcode, unsigned long *retbuf, ...)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 43ff4546e469..1e9b721634c8 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -154,7 +154,9 @@ endif
 
 obj-$(CONFIG_EPAPR_PARAVIRT)	+= epapr_paravirt.o epapr_hcalls.o
 obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvm_emul.o
-obj-$(CONFIG_PPC_UV)		+= ultravisor.o ucall.o
+ifneq ($(CONFIG_PPC_UV)$(CONFIG_PPC_SVM),)
+obj-y				+= ultravisor.o ucall.o
+endif
 
 # Disable GCOV, KCOV & sanitizers in odd or sensitive code
 GCOV_PROFILE_prom_init.o := n
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 9c6b3d860518..82c16aa4f1ce 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -144,3 +144,15 @@ config PAPR_SCM
 	tristate "Support for the PAPR Storage Class Memory interface"
 	help
 	  Enable access to hypervisor provided storage class memory.
+
+config PPC_SVM
+	bool "Secure virtual machine (SVM) support for POWER"
+	depends on PPC_PSERIES
+	default n
+	help
+	 Support secure guests on POWER. There are certain POWER platforms which
+	 support secure guests using the Protected Execution Facility, with the
+	 help of an Ultravisor executing below the hypervisor layer. This
+	 enables the support for those guests.
+
+	 If unsure, say "N".



* [RFC PATCH 02/12] powerpc: Add support for adding an ESM blob to the zImage wrapper
  2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
  2019-05-21  4:49 ` [PATCH 01/12] powerpc/pseries: Introduce option to build secure virtual machines Thiago Jung Bauermann
@ 2019-05-21  4:49 ` Thiago Jung Bauermann
  2019-05-21  5:13   ` Christoph Hellwig
  2019-05-21  4:49 ` [RFC PATCH 03/12] powerpc/prom_init: Add the ESM call to prom_init Thiago Jung Bauermann
                   ` (10 subsequent siblings)
  12 siblings, 1 reply; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-21  4:49 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras,
	Christoph Hellwig, Thiago Jung Bauermann

From: Benjamin Herrenschmidt <benh@kernel.crashing.org>

For secure VMs, the signing tool will create a ticket called the "ESM blob"
for the Enter Secure Mode ultravisor call with the signatures of the kernel
and initrd among other things.

This adds support to the wrapper script for adding that blob via the "-e"
option to the zImage.pseries.

It also adds code to the zImage wrapper itself to retrieve and if necessary
relocate the blob, and pass its address to Linux via the device-tree, to be
later consumed by prom_init.
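
On the kernel side, the retrieval could eventually look something like the
sketch below. This is only an assumption for illustration -- the series
explicitly leaves the final mechanism open (see the FIXME in patch 03) -- but
it uses the property names set by prep_esm_blob() and helpers that already
exist in prom_init.c (call_prom(), prom_getprop()):

/* Hypothetical sketch, not part of this patch. */
static void * __init prom_get_esm_blob(void)
{
	phandle chosen;
	__be32 val;
	u32 start, end;

	chosen = call_prom("finddevice", 1, 1, ADDR("/chosen"));
	if (!PHANDLE_VALID(chosen))
		return NULL;

	val = 0;
	prom_getprop(chosen, "linux,esm-blob-start", &val, sizeof(val));
	start = be32_to_cpu(val);

	val = 0;
	prom_getprop(chosen, "linux,esm-blob-end", &val, sizeof(val));
	end = be32_to_cpu(val);

	/* Absence of the properties just means there is no blob. */
	if (!start || end <= start)
		return NULL;

	return (void *)(unsigned long)start;
}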

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[ Minor adjustments to some comments. ]
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/powerpc/boot/main.c       | 41 ++++++++++++++++++++++++++++++++++
 arch/powerpc/boot/ops.h        |  2 ++
 arch/powerpc/boot/wrapper      | 24 +++++++++++++++++---
 arch/powerpc/boot/zImage.lds.S |  8 +++++++
 4 files changed, 72 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/boot/main.c b/arch/powerpc/boot/main.c
index 78aaf4ffd7ab..ca612efd3e81 100644
--- a/arch/powerpc/boot/main.c
+++ b/arch/powerpc/boot/main.c
@@ -150,6 +150,46 @@ static struct addr_range prep_initrd(struct addr_range vmlinux, void *chosen,
 	return (struct addr_range){(void *)initrd_addr, initrd_size};
 }
 
+#ifdef __powerpc64__
+static void prep_esm_blob(struct addr_range vmlinux, void *chosen)
+{
+	unsigned long esm_blob_addr, esm_blob_size;
+
+	/* Do we have an ESM (Enter Secure Mode) blob? */
+	if (_esm_blob_end <= _esm_blob_start)
+		return;
+
+	printf("Attached ESM blob at 0x%p-0x%p\n\r",
+	       _esm_blob_start, _esm_blob_end);
+	esm_blob_addr = (unsigned long)_esm_blob_start;
+	esm_blob_size = _esm_blob_end - _esm_blob_start;
+
+	/*
+	 * If the ESM blob is too low it will be clobbered when the
+	 * kernel relocates to its final location.  In this case,
+	 * allocate a safer place and move it.
+	 */
+	if (esm_blob_addr < vmlinux.size) {
+		void *old_addr = (void *)esm_blob_addr;
+
+		printf("Allocating 0x%lx bytes for esm_blob ...\n\r",
+		       esm_blob_size);
+		esm_blob_addr = (unsigned long)malloc(esm_blob_size);
+		if (!esm_blob_addr)
+			fatal("Can't allocate memory for ESM blob !\n\r");
+		printf("Relocating ESM blob 0x%lx <- 0x%p (0x%lx bytes)\n\r",
+		       esm_blob_addr, old_addr, esm_blob_size);
+		memmove((void *)esm_blob_addr, old_addr, esm_blob_size);
+	}
+
+	/* Tell the kernel ESM blob address via device tree. */
+	setprop_val(chosen, "linux,esm-blob-start", (u32)(esm_blob_addr));
+	setprop_val(chosen, "linux,esm-blob-end", (u32)(esm_blob_addr + esm_blob_size));
+}
+#else
+static inline void prep_esm_blob(struct addr_range vmlinux, void *chosen) { }
+#endif
+
 /* A buffer that may be edited by tools operating on a zImage binary so as to
  * edit the command line passed to vmlinux (by setting /chosen/bootargs).
  * The buffer is put in it's own section so that tools may locate it easier.
@@ -218,6 +258,7 @@ void start(void)
 	vmlinux = prep_kernel();
 	initrd = prep_initrd(vmlinux, chosen,
 			     loader_info.initrd_addr, loader_info.initrd_size);
+	prep_esm_blob(vmlinux, chosen);
 	prep_cmdline(chosen);
 
 	printf("Finalizing device tree...");
diff --git a/arch/powerpc/boot/ops.h b/arch/powerpc/boot/ops.h
index cd043726ed88..e0606766480f 100644
--- a/arch/powerpc/boot/ops.h
+++ b/arch/powerpc/boot/ops.h
@@ -251,6 +251,8 @@ extern char _initrd_start[];
 extern char _initrd_end[];
 extern char _dtb_start[];
 extern char _dtb_end[];
+extern char _esm_blob_start[];
+extern char _esm_blob_end[];
 
 static inline __attribute__((const))
 int __ilog2_u32(u32 n)
diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
index f9141eaec6ff..36b2ad6cd5b7 100755
--- a/arch/powerpc/boot/wrapper
+++ b/arch/powerpc/boot/wrapper
@@ -14,6 +14,7 @@
 # -i initrd	specify initrd file
 # -d devtree	specify device-tree blob
 # -s tree.dts	specify device-tree source file (needs dtc installed)
+# -e esm_blob   specify ESM blob for secure images
 # -c		cache $kernel.strip.gz (use if present & newer, else make)
 # -C prefix	specify command prefix for cross-building tools
 #		(strip, objcopy, ld)
@@ -38,6 +39,7 @@ platform=of
 initrd=
 dtb=
 dts=
+esm_blob=
 cacheit=
 binary=
 compression=.gz
@@ -60,9 +62,9 @@ tmpdir=.
 
 usage() {
     echo 'Usage: wrapper [-o output] [-p platform] [-i initrd]' >&2
-    echo '       [-d devtree] [-s tree.dts] [-c] [-C cross-prefix]' >&2
-    echo '       [-D datadir] [-W workingdir] [-Z (gz|xz|none)]' >&2
-    echo '       [--no-compression] [vmlinux]' >&2
+    echo '       [-d devtree] [-s tree.dts] [-e esm_blob]' >&2
+    echo '       [-c] [-C cross-prefix] [-D datadir] [-W workingdir]' >&2
+    echo '       [-Z (gz|xz|none)] [--no-compression] [vmlinux]' >&2
     exit 1
 }
 
@@ -105,6 +107,11 @@ while [ "$#" -gt 0 ]; do
 	[ "$#" -gt 0 ] || usage
 	dtb="$1"
 	;;
+    -e)
+	shift
+	[ "$#" -gt 0 ] || usage
+	esm_blob="$1"
+	;;
     -s)
 	shift
 	[ "$#" -gt 0 ] || usage
@@ -211,9 +218,16 @@ objflags=-S
 tmp=$tmpdir/zImage.$$.o
 ksection=.kernel:vmlinux.strip
 isection=.kernel:initrd
+esection=.kernel:esm_blob
 link_address='0x400000'
 make_space=y
 
+
+if [ -n "$esm_blob" -a "$platform" != "pseries" ]; then
+    echo "ESM blob not supported on non-pseries platforms" >&2
+    exit 1
+fi
+
 case "$platform" in
 of)
     platformo="$object/of.o $object/epapr.o"
@@ -463,6 +477,10 @@ if [ -n "$dtb" ]; then
     fi
 fi
 
+if [ -n "$esm_blob" ]; then
+    addsec $tmp "$esm_blob" $esection
+fi
+
 if [ "$platform" != "miboot" ]; then
     if [ -n "$link_address" ] ; then
         text_start="-Ttext $link_address"
diff --git a/arch/powerpc/boot/zImage.lds.S b/arch/powerpc/boot/zImage.lds.S
index 4ac1e36edfe7..a21f3a76e06f 100644
--- a/arch/powerpc/boot/zImage.lds.S
+++ b/arch/powerpc/boot/zImage.lds.S
@@ -68,6 +68,14 @@ SECTIONS
     _initrd_end =  .;
   }
 
+  . = ALIGN(4096);
+  .kernel:esm_blob :
+  {
+    _esm_blob_start =  .;
+    *(.kernel:esm_blob)
+    _esm_blob_end =  .;
+  }
+
 #ifdef CONFIG_PPC64_BOOT_WRAPPER
   . = ALIGN(256);
   .got :



* [RFC PATCH 03/12] powerpc/prom_init: Add the ESM call to prom_init
  2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
  2019-05-21  4:49 ` [PATCH 01/12] powerpc/pseries: Introduce option to build secure virtual machines Thiago Jung Bauermann
  2019-05-21  4:49 ` [RFC PATCH 02/12] powerpc: Add support for adding an ESM blob to the zImage wrapper Thiago Jung Bauermann
@ 2019-05-21  4:49 ` Thiago Jung Bauermann
  2019-06-26  7:44   ` Alexey Kardashevskiy
  2019-05-21  4:49 ` [PATCH 04/12] powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE Thiago Jung Bauermann
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-21  4:49 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras,
	Christoph Hellwig, Thiago Jung Bauermann

From: Ram Pai <linuxram@us.ibm.com>

Make the Enter-Secure-Mode (ESM) ultravisor call to switch the VM to secure
mode. Add "svm=" command line option to turn off switching to secure mode.
Introduce CONFIG_PPC_SVM to control support for secure guests.

Signed-off-by: Ram Pai <linuxram@us.ibm.com>
[ Generate an RTAS os-term hcall when the ESM ucall fails. ]
Signed-off-by: Michael Anderson <andmike@linux.ibm.com>
[ Cleaned up the code a bit. ]
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 .../admin-guide/kernel-parameters.txt         |   5 +
 arch/powerpc/include/asm/ultravisor-api.h     |   1 +
 arch/powerpc/kernel/prom_init.c               | 124 ++++++++++++++++++
 3 files changed, 130 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index c45a19d654f3..7237d86b25c6 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4501,6 +4501,11 @@
 			/sys/power/pm_test). Only available when CONFIG_PM_DEBUG
 			is set. Default value is 5.
 
+	svm=		[PPC]
+			Format: { on | off | y | n | 1 | 0 }
+			This parameter controls use of the Protected
+			Execution Facility on pSeries.
+
 	swapaccount=[0|1]
 			[KNL] Enable accounting of swap in memory resource
 			controller if no parameter or 1 is given or disable
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 15e6ce77a131..0e8b72081718 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -19,6 +19,7 @@
 
 /* opcodes */
 #define UV_WRITE_PATE			0xF104
+#define UV_ESM				0xF110
 #define UV_RETURN			0xF11C
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
index 523bb99d7676..5d8a3efb54f2 100644
--- a/arch/powerpc/kernel/prom_init.c
+++ b/arch/powerpc/kernel/prom_init.c
@@ -44,6 +44,7 @@
 #include <asm/sections.h>
 #include <asm/machdep.h>
 #include <asm/asm-prototypes.h>
+#include <asm/ultravisor-api.h>
 
 #include <linux/linux_logo.h>
 
@@ -174,6 +175,10 @@ static unsigned long __prombss prom_tce_alloc_end;
 static bool __prombss prom_radix_disable;
 #endif
 
+#ifdef CONFIG_PPC_SVM
+static bool __prombss prom_svm_disable;
+#endif
+
 struct platform_support {
 	bool hash_mmu;
 	bool radix_mmu;
@@ -809,6 +814,17 @@ static void __init early_cmdline_parse(void)
 	if (prom_radix_disable)
 		prom_debug("Radix disabled from cmdline\n");
 #endif /* CONFIG_PPC_PSERIES */
+
+#ifdef CONFIG_PPC_SVM
+	opt = prom_strstr(prom_cmd_line, "svm=");
+	if (opt) {
+		bool val;
+
+		opt += sizeof("svm=") - 1;
+		if (!prom_strtobool(opt, &val))
+			prom_svm_disable = !val;
+	}
+#endif /* CONFIG_PPC_SVM */
 }
 
 #ifdef CONFIG_PPC_PSERIES
@@ -1707,6 +1723,43 @@ static void __init prom_close_stdin(void)
 	}
 }
 
+#ifdef CONFIG_PPC_SVM
+static int prom_rtas_os_term_hcall(uint64_t args)
+{
+	register uint64_t arg1 asm("r3") = 0xf000;
+	register uint64_t arg2 asm("r4") = args;
+
+	asm volatile("sc 1\n" : "=r" (arg1) :
+			"r" (arg1),
+			"r" (arg2) :);
+	return arg1;
+}
+
+static struct rtas_args __prombss os_term_args;
+
+static void __init prom_rtas_os_term(char *str)
+{
+	phandle rtas_node;
+	__be32 val;
+	u32 token;
+
+	prom_printf("%s: start...\n", __func__);
+	rtas_node = call_prom("finddevice", 1, 1, ADDR("/rtas"));
+	prom_printf("rtas_node: %x\n", rtas_node);
+	if (!PHANDLE_VALID(rtas_node))
+		return;
+
+	val = 0;
+	prom_getprop(rtas_node, "ibm,os-term", &val, sizeof(val));
+	token = be32_to_cpu(val);
+	prom_printf("ibm,os-term: %x\n", token);
+	if (token == 0)
+		prom_panic("Could not get token for ibm,os-term\n");
+	os_term_args.token = cpu_to_be32(token);
+	prom_rtas_os_term_hcall((uint64_t)&os_term_args);
+}
+#endif /* CONFIG_PPC_SVM */
+
 /*
  * Allocate room for and instantiate RTAS
  */
@@ -3162,6 +3215,74 @@ static void unreloc_toc(void)
 #endif
 #endif
 
+#ifdef CONFIG_PPC_SVM
+/*
+ * The ESM blob is a data structure with information needed by the Ultravisor to
+ * validate the integrity of the secure guest.
+ */
+static void *get_esm_blob(void)
+{
+	/*
+	 * FIXME: We are still finalizing the details on how prom_init will grab
+	 * the ESM blob. When that is done, this function will be updated.
+	 */
+	return (void *)0xdeadbeef;
+}
+
+/*
+ * Perform the Enter Secure Mode ultracall.
+ */
+static int enter_secure_mode(void *esm_blob, void *retaddr, void *fdt)
+{
+	register uint64_t func asm("r0") = UV_ESM;
+	register uint64_t arg1 asm("r3") = (uint64_t)esm_blob;
+	register uint64_t arg2 asm("r4") = (uint64_t)retaddr;
+	register uint64_t arg3 asm("r5") = (uint64_t)fdt;
+
+	asm volatile("sc 2\n"
+		     : "=r"(arg1)
+		     : "r"(func), "0"(arg1), "r"(arg2), "r"(arg3)
+		     :);
+
+	return (int)arg1;
+}
+
+/*
+ * Call the Ultravisor to transfer us to secure memory if we have an ESM blob.
+ */
+static void setup_secure_guest(void *fdt)
+{
+	void *esm_blob;
+	int ret;
+
+	if (prom_svm_disable) {
+		prom_printf("Secure mode is OFF\n");
+		return;
+	}
+
+	esm_blob = get_esm_blob();
+	if (esm_blob == NULL)
+		/*
+		 * Absence of an ESM blob isn't an error, it just means we
+		 * shouldn't switch to secure mode.
+		 */
+		return;
+
+	/* Switch to secure mode. */
+	prom_printf("Switching to secure mode.\n");
+
+	ret = enter_secure_mode(esm_blob, NULL, fdt);
+	if (ret != U_SUCCESS) {
+		prom_printf("Returned %d from switching to secure mode.\n", ret);
+		prom_rtas_os_term("Switch to secure mode failed.\n");
+	}
+}
+#else
+static void setup_secure_guest(void *fdt)
+{
+}
+#endif /* CONFIG_PPC_SVM */
+
 /*
  * We enter here early on, when the Open Firmware prom is still
  * handling exceptions and the MMU hash table for us.
@@ -3360,6 +3481,9 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
 	unreloc_toc();
 #endif
 
+	/* Move to secure memory if we're supposed to be secure guests. */
+	setup_secure_guest((void *)hdr);
+
 	__start(hdr, kbase, 0, 0, 0, 0, 0);
 
 	return 0;



* [PATCH 04/12] powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE
  2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
                   ` (2 preceding siblings ...)
  2019-05-21  4:49 ` [RFC PATCH 03/12] powerpc/prom_init: Add the ESM call to prom_init Thiago Jung Bauermann
@ 2019-05-21  4:49 ` Thiago Jung Bauermann
  2019-05-21  4:49 ` [PATCH 05/12] powerpc/pseries: Add and use LPPACA_SIZE constant Thiago Jung Bauermann
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-21  4:49 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras,
	Christoph Hellwig, Thiago Jung Bauermann

From: Ram Pai <linuxram@us.ibm.com>

These functions are used when the guest wants to grant the hypervisor
access to certain pages.

Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/powerpc/include/asm/ultravisor-api.h |  2 ++
 arch/powerpc/include/asm/ultravisor.h     | 14 ++++++++++++++
 2 files changed, 16 insertions(+)

diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 0e8b72081718..ed68b02869fd 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -20,6 +20,8 @@
 /* opcodes */
 #define UV_WRITE_PATE			0xF104
 #define UV_ESM				0xF110
+#define UV_SHARE_PAGE			0xF130
+#define UV_UNSHARE_PAGE			0xF134
 #define UV_RETURN			0xF11C
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index 09e0a615d96f..537f7717d21a 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -44,6 +44,20 @@ static inline int uv_register_pate(u64 lpid, u64 dw0, u64 dw1)
 	return ucall(UV_WRITE_PATE, retbuf, lpid, dw0, dw1);
 }
 
+static inline int uv_share_page(u64 pfn, u64 npages)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_SHARE_PAGE, retbuf, pfn, npages);
+}
+
+static inline int uv_unshare_page(u64 pfn, u64 npages)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_UNSHARE_PAGE, retbuf, pfn, npages);
+}
+
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */



* [PATCH 05/12] powerpc/pseries: Add and use LPPACA_SIZE constant
  2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
                   ` (3 preceding siblings ...)
  2019-05-21  4:49 ` [PATCH 04/12] powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE Thiago Jung Bauermann
@ 2019-05-21  4:49 ` Thiago Jung Bauermann
  2019-05-21  4:49 ` [PATCH 06/12] powerpc/pseries/svm: Use shared memory for LPPACA structures Thiago Jung Bauermann
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-21  4:49 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Anshuman Khandual, Alexey Kardashevskiy,
	Mike Anderson, Ram Pai, linux-kernel, Claudio Carvalho,
	Paul Mackerras, Christoph Hellwig, Thiago Jung Bauermann

Helps document what the hard-coded number means.

Also take the opportunity to fix an #endif comment.

Suggested-by: Alexey Kardashevskiy <aik@linux.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/powerpc/kernel/paca.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index 9cc91d03ab62..854105db5cff 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -56,6 +56,8 @@ static void *__init alloc_paca_data(unsigned long size, unsigned long align,
 
 #ifdef CONFIG_PPC_PSERIES
 
+#define LPPACA_SIZE 0x400
+
 /*
  * See asm/lppaca.h for more detail.
  *
@@ -69,7 +71,7 @@ static inline void init_lppaca(struct lppaca *lppaca)
 
 	*lppaca = (struct lppaca) {
 		.desc = cpu_to_be32(0xd397d781),	/* "LpPa" */
-		.size = cpu_to_be16(0x400),
+		.size = cpu_to_be16(LPPACA_SIZE),
 		.fpregs_in_use = 1,
 		.slb_count = cpu_to_be16(64),
 		.vmxregs_in_use = 0,
@@ -79,19 +81,18 @@ static inline void init_lppaca(struct lppaca *lppaca)
 static struct lppaca * __init new_lppaca(int cpu, unsigned long limit)
 {
 	struct lppaca *lp;
-	size_t size = 0x400;
 
-	BUILD_BUG_ON(size < sizeof(struct lppaca));
+	BUILD_BUG_ON(sizeof(struct lppaca) > LPPACA_SIZE);
 
 	if (early_cpu_has_feature(CPU_FTR_HVMODE))
 		return NULL;
 
-	lp = alloc_paca_data(size, 0x400, limit, cpu);
+	lp = alloc_paca_data(LPPACA_SIZE, 0x400, limit, cpu);
 	init_lppaca(lp);
 
 	return lp;
 }
-#endif /* CONFIG_PPC_BOOK3S */
+#endif /* CONFIG_PPC_PSERIES */
 
 #ifdef CONFIG_PPC_BOOK3S_64
 



* [PATCH 06/12] powerpc/pseries/svm: Use shared memory for LPPACA structures
  2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
                   ` (4 preceding siblings ...)
  2019-05-21  4:49 ` [PATCH 05/12] powerpc/pseries: Add and use LPPACA_SIZE constant Thiago Jung Bauermann
@ 2019-05-21  4:49 ` Thiago Jung Bauermann
  2019-05-21  4:49 ` [PATCH 07/12] powerpc/pseries/svm: Use shared memory for Debug Trace Log (DTL) Thiago Jung Bauermann
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-21  4:49 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras,
	Christoph Hellwig, Thiago Jung Bauermann, Anshuman Khandual

From: Anshuman Khandual <khandual@linux.vnet.ibm.com>

LPPACA structures need to be shared with the host. Hence they need to be in
shared memory. Instead of allocating individual chunks of memory for a
given structure from memblock, a contiguous chunk of memory is allocated
and then converted into shared memory. Subsequent allocation requests will
come from the contiguous chunk, which is always shared memory for all
structures.

While we are able to use a kmem_cache constructor for the Debug Trace Log,
LPPACAs are allocated very early in the boot process (before SLUB is
available) so we need to use a simpler scheme here.

Introduce the helper is_secure_guest(), which uses the S bit of the MSR to
tell whether we're running as a secure guest.
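
As a rough worked example of the sizing (the numbers are hypothetical -- the
real code uses nr_cpu_ids at runtime), with 2048 possible CPUs and 64K pages
the shared pool works out to:

/* Illustrative arithmetic only; mirrors alloc_shared_lppaca() below. */
size_t total = PAGE_ALIGN(2048 * LPPACA_SIZE);	/* 2048 * 0x400 = 2 MiB */
uv_share_page(PHYS_PFN(__pa(shared_lppaca)),
	      total >> PAGE_SHIFT);		/* 32 pages of 64K each */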

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/powerpc/include/asm/svm.h | 26 ++++++++++++++++++++
 arch/powerpc/kernel/paca.c     | 43 +++++++++++++++++++++++++++++++++-
 2 files changed, 68 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/svm.h b/arch/powerpc/include/asm/svm.h
new file mode 100644
index 000000000000..fef3740f46a6
--- /dev/null
+++ b/arch/powerpc/include/asm/svm.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * SVM helper functions
+ *
+ * Copyright 2019 Anshuman Khandual, IBM Corporation.
+ */
+
+#ifndef _ASM_POWERPC_SVM_H
+#define _ASM_POWERPC_SVM_H
+
+#ifdef CONFIG_PPC_SVM
+
+static inline bool is_secure_guest(void)
+{
+	return mfmsr() & MSR_S;
+}
+
+#else /* CONFIG_PPC_SVM */
+
+static inline bool is_secure_guest(void)
+{
+	return false;
+}
+
+#endif /* CONFIG_PPC_SVM */
+#endif /* _ASM_POWERPC_SVM_H */
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index 854105db5cff..a9622f4b45bb 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -18,6 +18,8 @@
 #include <asm/sections.h>
 #include <asm/pgtable.h>
 #include <asm/kexec.h>
+#include <asm/svm.h>
+#include <asm/ultravisor.h>
 
 #include "setup.h"
 
@@ -58,6 +60,41 @@ static void *__init alloc_paca_data(unsigned long size, unsigned long align,
 
 #define LPPACA_SIZE 0x400
 
+static void *__init alloc_shared_lppaca(unsigned long size, unsigned long align,
+					unsigned long limit, int cpu)
+{
+	size_t shared_lppaca_total_size = PAGE_ALIGN(nr_cpu_ids * LPPACA_SIZE);
+	static unsigned long shared_lppaca_size;
+	static void *shared_lppaca;
+	void *ptr;
+
+	if (!shared_lppaca) {
+		memblock_set_bottom_up(true);
+
+		shared_lppaca =
+			memblock_alloc_try_nid(shared_lppaca_total_size,
+					       PAGE_SIZE, MEMBLOCK_LOW_LIMIT,
+					       limit, NUMA_NO_NODE);
+		if (!shared_lppaca)
+			panic("cannot allocate shared data");
+
+		memblock_set_bottom_up(false);
+		uv_share_page(PHYS_PFN(__pa(shared_lppaca)),
+			      shared_lppaca_total_size >> PAGE_SHIFT);
+	}
+
+	ptr = shared_lppaca + shared_lppaca_size;
+	shared_lppaca_size += size;
+
+	/*
+	 * This is very early in boot, so no harm done if the kernel crashes at
+	 * this point.
+	 */
+	BUG_ON(shared_lppaca_size >= shared_lppaca_total_size);
+
+	return ptr;
+}
+
 /*
  * See asm/lppaca.h for more detail.
  *
@@ -87,7 +124,11 @@ static struct lppaca * __init new_lppaca(int cpu, unsigned long limit)
 	if (early_cpu_has_feature(CPU_FTR_HVMODE))
 		return NULL;
 
-	lp = alloc_paca_data(LPPACA_SIZE, 0x400, limit, cpu);
+	if (is_secure_guest())
+		lp = alloc_shared_lppaca(LPPACA_SIZE, 0x400, limit, cpu);
+	else
+		lp = alloc_paca_data(LPPACA_SIZE, 0x400, limit, cpu);
+
 	init_lppaca(lp);
 
 	return lp;



* [PATCH 07/12] powerpc/pseries/svm: Use shared memory for Debug Trace Log (DTL)
  2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
                   ` (5 preceding siblings ...)
  2019-05-21  4:49 ` [PATCH 06/12] powerpc/pseries/svm: Use shared memory for LPPACA structures Thiago Jung Bauermann
@ 2019-05-21  4:49 ` Thiago Jung Bauermann
  2019-05-21  4:49 ` [PATCH 08/12] powerpc/pseries/svm: Export guest SVM status to user space via sysfs Thiago Jung Bauermann
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-21  4:49 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras,
	Christoph Hellwig, Thiago Jung Bauermann, Anshuman Khandual

From: Anshuman Khandual <khandual@linux.vnet.ibm.com>

Secure guests need to share the DTL buffers with the hypervisor. To that
end, use a kmem_cache constructor which converts the underlying buddy
allocated SLUB cache pages into shared memory. Since the constructor runs
once per object and several DTL buffers fit in one page, it keeps track of
the pages it has already converted so that each page is shared with the
Ultravisor only once.

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/powerpc/include/asm/svm.h          |  5 ++++
 arch/powerpc/platforms/pseries/Makefile |  1 +
 arch/powerpc/platforms/pseries/setup.c  |  5 +++-
 arch/powerpc/platforms/pseries/svm.c    | 40 +++++++++++++++++++++++++
 4 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/svm.h b/arch/powerpc/include/asm/svm.h
index fef3740f46a6..f253116c31fc 100644
--- a/arch/powerpc/include/asm/svm.h
+++ b/arch/powerpc/include/asm/svm.h
@@ -15,6 +15,9 @@ static inline bool is_secure_guest(void)
 	return mfmsr() & MSR_S;
 }
 
+void dtl_cache_ctor(void *addr);
+#define get_dtl_cache_ctor()	(is_secure_guest() ? dtl_cache_ctor : NULL)
+
 #else /* CONFIG_PPC_SVM */
 
 static inline bool is_secure_guest(void)
@@ -22,5 +25,7 @@ static inline bool is_secure_guest(void)
 	return false;
 }
 
+#define get_dtl_cache_ctor() NULL
+
 #endif /* CONFIG_PPC_SVM */
 #endif /* _ASM_POWERPC_SVM_H */
diff --git a/arch/powerpc/platforms/pseries/Makefile b/arch/powerpc/platforms/pseries/Makefile
index a43ec843c8e2..b7b6e6f52bd0 100644
--- a/arch/powerpc/platforms/pseries/Makefile
+++ b/arch/powerpc/platforms/pseries/Makefile
@@ -25,6 +25,7 @@ obj-$(CONFIG_LPARCFG)		+= lparcfg.o
 obj-$(CONFIG_IBMVIO)		+= vio.o
 obj-$(CONFIG_IBMEBUS)		+= ibmebus.o
 obj-$(CONFIG_PAPR_SCM)		+= papr_scm.o
+obj-$(CONFIG_PPC_SVM)		+= svm.o
 
 ifdef CONFIG_PPC_PSERIES
 obj-$(CONFIG_SUSPEND)		+= suspend.o
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index e4f0dfd4ae33..c928e6e8a279 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -71,6 +71,7 @@
 #include <asm/isa-bridge.h>
 #include <asm/security_features.h>
 #include <asm/asm-const.h>
+#include <asm/svm.h>
 
 #include "pseries.h"
 #include "../../../../drivers/pci/pci.h"
@@ -329,8 +330,10 @@ static inline int alloc_dispatch_logs(void)
 
 static int alloc_dispatch_log_kmem_cache(void)
 {
+	void (*ctor)(void *) = get_dtl_cache_ctor();
+
 	dtl_cache = kmem_cache_create("dtl", DISPATCH_LOG_BYTES,
-						DISPATCH_LOG_BYTES, 0, NULL);
+						DISPATCH_LOG_BYTES, 0, ctor);
 	if (!dtl_cache) {
 		pr_warn("Failed to create dispatch trace log buffer cache\n");
 		pr_warn("Stolen time statistics will be unreliable\n");
diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
new file mode 100644
index 000000000000..c508196f7c83
--- /dev/null
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Secure VM platform
+ *
+ * Copyright 2019 IBM Corporation
+ * Author: Anshuman Khandual <khandual@linux.vnet.ibm.com>
+ */
+
+#include <linux/mm.h>
+#include <asm/ultravisor.h>
+
+/* There's one dispatch log per CPU. */
+#define NR_DTL_PAGE (DISPATCH_LOG_BYTES * CONFIG_NR_CPUS / PAGE_SIZE)
+
+static struct page *dtl_page_store[NR_DTL_PAGE];
+static long dtl_nr_pages;
+
+static bool is_dtl_page_shared(struct page *page)
+{
+	long i;
+
+	for (i = 0; i < dtl_nr_pages; i++)
+		if (dtl_page_store[i] == page)
+			return true;
+
+	return false;
+}
+
+void dtl_cache_ctor(void *addr)
+{
+	unsigned long pfn = PHYS_PFN(__pa(addr));
+	struct page *page = pfn_to_page(pfn);
+
+	if (!is_dtl_page_shared(page)) {
+		dtl_page_store[dtl_nr_pages] = page;
+		dtl_nr_pages++;
+		WARN_ON(dtl_nr_pages >= NR_DTL_PAGE);
+		uv_share_page(pfn, 1);
+	}
+}


* [PATCH 08/12] powerpc/pseries/svm: Export guest SVM status to user space via sysfs
  2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
                   ` (6 preceding siblings ...)
  2019-05-21  4:49 ` [PATCH 07/12] powerpc/pseries/svm: Use shared memory for Debug Trace Log (DTL) Thiago Jung Bauermann
@ 2019-05-21  4:49 ` Thiago Jung Bauermann
  2019-05-21  4:49 ` [PATCH 09/12] powerpc/pseries/svm: Disable doorbells in SVM guests Thiago Jung Bauermann
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-21  4:49 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Ryan Grimm, Paul Mackerras,
	Christoph Hellwig, Thiago Jung Bauermann

From: Ryan Grimm <grimm@linux.vnet.ibm.com>

User space might want to know it's running in a secure VM.  It can't do
a mfmsr because mfmsr is a privileged instruction.

The solution here is to create a cpu attribute:

/sys/devices/system/cpu/svm

which will read 0 or 1 based on the S bit of the guest's CPU 0.
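
A minimal user space consumer of the new attribute, assuming only the sysfs
file added by this patch, could look like this:

#include <stdio.h>

int main(void)
{
	unsigned int svm = 0;
	FILE *f = fopen("/sys/devices/system/cpu/svm", "r");

	if (f) {
		if (fscanf(f, "%u", &svm) != 1)
			svm = 0;
		fclose(f);
	}

	printf("secure VM: %s\n", svm ? "yes" : "no");
	return 0;
}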

Signed-off-by: Ryan Grimm <grimm@linux.vnet.ibm.com>
Reviewed-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/powerpc/kernel/sysfs.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
index e8e93c2c7d03..8fdab134e9ae 100644
--- a/arch/powerpc/kernel/sysfs.c
+++ b/arch/powerpc/kernel/sysfs.c
@@ -18,6 +18,7 @@
 #include <asm/smp.h>
 #include <asm/pmc.h>
 #include <asm/firmware.h>
+#include <asm/svm.h>
 
 #include "cacheinfo.h"
 #include "setup.h"
@@ -714,6 +715,32 @@ static struct device_attribute pa6t_attrs[] = {
 #endif /* HAS_PPC_PMC_PA6T */
 #endif /* HAS_PPC_PMC_CLASSIC */
 
+#ifdef CONFIG_PPC_SVM
+static void get_svm(void *val)
+{
+	u32 *value = val;
+
+	*value = is_secure_guest();
+}
+
+static ssize_t show_svm(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	u32 val;
+	smp_call_function_single(0, get_svm, &val, 1);
+	return sprintf(buf, "%u\n", val);
+}
+static DEVICE_ATTR(svm, 0444, show_svm, NULL);
+
+static void create_svm_file(void)
+{
+	device_create_file(cpu_subsys.dev_root, &dev_attr_svm);
+}
+#else
+static void create_svm_file(void)
+{
+}
+#endif /* CONFIG_PPC_SVM */
+
 static int register_cpu_online(unsigned int cpu)
 {
 	struct cpu *c = &per_cpu(cpu_devices, cpu);
@@ -1057,6 +1084,8 @@ static int __init topology_init(void)
 	sysfs_create_dscr_default();
 #endif /* CONFIG_PPC64 */
 
+	create_svm_file();
+
 	return 0;
 }
 subsys_initcall(topology_init);



* [PATCH 09/12] powerpc/pseries/svm: Disable doorbells in SVM guests
  2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
                   ` (7 preceding siblings ...)
  2019-05-21  4:49 ` [PATCH 08/12] powerpc/pseries/svm: Export guest SVM status to user space via sysfs Thiago Jung Bauermann
@ 2019-05-21  4:49 ` Thiago Jung Bauermann
  2019-05-21  4:49 ` [PATCH 10/12] powerpc/pseries/iommu: Don't use dma_iommu_ops on secure guests Thiago Jung Bauermann
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-21  4:49 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras,
	Sukadev Bhattiprolu, Christoph Hellwig, Thiago Jung Bauermann

From: Sukadev Bhattiprolu <sukadev@linux.ibm.com>

Normally, the HV emulates some instructions like MSGSNDP, MSGCLRP
from a KVM guest. To emulate the instructions, it must first read
the instruction from the guest's memory and decode its parameters.

However, for a secure guest (aka SVM), the page containing the
instruction is in secure memory and the HV cannot access it directly.
It would need the Ultravisor (UV) to facilitate accessing the
instruction and parameters, but the UV currently does not support
such accesses.

Until the UV has such support, disable doorbells in SVMs. This might
incur a performance hit but that is yet to be quantified.

With this patch applied (needed only in SVMs, not in the HV) we
are able to launch SVM guests with multi-core support. E.g.:

	qemu -smp sockets=2,cores=2,threads=2.

Fix suggested by Benjamin Herrenschmidt. Thanks to input from
Paul Mackerras, Ram Pai and Michael Anderson.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/powerpc/platforms/pseries/smp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
index 3df46123cce3..95a5c24a1544 100644
--- a/arch/powerpc/platforms/pseries/smp.c
+++ b/arch/powerpc/platforms/pseries/smp.c
@@ -45,6 +45,7 @@
 #include <asm/dbell.h>
 #include <asm/plpar_wrappers.h>
 #include <asm/code-patching.h>
+#include <asm/svm.h>
 
 #include "pseries.h"
 #include "offline_states.h"
@@ -225,7 +226,7 @@ static __init void pSeries_smp_probe_xics(void)
 {
 	xics_smp_probe();
 
-	if (cpu_has_feature(CPU_FTR_DBELL))
+	if (cpu_has_feature(CPU_FTR_DBELL) && !is_secure_guest())
 		smp_ops->cause_ipi = smp_pseries_cause_ipi;
 	else
 		smp_ops->cause_ipi = icp_ops->cause_ipi;


* [PATCH 10/12] powerpc/pseries/iommu: Don't use dma_iommu_ops on secure guests
  2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
                   ` (8 preceding siblings ...)
  2019-05-21  4:49 ` [PATCH 09/12] powerpc/pseries/svm: Disable doorbells in SVM guests Thiago Jung Bauermann
@ 2019-05-21  4:49 ` Thiago Jung Bauermann
  2019-05-21  4:49 ` [PATCH 11/12] powerpc/pseries/svm: Force SWIOTLB for " Thiago Jung Bauermann
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-21  4:49 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras,
	Christoph Hellwig, Thiago Jung Bauermann

Secure guest memory is inaccessible to devices, so regular DMA isn't
possible.

In that case set the devices' dma_map_ops to NULL so that the generic
DMA code path will use SWIOTLB to bounce buffers for DMA.
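
Conceptually, the generic mapping path then behaves as in the simplified
sketch below. This is a paraphrase of the common DMA code for illustration,
not a literal quote of kernel/dma/mapping.c:

/* Simplified view of the generic path when dma_map_ops is NULL. */
dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
			      unsigned long offset, size_t size,
			      enum dma_data_direction dir, unsigned long attrs)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (!ops)	/* no ops => dma-direct, which bounces via SWIOTLB */
		return dma_direct_map_page(dev, page, offset, size, dir, attrs);

	return ops->map_page(dev, page, offset, size, dir, attrs);
}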

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/powerpc/platforms/pseries/iommu.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 03bbb299320e..7d9550edb700 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -50,6 +50,7 @@
 #include <asm/udbg.h>
 #include <asm/mmzone.h>
 #include <asm/plpar_wrappers.h>
+#include <asm/svm.h>
 
 #include "pseries.h"
 
@@ -1332,7 +1333,10 @@ void iommu_init_early_pSeries(void)
 	of_reconfig_notifier_register(&iommu_reconfig_nb);
 	register_memory_notifier(&iommu_mem_nb);
 
-	set_pci_dma_ops(&dma_iommu_ops);
+	if (is_secure_guest())
+		set_pci_dma_ops(NULL);
+	else
+		set_pci_dma_ops(&dma_iommu_ops);
 }
 
 static int __init disable_multitce(char *str)



* [PATCH 11/12] powerpc/pseries/svm: Force SWIOTLB for secure guests
  2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
                   ` (9 preceding siblings ...)
  2019-05-21  4:49 ` [PATCH 10/12] powerpc/pseries/iommu: Don't use dma_iommu_ops on secure guests Thiago Jung Bauermann
@ 2019-05-21  4:49 ` Thiago Jung Bauermann
  2019-05-21  5:15   ` Christoph Hellwig
  2019-05-21  4:49 ` [PATCH 12/12] powerpc/configs: Enable secure guest support in pseries and ppc64 defconfigs Thiago Jung Bauermann
  2019-06-01 17:11 ` [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
  12 siblings, 1 reply; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-21  4:49 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras,
	Christoph Hellwig, Thiago Jung Bauermann, Anshuman Khandual

From: Anshuman Khandual <khandual@linux.vnet.ibm.com>

SWIOTLB checks the range of incoming CPU addresses to be bounced and skips
bouncing when the device can already access them through its DMA window.
But for secure guests on the powerpc platform, all addresses need to be
bounced into the shared pool of memory because the host cannot access them
otherwise. Hence the need to bounce is not related to the device's DMA
window, and the use of bounce buffers is forced by setting swiotlb_force.

Also, connect the shared memory conversion functions into the
ARCH_HAS_MEM_ENCRYPT hooks and call swiotlb_update_mem_attributes() to
convert SWIOTLB's memory pool to shared memory.
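
How the pieces fit together can be summarized by the call chain below; this
is an approximate sketch of the interaction with the generic SWIOTLB code,
not a literal trace:

/*
 * init_svm()                                      [this patch]
 *   -> swiotlb_update_mem_attributes()            [generic SWIOTLB code]
 *        -> set_memory_decrypted(pool vaddr, pool size >> PAGE_SHIFT)
 *             -> uv_share_page(PHYS_PFN(__pa(addr)), numpages)
 *                                                 [powerpc hook, this patch]
 */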

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
[ Use ARCH_HAS_MEM_ENCRYPT hooks to share swiotlb memory pool. ]
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/powerpc/include/asm/mem_encrypt.h | 19 +++++++++++
 arch/powerpc/platforms/pseries/Kconfig |  5 +++
 arch/powerpc/platforms/pseries/svm.c   | 45 ++++++++++++++++++++++++++
 3 files changed, 69 insertions(+)

diff --git a/arch/powerpc/include/asm/mem_encrypt.h b/arch/powerpc/include/asm/mem_encrypt.h
new file mode 100644
index 000000000000..45d5e4d0e6e0
--- /dev/null
+++ b/arch/powerpc/include/asm/mem_encrypt.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * SVM helper functions
+ *
+ * Copyright 2019 IBM Corporation
+ */
+
+#ifndef _ASM_POWERPC_MEM_ENCRYPT_H
+#define _ASM_POWERPC_MEM_ENCRYPT_H
+
+#define sme_me_mask	0ULL
+
+static inline bool sme_active(void) { return false; }
+static inline bool sev_active(void) { return false; }
+
+int set_memory_encrypted(unsigned long addr, int numpages);
+int set_memory_decrypted(unsigned long addr, int numpages);
+
+#endif /* _ASM_POWERPC_MEM_ENCRYPT_H */
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 82c16aa4f1ce..41b10f3bc729 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -145,9 +145,14 @@ config PAPR_SCM
 	help
 	  Enable access to hypervisor provided storage class memory.
 
+config ARCH_HAS_MEM_ENCRYPT
+	def_bool n
+
 config PPC_SVM
 	bool "Secure virtual machine (SVM) support for POWER"
 	depends on PPC_PSERIES
+	select SWIOTLB
+	select ARCH_HAS_MEM_ENCRYPT
 	default n
 	help
 	 Support secure guests on POWER. There are certain POWER platforms which
diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index c508196f7c83..618622d636d5 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -7,8 +7,53 @@
  */
 
 #include <linux/mm.h>
+#include <asm/machdep.h>
+#include <asm/svm.h>
+#include <asm/swiotlb.h>
 #include <asm/ultravisor.h>
 
+static int __init init_svm(void)
+{
+	if (!is_secure_guest())
+		return 0;
+
+	/* Don't release the SWIOTLB buffer. */
+	ppc_swiotlb_enable = 1;
+
+	/*
+	 * Since the guest memory is inaccessible to the host, devices always
+	 * need to use the SWIOTLB buffer for DMA even if dma_capable() says
+	 * otherwise.
+	 */
+	swiotlb_force = SWIOTLB_FORCE;
+
+	/* Share the SWIOTLB buffer with the host. */
+	swiotlb_update_mem_attributes();
+
+	return 0;
+}
+machine_early_initcall(pseries, init_svm);
+
+int set_memory_encrypted(unsigned long addr, int numpages)
+{
+	if (!PAGE_ALIGNED(addr))
+		return -EINVAL;
+
+	uv_unshare_page(PHYS_PFN(__pa(addr)), numpages);
+
+	return 0;
+}
+
+int set_memory_decrypted(unsigned long addr, int numpages)
+{
+	if (!PAGE_ALIGNED(addr))
+		return -EINVAL;
+
+	uv_share_page(PHYS_PFN(__pa(addr)), numpages);
+
+	return 0;
+}
+
 /* There's one dispatch log per CPU. */
 #define NR_DTL_PAGE (DISPATCH_LOG_BYTES * CONFIG_NR_CPUS / PAGE_SIZE)
 



* [PATCH 12/12] powerpc/configs: Enable secure guest support in pseries and ppc64 defconfigs
  2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
                   ` (10 preceding siblings ...)
  2019-05-21  4:49 ` [PATCH 11/12] powerpc/pseries/svm: Force SWIOTLB for " Thiago Jung Bauermann
@ 2019-05-21  4:49 ` Thiago Jung Bauermann
  2019-06-07 14:47   ` [RFC PATCH 1/1] powerpc/pseries/svm: Unshare all pages before kexecing a new kernel Ram Pai
  2019-06-01 17:11 ` [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
  12 siblings, 1 reply; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-21  4:49 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Ryan Grimm, Paul Mackerras,
	Christoph Hellwig, Thiago Jung Bauermann

From: Ryan Grimm <grimm@linux.vnet.ibm.com>

Enables running as a secure guest in platforms with an Ultravisor.

Signed-off-by: Ryan Grimm <grimm@linux.vnet.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
 arch/powerpc/configs/ppc64_defconfig   | 1 +
 arch/powerpc/configs/pseries_defconfig | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig
index d7c381009636..725297438320 100644
--- a/arch/powerpc/configs/ppc64_defconfig
+++ b/arch/powerpc/configs/ppc64_defconfig
@@ -31,6 +31,7 @@ CONFIG_DTL=y
 CONFIG_SCANLOG=m
 CONFIG_PPC_SMLPAR=y
 CONFIG_IBMEBUS=y
+CONFIG_PPC_SVM=y
 CONFIG_PPC_MAPLE=y
 CONFIG_PPC_PASEMI=y
 CONFIG_PPC_PASEMI_IOMMU=y
diff --git a/arch/powerpc/configs/pseries_defconfig b/arch/powerpc/configs/pseries_defconfig
index 62e12f61a3b2..724a574fe4b2 100644
--- a/arch/powerpc/configs/pseries_defconfig
+++ b/arch/powerpc/configs/pseries_defconfig
@@ -42,6 +42,7 @@ CONFIG_DTL=y
 CONFIG_SCANLOG=m
 CONFIG_PPC_SMLPAR=y
 CONFIG_IBMEBUS=y
+CONFIG_PPC_SVM=y
 # CONFIG_PPC_PMAC is not set
 CONFIG_RTAS_FLASH=m
 CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y



* Re: [RFC PATCH 02/12] powerpc: Add support for adding an ESM blob to the zImage wrapper
  2019-05-21  4:49 ` [RFC PATCH 02/12] powerpc: Add support for adding an ESM blob to the zImage wrapper Thiago Jung Bauermann
@ 2019-05-21  5:13   ` Christoph Hellwig
  2019-05-21 15:09     ` Ram Pai
  2019-05-21 23:15     ` Paul Mackerras
  0 siblings, 2 replies; 23+ messages in thread
From: Christoph Hellwig @ 2019-05-21  5:13 UTC (permalink / raw)
  To: Thiago Jung Bauermann
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras, linuxppc-dev,
	Christoph Hellwig

On Tue, May 21, 2019 at 01:49:02AM -0300, Thiago Jung Bauermann wrote:
> From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> 
> For secure VMs, the signing tool will create a ticket called the "ESM blob"
> for the Enter Secure Mode ultravisor call with the signatures of the kernel
> and initrd among other things.
> 
> This adds support to the wrapper script for adding that blob via the "-e"
> option to the zImage.pseries.
> 
> It also adds code to the zImage wrapper itself to retrieve and if necessary
> relocate the blob, and pass its address to Linux via the device-tree, to be
> later consumed by prom_init.

Where does the "BLOB" come from?  How is it licensed and how can we
satisfy the GPL with it?


* Re: [PATCH 11/12] powerpc/pseries/svm: Force SWIOTLB for secure guests
  2019-05-21  4:49 ` [PATCH 11/12] powerpc/pseries/svm: Force SWIOTLB for " Thiago Jung Bauermann
@ 2019-05-21  5:15   ` Christoph Hellwig
  2019-05-23  5:15     ` Thiago Jung Bauermann
  0 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2019-05-21  5:15 UTC (permalink / raw)
  To: Thiago Jung Bauermann
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras, linuxppc-dev,
	Christoph Hellwig, Anshuman Khandual

> diff --git a/arch/powerpc/include/asm/mem_encrypt.h b/arch/powerpc/include/asm/mem_encrypt.h
> new file mode 100644
> index 000000000000..45d5e4d0e6e0
> --- /dev/null
> +++ b/arch/powerpc/include/asm/mem_encrypt.h
> @@ -0,0 +1,19 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * SVM helper functions
> + *
> + * Copyright 2019 IBM Corporation
> + */
> +
> +#ifndef _ASM_POWERPC_MEM_ENCRYPT_H
> +#define _ASM_POWERPC_MEM_ENCRYPT_H
> +
> +#define sme_me_mask	0ULL
> +
> +static inline bool sme_active(void) { return false; }
> +static inline bool sev_active(void) { return false; }
> +
> +int set_memory_encrypted(unsigned long addr, int numpages);
> +int set_memory_decrypted(unsigned long addr, int numpages);
> +
> +#endif /* _ASM_POWERPC_MEM_ENCRYPT_H */

S/390 seems to be adding a stub header just like this.  Can you please
clean up the Kconfig and generic header bits for memory encryption so
that we don't need all this boilerplate code?

>  config PPC_SVM
>  	bool "Secure virtual machine (SVM) support for POWER"
>  	depends on PPC_PSERIES
> +	select SWIOTLB
> +	select ARCH_HAS_MEM_ENCRYPT
>  	default n

n is the default default, no need to explicitly specify it.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Re: [RFC PATCH 02/12] powerpc: Add support for adding an ESM blob to the zImage wrapper
  2019-05-21  5:13   ` Christoph Hellwig
@ 2019-05-21 15:09     ` Ram Pai
  2019-05-21 23:15     ` Paul Mackerras
  1 sibling, 0 replies; 23+ messages in thread
From: Ram Pai @ 2019-05-21 15:09 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson,
	linux-kernel, Claudio Carvalho, Paul Mackerras, linuxppc-dev,
	Thiago Jung Bauermann

On Tue, May 21, 2019 at 07:13:26AM +0200, Christoph Hellwig wrote:
> On Tue, May 21, 2019 at 01:49:02AM -0300, Thiago Jung Bauermann wrote:
> > From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> > 
> > For secure VMs, the signing tool will create a ticket called the "ESM blob"
> > for the Enter Secure Mode ultravisor call with the signatures of the kernel
> > and initrd among other things.
> > 
> > This adds support to the wrapper script for adding that blob via the "-e"
> > option to the zImage.pseries.
> > 
> > It also adds code to the zImage wrapper itself to retrieve and if necessary
> > relocate the blob, and pass its address to Linux via the device-tree, to be
> > later consumed by prom_init.
> 
> Where does the "BLOB" come from?  How is it licensed and how can we
> satisfy the GPL with it?

The "BLOB" is not a piece of code. Its just a piece of data that gets
generated by our build tools. This data contains the
signed hash of the kernel, initrd, and kernel command line parameters.
Also it contains any information that the creator the the BLOB wants to
be made available to anyone needing it, inside the
secure-virtual-machine. All of this is integrity-protected and encrypted
to safegaurd it when at rest and at runtime.
 
Bottomline -- Blob is data, and hence no licensing implication. And due
to some reason, even data needs to have licensing statement, we can
make it available to have no conflicts with GPL.
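
To make that concrete, here is a purely illustrative sketch of the kind
of information the blob carries. The structure name, field names and
sizes below are hypothetical; this is not the actual ESM blob format:

/* Illustrative only, not the real ESM blob layout (types as in <linux/types.h>) */
struct esm_blob_sketch {
	u8	kernel_hash[64];	/* signed hash of the kernel image */
	u8	initrd_hash[64];	/* signed hash of the initrd */
	u8	cmdline_hash[64];	/* signed hash of the command line */
	u32	extra_len;		/* length of creator-supplied data... */
	u8	extra[];		/* ...made available inside the SVM */
};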


-- 
Ram Pai


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 02/12] powerpc: Add support for adding an ESM blob to the zImage wrapper
  2019-05-21  5:13   ` Christoph Hellwig
  2019-05-21 15:09     ` Ram Pai
@ 2019-05-21 23:15     ` Paul Mackerras
  1 sibling, 0 replies; 23+ messages in thread
From: Paul Mackerras @ 2019-05-21 23:15 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, linuxppc-dev,
	Thiago Jung Bauermann

On Tue, May 21, 2019 at 07:13:26AM +0200, Christoph Hellwig wrote:
> On Tue, May 21, 2019 at 01:49:02AM -0300, Thiago Jung Bauermann wrote:
> > From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> > 
> > For secure VMs, the signing tool will create a ticket called the "ESM blob"
> > for the Enter Secure Mode ultravisor call with the signatures of the kernel
> > and initrd among other things.
> > 
> > This adds support to the wrapper script for adding that blob via the "-e"
> > option to the zImage.pseries.
> > 
> > It also adds code to the zImage wrapper itself to retrieve and if necessary
> > relocate the blob, and pass its address to Linux via the device-tree, to be
> > later consumed by prom_init.
> 
> Where does the "BLOB" come from?  How is it licensed and how can we
> satisfy the GPL with it?

The blob is data, not code, and it will be created by a tool that will
be open source.  My understanding is that most of it will be encrypted
with a session key that is encrypted with the secret key of the
ultravisor.  Ram Pai's KVM Forum talk last year explained how this
works.

Paul.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 11/12] powerpc/pseries/svm: Force SWIOTLB for secure guests
  2019-05-21  5:15   ` Christoph Hellwig
@ 2019-05-23  5:15     ` Thiago Jung Bauermann
  0 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-05-23  5:15 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras, linuxppc-dev,
	Anshuman Khandual


Hello Christoph,

Thanks for reviewing the patch!

Christoph Hellwig <hch@lst.de> writes:

>> diff --git a/arch/powerpc/include/asm/mem_encrypt.h b/arch/powerpc/include/asm/mem_encrypt.h
>> new file mode 100644
>> index 000000000000..45d5e4d0e6e0
>> --- /dev/null
>> +++ b/arch/powerpc/include/asm/mem_encrypt.h
>> @@ -0,0 +1,19 @@
>> +/* SPDX-License-Identifier: GPL-2.0+ */
>> +/*
>> + * SVM helper functions
>> + *
>> + * Copyright 2019 IBM Corporation
>> + */
>> +
>> +#ifndef _ASM_POWERPC_MEM_ENCRYPT_H
>> +#define _ASM_POWERPC_MEM_ENCRYPT_H
>> +
>> +#define sme_me_mask	0ULL
>> +
>> +static inline bool sme_active(void) { return false; }
>> +static inline bool sev_active(void) { return false; }
>> +
>> +int set_memory_encrypted(unsigned long addr, int numpages);
>> +int set_memory_decrypted(unsigned long addr, int numpages);
>> +
>> +#endif /* _ASM_POWERPC_MEM_ENCRYPT_H */
>
> S/390 seems to be adding a stub header just like this.  Can you please
> clean up the Kconfig and generic header bits for memory encryption so
> that we don't need all this boilerplate code?

Yes, that's a good idea. Will do.
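
If the sme_me_mask / sme_active() / sev_active() no-op defaults moved
into a generic header, the powerpc header could then shrink to just the
declarations it actually implements. A sketch of what would remain,
assuming such a consolidation (which is not written yet):

/* arch/powerpc/include/asm/mem_encrypt.h, reduced (sketch only) */
#ifndef _ASM_POWERPC_MEM_ENCRYPT_H
#define _ASM_POWERPC_MEM_ENCRYPT_H

int set_memory_encrypted(unsigned long addr, int numpages);
int set_memory_decrypted(unsigned long addr, int numpages);

#endif /* _ASM_POWERPC_MEM_ENCRYPT_H */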

>>  config PPC_SVM
>>  	bool "Secure virtual machine (SVM) support for POWER"
>>  	depends on PPC_PSERIES
>> +	select SWIOTLB
>> +	select ARCH_HAS_MEM_ENCRYPT
>>  	default n
>
> n is the default default, no need to explicitly specify it.

Indeed. Changed for the next version.
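
i.e. the entry would become simply:

config PPC_SVM
	bool "Secure virtual machine (SVM) support for POWER"
	depends on PPC_PSERIES
	select SWIOTLB
	select ARCH_HAS_MEM_ENCRYPT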

-- 
Thiago Jung Bauermann
IBM Linux Technology Center


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 00/12] Secure Virtual Machine Enablement
  2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
                   ` (11 preceding siblings ...)
  2019-05-21  4:49 ` [PATCH 12/12] powerpc/configs: Enable secure guest support in pseries and ppc64 defconfigs Thiago Jung Bauermann
@ 2019-06-01 17:11 ` Thiago Jung Bauermann
  12 siblings, 0 replies; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-06-01 17:11 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
	linux-kernel, Claudio Carvalho, Paul Mackerras,
	Christoph Hellwig


Hello,

Thiago Jung Bauermann <bauerman@linux.ibm.com> writes:

> This series enables Secure Virtual Machines (SVMs) on powerpc. SVMs use the
> Protected Execution Facility (PEF) and request to be migrated to secure
> memory during prom_init() so by default all of their memory is inaccessible
> to the hypervisor. There is an Ultravisor call that the VM can use to
> request certain pages to be made accessible to (or shared with) the
> hypervisor.
>
> The objective of these patches is to have the guest perform this request
> for buffers that need to be accessed by the hypervisor such as the LPPACAs,
> the SWIOTLB memory and the Debug Trace Log.

Ping? Any more comments on these patches? Or even acks? :-)

-- 
Thiago Jung Bauermann
IBM Linux Technology Center


^ permalink raw reply	[flat|nested] 23+ messages in thread

* [RFC PATCH 1/1] powerpc/pseries/svm: Unshare all pages before kexecing a new kernel
  2019-05-21  4:49 ` [PATCH 12/12] powerpc/configs: Enable secure guest support in pseries and ppc64 defconfigs Thiago Jung Bauermann
@ 2019-06-07 14:47   ` Ram Pai
  0 siblings, 0 replies; 23+ messages in thread
From: Ram Pai @ 2019-06-07 14:47 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson,
	linux-kernel, Claudio Carvalho, Ryan Grimm, Paul Mackerras,
	Christoph Hellwig, Thiago Jung Bauermann

powerpc/pseries/svm: Unshare all pages before kexecing a new kernel.
    
A new kernel deserves a clean slate. Any pages shared with the
hypervisor are unshared before invoking the new kernel. However, there
are exceptions: if the new kernel is invoked to dump the current
kernel, or if there is an explicit request to preserve the state of the
current kernel, unsharing of pages is skipped.

NOTE: Reserve at least 256M for the crashkernel. Otherwise the SWIOTLB
allocation fails and the crash kernel fails to boot.
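
For example, using the value from the note above, the secure guest can
be booted with this on its kernel command line:

	crashkernel=256M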
 
Signed-off-by: Ram Pai <linuxram@us.ibm.com>

diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 8a6c5b4d..c8dd470 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -31,5 +31,6 @@
 #define UV_UNSHARE_PAGE			0xF134
 #define UV_PAGE_INVAL			0xF138
 #define UV_SVM_TERMINATE		0xF13C
+#define UV_UNSHARE_ALL_PAGES		0xF140
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index bf5ac05..73c44ff 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -120,6 +120,12 @@ static inline int uv_unshare_page(u64 pfn, u64 npages)
 	return ucall(UV_UNSHARE_PAGE, retbuf, pfn, npages);
 }
 
+static inline int uv_unshare_all_pages(void)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_UNSHARE_ALL_PAGES, retbuf);
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
index 75692c3..a93e3ab 100644
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -329,6 +329,13 @@ void default_machine_kexec(struct kimage *image)
 #ifdef CONFIG_PPC_PSERIES
 	kexec_paca.lppaca_ptr = NULL;
 #endif
+
+	if (is_svm_platform() && !(image->preserve_context ||
+				   image->type == KEXEC_TYPE_CRASH)) {
+		uv_unshare_all_pages();
+		printk("kexec: Unshared all shared pages.\n");
+	}
+
 	paca_ptrs[kexec_paca.paca_index] = &kexec_paca;
 
 	setup_paca(&kexec_paca);


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 03/12] powerpc/prom_init: Add the ESM call to prom_init
  2019-05-21  4:49 ` [RFC PATCH 03/12] powerpc/prom_init: Add the ESM call to prom_init Thiago Jung Bauermann
@ 2019-06-26  7:44   ` Alexey Kardashevskiy
  2019-06-28 22:33     ` Thiago Jung Bauermann
  0 siblings, 1 reply; 23+ messages in thread
From: Alexey Kardashevskiy @ 2019-06-26  7:44 UTC (permalink / raw)
  To: Thiago Jung Bauermann, linuxppc-dev
  Cc: Anshuman Khandual, Mike Anderson, Ram Pai, linux-kernel,
	Claudio Carvalho, Paul Mackerras, Christoph Hellwig



On 21/05/2019 14:49, Thiago Jung Bauermann wrote:
> From: Ram Pai <linuxram@us.ibm.com>
> 
> Make the Enter-Secure-Mode (ESM) ultravisor call to switch the VM to secure
> mode. Add "svm=" command line option to turn off switching to secure mode.
> Introduce CONFIG_PPC_SVM to control support for secure guests.
> 
> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> [ Generate an RTAS os-term hcall when the ESM ucall fails. ]
> Signed-off-by: Michael Anderson <andmike@linux.ibm.com>
> [ Cleaned up the code a bit. ]
> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
> ---
>  .../admin-guide/kernel-parameters.txt         |   5 +
>  arch/powerpc/include/asm/ultravisor-api.h     |   1 +
>  arch/powerpc/kernel/prom_init.c               | 124 ++++++++++++++++++
>  3 files changed, 130 insertions(+)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index c45a19d654f3..7237d86b25c6 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4501,6 +4501,11 @@
>  			/sys/power/pm_test). Only available when CONFIG_PM_DEBUG
>  			is set. Default value is 5.
>  
> +	svm=		[PPC]
> +			Format: { on | off | y | n | 1 | 0 }
> +			This parameter controls use of the Protected
> +			Execution Facility on pSeries.
> +
>  	swapaccount=[0|1]
>  			[KNL] Enable accounting of swap in memory resource
>  			controller if no parameter or 1 is given or disable
> diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
> index 15e6ce77a131..0e8b72081718 100644
> --- a/arch/powerpc/include/asm/ultravisor-api.h
> +++ b/arch/powerpc/include/asm/ultravisor-api.h
> @@ -19,6 +19,7 @@
>  
>  /* opcodes */
>  #define UV_WRITE_PATE			0xF104
> +#define UV_ESM				0xF110
>  #define UV_RETURN			0xF11C
>  
>  #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
> diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
> index 523bb99d7676..5d8a3efb54f2 100644
> --- a/arch/powerpc/kernel/prom_init.c
> +++ b/arch/powerpc/kernel/prom_init.c
> @@ -44,6 +44,7 @@
>  #include <asm/sections.h>
>  #include <asm/machdep.h>
>  #include <asm/asm-prototypes.h>
> +#include <asm/ultravisor-api.h>
>  
>  #include <linux/linux_logo.h>
>  
> @@ -174,6 +175,10 @@ static unsigned long __prombss prom_tce_alloc_end;
>  static bool __prombss prom_radix_disable;
>  #endif
>  
> +#ifdef CONFIG_PPC_SVM
> +static bool __prombss prom_svm_disable;
> +#endif
> +
>  struct platform_support {
>  	bool hash_mmu;
>  	bool radix_mmu;
> @@ -809,6 +814,17 @@ static void __init early_cmdline_parse(void)
>  	if (prom_radix_disable)
>  		prom_debug("Radix disabled from cmdline\n");
>  #endif /* CONFIG_PPC_PSERIES */
> +
> +#ifdef CONFIG_PPC_SVM
> +	opt = prom_strstr(prom_cmd_line, "svm=");
> +	if (opt) {
> +		bool val;
> +
> +		opt += sizeof("svm=") - 1;
> +		if (!prom_strtobool(opt, &val))
> +			prom_svm_disable = !val;
> +	}
> +#endif /* CONFIG_PPC_SVM */
>  }
>  
>  #ifdef CONFIG_PPC_PSERIES
> @@ -1707,6 +1723,43 @@ static void __init prom_close_stdin(void)
>  	}
>  }
>  
> +#ifdef CONFIG_PPC_SVM
> +static int prom_rtas_os_term_hcall(uint64_t args)


This is just an rtas hcall, nothing special about "os-term".


> +{
> +	register uint64_t arg1 asm("r3") = 0xf000;
> +	register uint64_t arg2 asm("r4") = args;
> +
> +	asm volatile("sc 1\n" : "=r" (arg1) :
> +			"r" (arg1),
> +			"r" (arg2) :);
> +	return arg1;
> +}
> +
> +static struct rtas_args __prombss os_term_args;
> +
> +static void __init prom_rtas_os_term(char *str)
> +{
> +	phandle rtas_node;
> +	__be32 val;
> +	u32 token;
> +
> +	prom_printf("%s: start...\n", __func__);
> +	rtas_node = call_prom("finddevice", 1, 1, ADDR("/rtas"));
> +	prom_printf("rtas_node: %x\n", rtas_node);
> +	if (!PHANDLE_VALID(rtas_node))
> +		return;
> +
> +	val = 0;
> +	prom_getprop(rtas_node, "ibm,os-term", &val, sizeof(val));
> +	token = be32_to_cpu(val);
> +	prom_printf("ibm,os-term: %x\n", token);
> +	if (token == 0)
> +		prom_panic("Could not get token for ibm,os-term\n");
> +	os_term_args.token = cpu_to_be32(token);
> +	prom_rtas_os_term_hcall((uint64_t)&os_term_args);
> +}
> +#endif /* CONFIG_PPC_SVM */
> +
>  /*
>   * Allocate room for and instantiate RTAS
>   */
> @@ -3162,6 +3215,74 @@ static void unreloc_toc(void)
>  #endif
>  #endif
>  
> +#ifdef CONFIG_PPC_SVM
> +/*
> + * The ESM blob is a data structure with information needed by the Ultravisor to
> + * validate the integrity of the secure guest.
> + */
> +static void *get_esm_blob(void)
> +{
> +	/*
> +	 * FIXME: We are still finalizing the details on how prom_init will grab
> +	 * the ESM blob. When that is done, this function will be updated.
> +	 */
> +	return (void *)0xdeadbeef;
> +}
> +
> +/*
> + * Perform the Enter Secure Mode ultracall.
> + */
> +static int enter_secure_mode(void *esm_blob, void *retaddr, void *fdt)
> +{
> +	register uint64_t func asm("r0") = UV_ESM;
> +	register uint64_t arg1 asm("r3") = (uint64_t)esm_blob;
> +	register uint64_t arg2 asm("r4") = (uint64_t)retaddr;
> +	register uint64_t arg3 asm("r5") = (uint64_t)fdt;
> +
> +	asm volatile("sc 2\n"
> +		     : "=r"(arg1)
> +		     : "r"(func), "0"(arg1), "r"(arg2), "r"(arg3)
> +		     :);
> +
> +	return (int)arg1;
> +}
> +
> +/*
> + * Call the Ultravisor to transfer us to secure memory if we have an ESM blob.
> + */
> +static void setup_secure_guest(void *fdt)
> +{
> +	void *esm_blob;
> +	int ret;
> +
> +	if (prom_svm_disable) {
> +		prom_printf("Secure mode is OFF\n");
> +		return;
> +	}
> +
> +	esm_blob = get_esm_blob();
> +	if (esm_blob == NULL)
> +		/*
> +		 * Absence of an ESM blob isn't an error, it just means we
> +		 * shouldn't switch to secure mode.
> +		 */
> +		return;
> +
> +	/* Switch to secure mode. */
> +	prom_printf("Switching to secure mode.\n");
> +
> +	ret = enter_secure_mode(esm_blob, NULL, fdt);
> +	if (ret != U_SUCCESS) {
> +		prom_printf("Returned %d from switching to secure mode.\n", ret);
> +		prom_rtas_os_term("Switch to secure mode failed.\n");
> +	}
> +}
> +#else
> +static void setup_secure_guest(void *fdt)
> +{
> +}
> +#endif /* CONFIG_PPC_SVM */
> +
>  /*
>   * We enter here early on, when the Open Firmware prom is still
>   * handling exceptions and the MMU hash table for us.
> @@ -3360,6 +3481,9 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
>  	unreloc_toc();
>  #endif
>  
> +	/* Move to secure memory if we're supposed to be secure guests. */
> +	setup_secure_guest((void *)hdr);
> +
>  	__start(hdr, kbase, 0, 0, 0, 0, 0);
>  
>  	return 0;
> 

-- 
Alexey

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 03/12] powerpc/prom_init: Add the ESM call to prom_init
  2019-06-26  7:44   ` Alexey Kardashevskiy
@ 2019-06-28 22:33     ` Thiago Jung Bauermann
  2019-07-01  3:13       ` Alexey Kardashevskiy
  0 siblings, 1 reply; 23+ messages in thread
From: Thiago Jung Bauermann @ 2019-06-28 22:33 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Anshuman Khandual, Mike Anderson, Ram Pai, linux-kernel,
	Claudio Carvalho, Paul Mackerras, linuxppc-dev,
	Christoph Hellwig


Hello Alexey,

Thanks for reviewing this patch!

Alexey Kardashevskiy <aik@ozlabs.ru> writes:

> On 21/05/2019 14:49, Thiago Jung Bauermann wrote:
>> @@ -1707,6 +1723,43 @@ static void __init prom_close_stdin(void)
>>  	}
>>  }
>>  
>> +#ifdef CONFIG_PPC_SVM
>> +static int prom_rtas_os_term_hcall(uint64_t args)
>
>
> This is just an rtas hcall, nothing special about "os-term".

Sorry, unfortunately I don't understand how we're treating os-term
specially. Do you mean that we should inline this function directly
into prom_rtas_os_term()?

>> +{
>> +	register uint64_t arg1 asm("r3") = 0xf000;
>> +	register uint64_t arg2 asm("r4") = args;
>> +
>> +	asm volatile("sc 1\n" : "=r" (arg1) :
>> +			"r" (arg1),
>> +			"r" (arg2) :);
>> +	return arg1;
>> +}
>> +
>> +static struct rtas_args __prombss os_term_args;
>> +
>> +static void __init prom_rtas_os_term(char *str)
>> +{
>> +	phandle rtas_node;
>> +	__be32 val;
>> +	u32 token;
>> +
>> +	prom_printf("%s: start...\n", __func__);
>> +	rtas_node = call_prom("finddevice", 1, 1, ADDR("/rtas"));
>> +	prom_printf("rtas_node: %x\n", rtas_node);
>> +	if (!PHANDLE_VALID(rtas_node))
>> +		return;
>> +
>> +	val = 0;
>> +	prom_getprop(rtas_node, "ibm,os-term", &val, sizeof(val));
>> +	token = be32_to_cpu(val);
>> +	prom_printf("ibm,os-term: %x\n", token);
>> +	if (token == 0)
>> +		prom_panic("Could not get token for ibm,os-term\n");
>> +	os_term_args.token = cpu_to_be32(token);
>> +	prom_rtas_os_term_hcall((uint64_t)&os_term_args);
>> +}
>> +#endif /* CONFIG_PPC_SVM */
>> +
>>  /*
>>   * Allocate room for and instantiate RTAS
>>   */

-- 
Thiago Jung Bauermann
IBM Linux Technology Center

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 03/12] powerpc/prom_init: Add the ESM call to prom_init
  2019-06-28 22:33     ` Thiago Jung Bauermann
@ 2019-07-01  3:13       ` Alexey Kardashevskiy
  0 siblings, 0 replies; 23+ messages in thread
From: Alexey Kardashevskiy @ 2019-07-01  3:13 UTC (permalink / raw)
  To: Thiago Jung Bauermann
  Cc: Anshuman Khandual, Mike Anderson, Ram Pai, linux-kernel,
	Claudio Carvalho, Paul Mackerras, linuxppc-dev,
	Christoph Hellwig



On 29/06/2019 08:33, Thiago Jung Bauermann wrote:
> 
> Hello Alexey,
> 
> Thanks for reviewing this patch!
> 
> Alexey Kardashevskiy <aik@ozlabs.ru> writes:
> 
>> On 21/05/2019 14:49, Thiago Jung Bauermann wrote:
>>> @@ -1707,6 +1723,43 @@ static void __init prom_close_stdin(void)
>>>  	}
>>>  }
>>>  
>>> +#ifdef CONFIG_PPC_SVM
>>> +static int prom_rtas_os_term_hcall(uint64_t args)
>>
>>
>> This is just an rtas hcall, nothing special about "os-term".
> 
> Sorry, unfortunately I don't understand how we're treating os-term
> especially. Do you mean that we should inline this function directly
> into prom_rtas_os_term()?

I meant the function name - prom_rtas_os_term_hcall - should rather be
prom_rtas_hcall.
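
i.e. keep the body as is and let the name say what it does, something
like (sketch only, same code as in the patch):

static int prom_rtas_hcall(uint64_t args)
{
	register uint64_t arg1 asm("r3") = 0xf000;	/* the RTAS hcall number */
	register uint64_t arg2 asm("r4") = args;

	asm volatile("sc 1\n" : "=r" (arg1) :
			"r" (arg1),
			"r" (arg2) :);
	return arg1;
}

prom_rtas_os_term() would then simply call prom_rtas_hcall() and stay
otherwise unchanged.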



> 
>>> +{
>>> +	register uint64_t arg1 asm("r3") = 0xf000;
>>> +	register uint64_t arg2 asm("r4") = args;
>>> +
>>> +	asm volatile("sc 1\n" : "=r" (arg1) :
>>> +			"r" (arg1),
>>> +			"r" (arg2) :);
>>> +	return arg1;
>>> +}
>>> +
>>> +static struct rtas_args __prombss os_term_args;
>>> +
>>> +static void __init prom_rtas_os_term(char *str)
>>> +{
>>> +	phandle rtas_node;
>>> +	__be32 val;
>>> +	u32 token;
>>> +
>>> +	prom_printf("%s: start...\n", __func__);
>>> +	rtas_node = call_prom("finddevice", 1, 1, ADDR("/rtas"));
>>> +	prom_printf("rtas_node: %x\n", rtas_node);
>>> +	if (!PHANDLE_VALID(rtas_node))
>>> +		return;
>>> +
>>> +	val = 0;
>>> +	prom_getprop(rtas_node, "ibm,os-term", &val, sizeof(val));
>>> +	token = be32_to_cpu(val);
>>> +	prom_printf("ibm,os-term: %x\n", token);
>>> +	if (token == 0)
>>> +		prom_panic("Could not get token for ibm,os-term\n");
>>> +	os_term_args.token = cpu_to_be32(token);
>>> +	prom_rtas_os_term_hcall((uint64_t)&os_term_args);
>>> +}
>>> +#endif /* CONFIG_PPC_SVM */
>>> +
>>>  /*
>>>   * Allocate room for and instantiate RTAS
>>>   */
> 

-- 
Alexey

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2019-07-01  3:15 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-21  4:49 [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
2019-05-21  4:49 ` [PATCH 01/12] powerpc/pseries: Introduce option to build secure virtual machines Thiago Jung Bauermann
2019-05-21  4:49 ` [RFC PATCH 02/12] powerpc: Add support for adding an ESM blob to the zImage wrapper Thiago Jung Bauermann
2019-05-21  5:13   ` Christoph Hellwig
2019-05-21 15:09     ` Ram Pai
2019-05-21 23:15     ` Paul Mackerras
2019-05-21  4:49 ` [RFC PATCH 03/12] powerpc/prom_init: Add the ESM call to prom_init Thiago Jung Bauermann
2019-06-26  7:44   ` Alexey Kardashevskiy
2019-06-28 22:33     ` Thiago Jung Bauermann
2019-07-01  3:13       ` Alexey Kardashevskiy
2019-05-21  4:49 ` [PATCH 04/12] powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE Thiago Jung Bauermann
2019-05-21  4:49 ` [PATCH 05/12] powerpc/pseries: Add and use LPPACA_SIZE constant Thiago Jung Bauermann
2019-05-21  4:49 ` [PATCH 06/12] powerpc/pseries/svm: Use shared memory for LPPACA structures Thiago Jung Bauermann
2019-05-21  4:49 ` [PATCH 07/12] powerpc/pseries/svm: Use shared memory for Debug Trace Log (DTL) Thiago Jung Bauermann
2019-05-21  4:49 ` [PATCH 08/12] powerpc/pseries/svm: Export guest SVM status to user space via sysfs Thiago Jung Bauermann
2019-05-21  4:49 ` [PATCH 09/12] powerpc/pseries/svm: Disable doorbells in SVM guests Thiago Jung Bauermann
2019-05-21  4:49 ` [PATCH 10/12] powerpc/pseries/iommu: Don't use dma_iommu_ops on secure guests Thiago Jung Bauermann
2019-05-21  4:49 ` [PATCH 11/12] powerpc/pseries/svm: Force SWIOTLB for " Thiago Jung Bauermann
2019-05-21  5:15   ` Christoph Hellwig
2019-05-23  5:15     ` Thiago Jung Bauermann
2019-05-21  4:49 ` [PATCH 12/12] powerpc/configs: Enable secure guest support in pseries and ppc64 defconfigs Thiago Jung Bauermann
2019-06-07 14:47   ` [RFC PATCH 1/1] powerpc/pseries/svm: Unshare all pages before kexecing a new kernel Ram Pai
2019-06-01 17:11 ` [PATCH 00/12] Secure Virtual Machine Enablement Thiago Jung Bauermann
