* [PATCH v2 01/13] powerpc/pseries: Introduce option to build secure virtual machines
2019-07-13 6:00 [PATCH v2 00/13] Secure Virtual Machine Enablement Thiago Jung Bauermann
@ 2019-07-13 6:00 ` Thiago Jung Bauermann
2019-07-13 6:00 ` [RFC PATCH v2 02/13] powerpc: Add support for adding an ESM blob to the zImage wrapper Thiago Jung Bauermann
` (11 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-13 6:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
linux-kernel, Claudio Carvalho, Paul Mackerras,
Christoph Hellwig, Thiago Jung Bauermann
Introduce CONFIG_PPC_SVM to control support for secure guests and include
Ultravisor-related helpers when it is selected.
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
arch/powerpc/include/asm/ultravisor.h | 2 +-
arch/powerpc/kernel/Makefile | 4 +++-
arch/powerpc/platforms/pseries/Kconfig | 11 +++++++++++
3 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index a5e4516c8ddb..f5dc5af739b8 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -28,7 +28,7 @@ extern int early_init_dt_scan_ultravisor(unsigned long node, const char *uname,
* This call supports up to 6 arguments and 4 return arguments. Use
* UCALL_BUFSIZE to size the return argument buffer.
*/
-#if defined(CONFIG_PPC_POWERNV)
+#if defined(CONFIG_PPC_POWERNV) || defined(CONFIG_PPC_SVM)
long ucall(unsigned long opcode, unsigned long *retbuf, ...);
#else
static long ucall(unsigned long opcode, unsigned long *retbuf, ...)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 929f79d3e6a9..ea671f03eba2 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -155,7 +155,9 @@ endif
obj-$(CONFIG_EPAPR_PARAVIRT) += epapr_paravirt.o epapr_hcalls.o
obj-$(CONFIG_KVM_GUEST) += kvm.o kvm_emul.o
-obj-$(CONFIG_PPC_POWERNV) += ultravisor.o ucall.o
+ifneq ($(CONFIG_PPC_POWERNV)$(CONFIG_PPC_SVM),)
+obj-y += ultravisor.o ucall.o
+endif
# Disable GCOV, KCOV & sanitizers in odd or sensitive code
GCOV_PROFILE_prom_init.o := n
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index f7b484f55553..d09deb05bb66 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -145,3 +145,14 @@ config PAPR_SCM
tristate "Support for the PAPR Storage Class Memory interface"
help
Enable access to hypervisor provided storage class memory.
+
+config PPC_SVM
+ bool "Secure virtual machine (SVM) support for POWER"
+ depends on PPC_PSERIES
+ help
+ There are certain POWER platforms which support secure guests using
+ the Protected Execution Facility, with the help of an Ultravisor
+ executing below the hypervisor layer. This enables support for
+ those guests.
+
+ If unsure, say "N".
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [RFC PATCH v2 02/13] powerpc: Add support for adding an ESM blob to the zImage wrapper
2019-07-13 6:00 [PATCH v2 00/13] Secure Virtual Machine Enablement Thiago Jung Bauermann
2019-07-13 6:00 ` [PATCH v2 01/13] powerpc/pseries: Introduce option to build secure virtual machines Thiago Jung Bauermann
@ 2019-07-13 6:00 ` Thiago Jung Bauermann
2019-07-13 6:00 ` [PATCH v2 03/13] powerpc/prom_init: Add the ESM call to prom_init Thiago Jung Bauermann
` (10 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-13 6:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
linux-kernel, Claudio Carvalho, Paul Mackerras,
Christoph Hellwig, Thiago Jung Bauermann
From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
For secure VMs, the signing tool will create a ticket called the "ESM blob"
for the Enter Secure Mode ultravisor call with the signatures of the kernel
and initrd among other things.
This adds support to the wrapper script for adding that blob via the "-e"
option to the zImage.pseries.
It also adds code to the zImage wrapper itself to retrieve and, if necessary,
relocate the blob, and pass its address to Linux via the device-tree, to be
later consumed by prom_init.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[ bauerman: Minor adjustments to some comments. ]
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
arch/powerpc/boot/main.c | 41 ++++++++++++++++++++++++++++++++++
arch/powerpc/boot/ops.h | 2 ++
arch/powerpc/boot/wrapper | 24 +++++++++++++++++---
arch/powerpc/boot/zImage.lds.S | 8 +++++++
4 files changed, 72 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/boot/main.c b/arch/powerpc/boot/main.c
index 78aaf4ffd7ab..ca612efd3e81 100644
--- a/arch/powerpc/boot/main.c
+++ b/arch/powerpc/boot/main.c
@@ -150,6 +150,46 @@ static struct addr_range prep_initrd(struct addr_range vmlinux, void *chosen,
return (struct addr_range){(void *)initrd_addr, initrd_size};
}
+#ifdef __powerpc64__
+static void prep_esm_blob(struct addr_range vmlinux, void *chosen)
+{
+ unsigned long esm_blob_addr, esm_blob_size;
+
+ /* Do we have an ESM (Enter Secure Mode) blob? */
+ if (_esm_blob_end <= _esm_blob_start)
+ return;
+
+ printf("Attached ESM blob at 0x%p-0x%p\n\r",
+ _esm_blob_start, _esm_blob_end);
+ esm_blob_addr = (unsigned long)_esm_blob_start;
+ esm_blob_size = _esm_blob_end - _esm_blob_start;
+
+ /*
+ * If the ESM blob is too low it will be clobbered when the
+ * kernel relocates to its final location. In this case,
+ * allocate a safer place and move it.
+ */
+ if (esm_blob_addr < vmlinux.size) {
+ void *old_addr = (void *)esm_blob_addr;
+
+ printf("Allocating 0x%lx bytes for esm_blob ...\n\r",
+ esm_blob_size);
+ esm_blob_addr = (unsigned long)malloc(esm_blob_size);
+ if (!esm_blob_addr)
+ fatal("Can't allocate memory for ESM blob !\n\r");
+ printf("Relocating ESM blob 0x%lx <- 0x%p (0x%lx bytes)\n\r",
+ esm_blob_addr, old_addr, esm_blob_size);
+ memmove((void *)esm_blob_addr, old_addr, esm_blob_size);
+ }
+
+ /* Tell the kernel ESM blob address via device tree. */
+ setprop_val(chosen, "linux,esm-blob-start", (u32)(esm_blob_addr));
+ setprop_val(chosen, "linux,esm-blob-end", (u32)(esm_blob_addr + esm_blob_size));
+}
+#else
+static inline void prep_esm_blob(struct addr_range vmlinux, void *chosen) { }
+#endif
+
/* A buffer that may be edited by tools operating on a zImage binary so as to
* edit the command line passed to vmlinux (by setting /chosen/bootargs).
* The buffer is put in its own section so that tools may locate it easier.
@@ -218,6 +258,7 @@ void start(void)
vmlinux = prep_kernel();
initrd = prep_initrd(vmlinux, chosen,
loader_info.initrd_addr, loader_info.initrd_size);
+ prep_esm_blob(vmlinux, chosen);
prep_cmdline(chosen);
printf("Finalizing device tree...");
diff --git a/arch/powerpc/boot/ops.h b/arch/powerpc/boot/ops.h
index cd043726ed88..e0606766480f 100644
--- a/arch/powerpc/boot/ops.h
+++ b/arch/powerpc/boot/ops.h
@@ -251,6 +251,8 @@ extern char _initrd_start[];
extern char _initrd_end[];
extern char _dtb_start[];
extern char _dtb_end[];
+extern char _esm_blob_start[];
+extern char _esm_blob_end[];
static inline __attribute__((const))
int __ilog2_u32(u32 n)
diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
index 4ed18d63d892..ddb62b5b5b7a 100755
--- a/arch/powerpc/boot/wrapper
+++ b/arch/powerpc/boot/wrapper
@@ -14,6 +14,7 @@
# -i initrd specify initrd file
# -d devtree specify device-tree blob
# -s tree.dts specify device-tree source file (needs dtc installed)
+# -e esm_blob specify ESM blob for secure images
# -c cache $kernel.strip.gz (use if present & newer, else make)
# -C prefix specify command prefix for cross-building tools
# (strip, objcopy, ld)
@@ -38,6 +39,7 @@ platform=of
initrd=
dtb=
dts=
+esm_blob=
cacheit=
binary=
compression=.gz
@@ -61,9 +63,9 @@ tmpdir=.
usage() {
echo 'Usage: wrapper [-o output] [-p platform] [-i initrd]' >&2
- echo ' [-d devtree] [-s tree.dts] [-c] [-C cross-prefix]' >&2
- echo ' [-D datadir] [-W workingdir] [-Z (gz|xz|none)]' >&2
- echo ' [--no-compression] [vmlinux]' >&2
+ echo ' [-d devtree] [-s tree.dts] [-e esm_blob]' >&2
+ echo ' [-c] [-C cross-prefix] [-D datadir] [-W workingdir]' >&2
+ echo ' [-Z (gz|xz|none)] [--no-compression] [vmlinux]' >&2
exit 1
}
@@ -106,6 +108,11 @@ while [ "$#" -gt 0 ]; do
[ "$#" -gt 0 ] || usage
dtb="$1"
;;
+ -e)
+ shift
+ [ "$#" -gt 0 ] || usage
+ esm_blob="$1"
+ ;;
-s)
shift
[ "$#" -gt 0 ] || usage
@@ -219,9 +226,16 @@ objflags=-S
tmp=$tmpdir/zImage.$$.o
ksection=.kernel:vmlinux.strip
isection=.kernel:initrd
+esection=.kernel:esm_blob
link_address='0x400000'
make_space=y
+
+if [ -n "$esm_blob" -a "$platform" != "pseries" ]; then
+ echo "ESM blob not supported on non-pseries platforms" >&2
+ exit 1
+fi
+
case "$platform" in
of)
platformo="$object/of.o $object/epapr.o"
@@ -478,6 +492,10 @@ if [ -n "$dtb" ]; then
fi
fi
+if [ -n "$esm_blob" ]; then
+ addsec $tmp "$esm_blob" $esection
+fi
+
if [ "$platform" != "miboot" ]; then
if [ -n "$link_address" ] ; then
text_start="-Ttext $link_address"
diff --git a/arch/powerpc/boot/zImage.lds.S b/arch/powerpc/boot/zImage.lds.S
index 4ac1e36edfe7..a21f3a76e06f 100644
--- a/arch/powerpc/boot/zImage.lds.S
+++ b/arch/powerpc/boot/zImage.lds.S
@@ -68,6 +68,14 @@ SECTIONS
_initrd_end = .;
}
+ . = ALIGN(4096);
+ .kernel:esm_blob :
+ {
+ _esm_blob_start = .;
+ *(.kernel:esm_blob)
+ _esm_blob_end = .;
+ }
+
#ifdef CONFIG_PPC64_BOOT_WRAPPER
. = ALIGN(256);
.got :
* [PATCH v2 03/13] powerpc/prom_init: Add the ESM call to prom_init
2019-07-13 6:00 [PATCH v2 00/13] Secure Virtual Machine Enablement Thiago Jung Bauermann
2019-07-13 6:00 ` [PATCH v2 01/13] powerpc/pseries: Introduce option to build secure virtual machines Thiago Jung Bauermann
2019-07-13 6:00 ` [RFC PATCH v2 02/13] powerpc: Add support for adding an ESM blob to the zImage wrapper Thiago Jung Bauermann
@ 2019-07-13 6:00 ` Thiago Jung Bauermann
2019-07-18 8:11 ` Alexey Kardashevskiy
2019-07-13 6:00 ` [PATCH v2 04/13] powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE Thiago Jung Bauermann
` (9 subsequent siblings)
12 siblings, 1 reply; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-13 6:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
linux-kernel, Claudio Carvalho, Paul Mackerras,
Christoph Hellwig, Thiago Jung Bauermann
From: Ram Pai <linuxram@us.ibm.com>
Make the Enter-Secure-Mode (ESM) ultravisor call to switch the VM to secure
mode. Add "svm=" command line option to turn on switching to secure mode.
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
[ andmike: Generate an RTAS os-term hcall when the ESM ucall fails. ]
Signed-off-by: Michael Anderson <andmike@linux.ibm.com>
[ bauerman: Cleaned up the code a bit. ]
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
.../admin-guide/kernel-parameters.txt | 5 +
arch/powerpc/include/asm/ultravisor-api.h | 1 +
arch/powerpc/kernel/prom_init.c | 99 +++++++++++++++++++
3 files changed, 105 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 7b15abf7db21..c611891b5992 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4585,6 +4585,11 @@
/sys/power/pm_test). Only available when CONFIG_PM_DEBUG
is set. Default value is 5.
+ svm= [PPC]
+ Format: { on | off | y | n | 1 | 0 }
+ This parameter controls use of the Protected
+ Execution Facility on pSeries.
+
swapaccount=[0|1]
[KNL] Enable accounting of swap in memory resource
controller if no parameter or 1 is given or disable
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index c8180427fa01..fe9a0d8d7673 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -19,6 +19,7 @@
/* opcodes */
#define UV_WRITE_PATE 0xF104
+#define UV_ESM 0xF110
#define UV_RETURN 0xF11C
#define UV_REGISTER_MEM_SLOT 0xF120
#define UV_UNREGISTER_MEM_SLOT 0xF124
diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
index a3fb90bb5a39..6389a992451b 100644
--- a/arch/powerpc/kernel/prom_init.c
+++ b/arch/powerpc/kernel/prom_init.c
@@ -44,6 +44,7 @@
#include <asm/sections.h>
#include <asm/machdep.h>
#include <asm/asm-prototypes.h>
+#include <asm/ultravisor-api.h>
#include <linux/linux_logo.h>
@@ -175,6 +176,10 @@ static bool __prombss prom_radix_disable;
static bool __prombss prom_xive_disable;
#endif
+#ifdef CONFIG_PPC_SVM
+static bool __prombss prom_svm_enable;
+#endif
+
struct platform_support {
bool hash_mmu;
bool radix_mmu;
@@ -816,6 +821,17 @@ static void __init early_cmdline_parse(void)
prom_debug("XIVE disabled from cmdline\n");
}
#endif /* CONFIG_PPC_PSERIES */
+
+#ifdef CONFIG_PPC_SVM
+ opt = prom_strstr(prom_cmd_line, "svm=");
+ if (opt) {
+ bool val;
+
+ opt += sizeof("svm=") - 1;
+ if (!prom_strtobool(opt, &val))
+ prom_svm_enable = val;
+ }
+#endif /* CONFIG_PPC_SVM */
}
#ifdef CONFIG_PPC_PSERIES
@@ -1716,6 +1732,43 @@ static void __init prom_close_stdin(void)
}
}
+#ifdef CONFIG_PPC_SVM
+static int prom_rtas_hcall(uint64_t args)
+{
+ register uint64_t arg1 asm("r3") = H_RTAS;
+ register uint64_t arg2 asm("r4") = args;
+
+ asm volatile("sc 1\n" : "=r" (arg1) :
+ "r" (arg1),
+ "r" (arg2) :);
+ return arg1;
+}
+
+static struct rtas_args __prombss os_term_args;
+
+static void __init prom_rtas_os_term(char *str)
+{
+ phandle rtas_node;
+ __be32 val;
+ u32 token;
+
+ prom_debug("%s: start...\n", __func__);
+ rtas_node = call_prom("finddevice", 1, 1, ADDR("/rtas"));
+ prom_debug("rtas_node: %x\n", rtas_node);
+ if (!PHANDLE_VALID(rtas_node))
+ return;
+
+ val = 0;
+ prom_getprop(rtas_node, "ibm,os-term", &val, sizeof(val));
+ token = be32_to_cpu(val);
+ prom_debug("ibm,os-term: %x\n", token);
+ if (token == 0)
+ prom_panic("Could not get token for ibm,os-term\n");
+ os_term_args.token = cpu_to_be32(token);
+ prom_rtas_hcall((uint64_t)&os_term_args);
+}
+#endif /* CONFIG_PPC_SVM */
+
/*
* Allocate room for and instantiate RTAS
*/
@@ -3172,6 +3225,49 @@ static void unreloc_toc(void)
#endif
#endif
+#ifdef CONFIG_PPC_SVM
+/*
+ * Perform the Enter Secure Mode ultracall.
+ */
+static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
+{
+ register uint64_t func asm("r3") = UV_ESM;
+ register uint64_t arg1 asm("r4") = (uint64_t)kbase;
+ register uint64_t arg2 asm("r5") = (uint64_t)fdt;
+
+ asm volatile("sc 2\n"
+ : "=r"(func)
+ : "0"(func), "r"(arg1), "r"(arg2)
+ :);
+
+ return (int)func;
+}
+
+/*
+ * Call the Ultravisor to transfer us to secure memory if we have an ESM blob.
+ */
+static void setup_secure_guest(unsigned long kbase, unsigned long fdt)
+{
+ int ret;
+
+ if (!prom_svm_enable)
+ return;
+
+ /* Switch to secure mode. */
+ prom_printf("Switching to secure mode.\n");
+
+ ret = enter_secure_mode(kbase, fdt);
+ if (ret != U_SUCCESS) {
+ prom_printf("Returned %d from switching to secure mode.\n", ret);
+ prom_rtas_os_term("Switch to secure mode failed.\n");
+ }
+}
+#else
+static void setup_secure_guest(unsigned long kbase, unsigned long fdt)
+{
+}
+#endif /* CONFIG_PPC_SVM */
+
/*
* We enter here early on, when the Open Firmware prom is still
* handling exceptions and the MMU hash table for us.
@@ -3370,6 +3466,9 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
unreloc_toc();
#endif
+ /* Move to secure memory if we're supposed to be secure guests. */
+ setup_secure_guest(kbase, hdr);
+
__start(hdr, kbase, 0, 0, 0, 0, 0);
return 0;
* Re: [PATCH v2 03/13] powerpc/prom_init: Add the ESM call to prom_init
2019-07-13 6:00 ` [PATCH v2 03/13] powerpc/prom_init: Add the ESM call to prom_init Thiago Jung Bauermann
@ 2019-07-18 8:11 ` Alexey Kardashevskiy
2019-07-18 19:58 ` Segher Boessenkool
0 siblings, 1 reply; 21+ messages in thread
From: Alexey Kardashevskiy @ 2019-07-18 8:11 UTC (permalink / raw)
To: Thiago Jung Bauermann, linuxppc-dev
Cc: Anshuman Khandual, Mike Anderson, Ram Pai, linux-kernel,
Claudio Carvalho, Paul Mackerras, Christoph Hellwig
On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
> From: Ram Pai <linuxram@us.ibm.com>
>
> Make the Enter-Secure-Mode (ESM) ultravisor call to switch the VM to secure
> mode. Add "svm=" command line option to turn on switching to secure mode.
>
> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> [ andmike: Generate an RTAS os-term hcall when the ESM ucall fails. ]
> Signed-off-by: Michael Anderson <andmike@linux.ibm.com>
> [ bauerman: Cleaned up the code a bit. ]
> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
> ---
> .../admin-guide/kernel-parameters.txt | 5 +
> arch/powerpc/include/asm/ultravisor-api.h | 1 +
> arch/powerpc/kernel/prom_init.c | 99 +++++++++++++++++++
> 3 files changed, 105 insertions(+)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 7b15abf7db21..c611891b5992 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -4585,6 +4585,11 @@
> /sys/power/pm_test). Only available when CONFIG_PM_DEBUG
> is set. Default value is 5.
>
> + svm= [PPC]
> + Format: { on | off | y | n | 1 | 0 }
> + This parameter controls use of the Protected
> + Execution Facility on pSeries.
> +
> swapaccount=[0|1]
> [KNL] Enable accounting of swap in memory resource
> controller if no parameter or 1 is given or disable
> diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
> index c8180427fa01..fe9a0d8d7673 100644
> --- a/arch/powerpc/include/asm/ultravisor-api.h
> +++ b/arch/powerpc/include/asm/ultravisor-api.h
> @@ -19,6 +19,7 @@
>
> /* opcodes */
> #define UV_WRITE_PATE 0xF104
> +#define UV_ESM 0xF110
> #define UV_RETURN 0xF11C
> #define UV_REGISTER_MEM_SLOT 0xF120
> #define UV_UNREGISTER_MEM_SLOT 0xF124
> diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
> index a3fb90bb5a39..6389a992451b 100644
> --- a/arch/powerpc/kernel/prom_init.c
> +++ b/arch/powerpc/kernel/prom_init.c
> @@ -44,6 +44,7 @@
> #include <asm/sections.h>
> #include <asm/machdep.h>
> #include <asm/asm-prototypes.h>
> +#include <asm/ultravisor-api.h>
>
> #include <linux/linux_logo.h>
>
> @@ -175,6 +176,10 @@ static bool __prombss prom_radix_disable;
> static bool __prombss prom_xive_disable;
> #endif
>
> +#ifdef CONFIG_PPC_SVM
> +static bool __prombss prom_svm_enable;
> +#endif
> +
> struct platform_support {
> bool hash_mmu;
> bool radix_mmu;
> @@ -816,6 +821,17 @@ static void __init early_cmdline_parse(void)
> prom_debug("XIVE disabled from cmdline\n");
> }
> #endif /* CONFIG_PPC_PSERIES */
> +
> +#ifdef CONFIG_PPC_SVM
> + opt = prom_strstr(prom_cmd_line, "svm=");
> + if (opt) {
> + bool val;
> +
> + opt += sizeof("svm=") - 1;
> + if (!prom_strtobool(opt, &val))
> + prom_svm_enable = val;
> + }
> +#endif /* CONFIG_PPC_SVM */
> }
>
> #ifdef CONFIG_PPC_PSERIES
> @@ -1716,6 +1732,43 @@ static void __init prom_close_stdin(void)
> }
> }
>
> +#ifdef CONFIG_PPC_SVM
> +static int prom_rtas_hcall(uint64_t args)
> +{
> + register uint64_t arg1 asm("r3") = H_RTAS;
> + register uint64_t arg2 asm("r4") = args;
> +
> + asm volatile("sc 1\n" : "=r" (arg1) :
> + "r" (arg1),
> + "r" (arg2) :);
> + return arg1;
> +}
> +
> +static struct rtas_args __prombss os_term_args;
> +
> +static void __init prom_rtas_os_term(char *str)
> +{
> + phandle rtas_node;
> + __be32 val;
> + u32 token;
> +
> + prom_debug("%s: start...\n", __func__);
> + rtas_node = call_prom("finddevice", 1, 1, ADDR("/rtas"));
> + prom_debug("rtas_node: %x\n", rtas_node);
> + if (!PHANDLE_VALID(rtas_node))
> + return;
> +
> + val = 0;
> + prom_getprop(rtas_node, "ibm,os-term", &val, sizeof(val));
> + token = be32_to_cpu(val);
> + prom_debug("ibm,os-term: %x\n", token);
> + if (token == 0)
> + prom_panic("Could not get token for ibm,os-term\n");
> + os_term_args.token = cpu_to_be32(token);
> + prom_rtas_hcall((uint64_t)&os_term_args);
> +}
> +#endif /* CONFIG_PPC_SVM */
> +
> /*
> * Allocate room for and instantiate RTAS
> */
> @@ -3172,6 +3225,49 @@ static void unreloc_toc(void)
> #endif
> #endif
>
> +#ifdef CONFIG_PPC_SVM
> +/*
> + * Perform the Enter Secure Mode ultracall.
> + */
> +static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
> +{
> + register uint64_t func asm("r3") = UV_ESM;
> + register uint64_t arg1 asm("r4") = (uint64_t)kbase;
> + register uint64_t arg2 asm("r5") = (uint64_t)fdt;
What does UV do with kbase and fdt precisely? Few words in the commit
log will do.
> +
> + asm volatile("sc 2\n"
> + : "=r"(func)
> + : "0"(func), "r"(arg1), "r"(arg2)
> + :);
> +
> + return (int)func;
And why "func"? Is it "function"? Weird name. Thanks,
> +}
> +
> +/*
> + * Call the Ultravisor to transfer us to secure memory if we have an ESM blob.
> + */
> +static void setup_secure_guest(unsigned long kbase, unsigned long fdt)
> +{
> + int ret;
> +
> + if (!prom_svm_enable)
> + return;
> +
> + /* Switch to secure mode. */
> + prom_printf("Switching to secure mode.\n");
> +
> + ret = enter_secure_mode(kbase, fdt);
> + if (ret != U_SUCCESS) {
> + prom_printf("Returned %d from switching to secure mode.\n", ret);
> + prom_rtas_os_term("Switch to secure mode failed.\n");
> + }
> +}
> +#else
> +static void setup_secure_guest(unsigned long kbase, unsigned long fdt)
> +{
> +}
> +#endif /* CONFIG_PPC_SVM */
> +
> /*
> * We enter here early on, when the Open Firmware prom is still
> * handling exceptions and the MMU hash table for us.
> @@ -3370,6 +3466,9 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
> unreloc_toc();
> #endif
>
> + /* Move to secure memory if we're supposed to be secure guests. */
> + setup_secure_guest(kbase, hdr);
> +
> __start(hdr, kbase, 0, 0, 0, 0, 0);
>
> return 0;
>
--
Alexey
* Re: [PATCH v2 03/13] powerpc/prom_init: Add the ESM call to prom_init
2019-07-18 8:11 ` Alexey Kardashevskiy
@ 2019-07-18 19:58 ` Segher Boessenkool
2019-07-18 21:28 ` Thiago Jung Bauermann
0 siblings, 1 reply; 21+ messages in thread
From: Segher Boessenkool @ 2019-07-18 19:58 UTC (permalink / raw)
To: Alexey Kardashevskiy
Cc: Anshuman Khandual, Mike Anderson, Ram Pai, linux-kernel,
Claudio Carvalho, Paul Mackerras, linuxppc-dev,
Christoph Hellwig, Thiago Jung Bauermann
(Sorry to hijack your reply).
On Thu, Jul 18, 2019 at 06:11:48PM +1000, Alexey Kardashevskiy wrote:
> On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
> >From: Ram Pai <linuxram@us.ibm.com>
> >+static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
> >+{
> >+ register uint64_t func asm("r3") = UV_ESM;
> >+ register uint64_t arg1 asm("r4") = (uint64_t)kbase;
> >+ register uint64_t arg2 asm("r5") = (uint64_t)fdt;
>
> What does UV do with kbase and fdt precisely? Few words in the commit
> log will do.
>
> >+
> >+ asm volatile("sc 2\n"
> >+ : "=r"(func)
> >+ : "0"(func), "r"(arg1), "r"(arg2)
> >+ :);
> >+
> >+ return (int)func;
>
> And why "func"? Is it "function"? Weird name. Thanks,
Maybe the three vars should just be called "r3", "r4", and "r5" --
r3 is used as return value as well, so "func" isn't a great name for it.
Some other comments about this inline asm:
The "\n" makes the generated asm look funny and has no other function.
Instead of using backreferences you can use a "+" constraint, "inout".
Empty clobber list is strange.
Casts to the return type, like most other casts, are an invitation to
bugs and not actually useful.
So this can be written
static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
{
register uint64_t r3 asm("r3") = UV_ESM;
register uint64_t r4 asm("r4") = kbase;
register uint64_t r5 asm("r5") = fdt;
asm volatile("sc 2" : "+r"(r3) : "r"(r4), "r"(r5));
return r3;
}
(and it probably should use u64 instead of both uint64_t and unsigned long?)
Segher
* Re: [PATCH v2 03/13] powerpc/prom_init: Add the ESM call to prom_init
2019-07-18 19:58 ` Segher Boessenkool
@ 2019-07-18 21:28 ` Thiago Jung Bauermann
2019-07-19 0:09 ` Alexey Kardashevskiy
0 siblings, 1 reply; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-18 21:28 UTC (permalink / raw)
To: Segher Boessenkool
Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
linux-kernel, Claudio Carvalho, Paul Mackerras, linuxppc-dev,
Christoph Hellwig
Hello Segher,
Thanks for your review and suggestions!
Segher Boessenkool <segher@kernel.crashing.org> writes:
> (Sorry to hijack your reply).
>
> On Thu, Jul 18, 2019 at 06:11:48PM +1000, Alexey Kardashevskiy wrote:
>> On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
>> >From: Ram Pai <linuxram@us.ibm.com>
>> >+static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
>> >+{
>> >+ register uint64_t func asm("r3") = UV_ESM;
>> >+ register uint64_t arg1 asm("r4") = (uint64_t)kbase;
>> >+ register uint64_t arg2 asm("r5") = (uint64_t)fdt;
>>
>> What does UV do with kbase and fdt precisely? Few words in the commit
>> log will do.
>>
>> >+
>> >+ asm volatile("sc 2\n"
>> >+ : "=r"(func)
>> >+ : "0"(func), "r"(arg1), "r"(arg2)
>> >+ :);
>> >+
>> >+ return (int)func;
>>
>> And why "func"? Is it "function"? Weird name. Thanks,
Yes, I believe func is for function. Perhaps ucall would be clearer
if the variable wasn't reused for the return value as Segher points out.
> Maybe the three vars should just be called "r3", "r4", and "r5" --
> r3 is used as return value as well, so "func" isn't a great name for it.
Yes, that does seem simpler.
> Some other comments about this inline asm:
>
> The "\n" makes the generated asm look funny and has no other function.
> Instead of using backreferences you can use a "+" constraint, "inout".
> Empty clobber list is strange.
> Casts to the return type, like most other casts, are an invitation to
> bugs and not actually useful.
>
> So this can be written
>
> static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
> {
> register uint64_t r3 asm("r3") = UV_ESM;
> register uint64_t r4 asm("r4") = kbase;
> register uint64_t r5 asm("r5") = fdt;
>
> asm volatile("sc 2" : "+r"(r3) : "r"(r4), "r"(r5));
>
> return r3;
> }
I'll adopt your version, it is cleaner indeed. Thanks for providing it!
> (and it probably should use u64 instead of both uint64_t and unsigned long?)
Almost all of prom_init.c uses unsigned long, with u64 in just a few
places. uint64_t isn't used anywhere else in the file. I'll switch to
unsigned long everywhere, since this feature is only for 64 bit.
--
Thiago Jung Bauermann
IBM Linux Technology Center
* Re: [PATCH v2 03/13] powerpc/prom_init: Add the ESM call to prom_init
2019-07-18 21:28 ` Thiago Jung Bauermann
@ 2019-07-19 0:09 ` Alexey Kardashevskiy
2019-07-19 0:48 ` Thiago Jung Bauermann
0 siblings, 1 reply; 21+ messages in thread
From: Alexey Kardashevskiy @ 2019-07-19 0:09 UTC (permalink / raw)
To: Thiago Jung Bauermann
Cc: Anshuman Khandual, Mike Anderson, Ram Pai, linux-kernel,
Claudio Carvalho, Paul Mackerras, linuxppc-dev,
Christoph Hellwig
On 19/07/2019 07:28, Thiago Jung Bauermann wrote:
>
> Hello Segher,
>
> Thanks for your review and suggestions!
>
> Segher Boessenkool <segher@kernel.crashing.org> writes:
>
>> (Sorry to hijack your reply).
>>
>> On Thu, Jul 18, 2019 at 06:11:48PM +1000, Alexey Kardashevskiy wrote:
>>> On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
>>>> From: Ram Pai <linuxram@us.ibm.com>
>>>> +static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
>>>> +{
>>>> + register uint64_t func asm("r3") = UV_ESM;
>>>> + register uint64_t arg1 asm("r4") = (uint64_t)kbase;
>>>> + register uint64_t arg2 asm("r5") = (uint64_t)fdt;
>>>
>>> What does UV do with kbase and fdt precisely? Few words in the commit
>>> log will do.
What about this one? :)
>>>
>>>> +
>>>> + asm volatile("sc 2\n"
>>>> + : "=r"(func)
>>>> + : "0"(func), "r"(arg1), "r"(arg2)
>>>> + :);
>>>> +
>>>> + return (int)func;
>>>
>>> And why "func"? Is it "function"? Weird name. Thanks,
>
> Yes, I believe func is for function. Perhaps ucall would be clearer
> if the variable wasn't reused for the return value as Segher points out.
>
>> Maybe the three vars should just be called "r3", "r4", and "r5" --
>> r3 is used as return value as well, so "func" isn't a great name for it.
>
> Yes, that does seem simpler.
>
>> Some other comments about this inline asm:
>>
>> The "\n" makes the generated asm look funny and has no other function.
>> Instead of using backreferences you can use a "+" constraint, "inout".
>> Empty clobber list is strange.
>> Casts to the return type, like most other casts, are an invitation to
>> bugs and not actually useful.
>>
>> So this can be written
>>
>> static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
>> {
>> register uint64_t r3 asm("r3") = UV_ESM;
>> register uint64_t r4 asm("r4") = kbase;
>> register uint64_t r5 asm("r5") = fdt;
>>
>> asm volatile("sc 2" : "+r"(r3) : "r"(r4), "r"(r5));
>>
>> return r3;
>> }
>
> I'll adopt your version, it is cleaner inded. Thanks for providing it!
>
>> (and it probably should use u64 instead of both uint64_t and unsigned long?)
>
> Almost all of prom_init.c uses unsigned long, with u64 in just a few
> places. uint64_t isn't used anywhere else in the file. I'll switch to
> unsigned long everywhere, since this feature is only for 64 bit.
>
--
Alexey
* Re: [PATCH v2 03/13] powerpc/prom_init: Add the ESM call to prom_init
2019-07-19 0:09 ` Alexey Kardashevskiy
@ 2019-07-19 0:48 ` Thiago Jung Bauermann
0 siblings, 0 replies; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-19 0:48 UTC (permalink / raw)
To: Alexey Kardashevskiy
Cc: Anshuman Khandual, Mike Anderson, Ram Pai, linux-kernel,
Claudio Carvalho, Paul Mackerras, linuxppc-dev,
Christoph Hellwig
Alexey Kardashevskiy <aik@ozlabs.ru> writes:
> On 19/07/2019 07:28, Thiago Jung Bauermann wrote:
>>
>> Hello Segher,
>>
>> Thanks for your review and suggestions!
>>
>> Segher Boessenkool <segher@kernel.crashing.org> writes:
>>
>>> (Sorry to hijack your reply).
>>>
>>> On Thu, Jul 18, 2019 at 06:11:48PM +1000, Alexey Kardashevskiy wrote:
>>>> On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
>>>>> From: Ram Pai <linuxram@us.ibm.com>
>>>>> +static int enter_secure_mode(unsigned long kbase, unsigned long fdt)
>>>>> +{
>>>>> + register uint64_t func asm("r3") = UV_ESM;
>>>>> + register uint64_t arg1 asm("r4") = (uint64_t)kbase;
>>>>> + register uint64_t arg2 asm("r5") = (uint64_t)fdt;
>>>>
>>>> What does UV do with kbase and fdt precisely? Few words in the commit
>>>> log will do.
>
>
> What about this one? :)
Sorry, I don't have an elaborate answer yet. The non-elaborate answer is
that the ultravisor uses the kbase and fdt as part of integrity checking
of the secure guest.
--
Thiago Jung Bauermann
IBM Linux Technology Center
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v2 04/13] powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE
2019-07-13 6:00 [PATCH v2 00/13] Secure Virtual Machine Enablement Thiago Jung Bauermann
` (2 preceding siblings ...)
2019-07-13 6:00 ` [PATCH v2 03/13] powerpc/prom_init: Add the ESM call to prom_init Thiago Jung Bauermann
@ 2019-07-13 6:00 ` Thiago Jung Bauermann
2019-07-18 8:13 ` Alexey Kardashevskiy
2019-07-13 6:00 ` [PATCH v2 05/13] powerpc/pseries: Add and use LPPACA_SIZE constant Thiago Jung Bauermann
` (8 subsequent siblings)
12 siblings, 1 reply; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-13 6:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
linux-kernel, Claudio Carvalho, Paul Mackerras,
Christoph Hellwig, Thiago Jung Bauermann
From: Ram Pai <linuxram@us.ibm.com>
These functions are used when the guest wants to grant the hypervisor
access to certain pages.
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
arch/powerpc/include/asm/ultravisor-api.h | 2 ++
arch/powerpc/include/asm/ultravisor.h | 15 +++++++++++++++
2 files changed, 17 insertions(+)
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index fe9a0d8d7673..c7513bbadf57 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -25,6 +25,8 @@
#define UV_UNREGISTER_MEM_SLOT 0xF124
#define UV_PAGE_IN 0xF128
#define UV_PAGE_OUT 0xF12C
+#define UV_SHARE_PAGE 0xF130
+#define UV_UNSHARE_PAGE 0xF134
#define UV_PAGE_INVAL 0xF138
#define UV_SVM_TERMINATE 0xF13C
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index f5dc5af739b8..f7418b663a0e 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -91,6 +91,21 @@ static inline int uv_svm_terminate(u64 lpid)
return ucall(UV_SVM_TERMINATE, retbuf, lpid);
}
+
+static inline int uv_share_page(u64 pfn, u64 npages)
+{
+ unsigned long retbuf[UCALL_BUFSIZE];
+
+ return ucall(UV_SHARE_PAGE, retbuf, pfn, npages);
+}
+
+static inline int uv_unshare_page(u64 pfn, u64 npages)
+{
+ unsigned long retbuf[UCALL_BUFSIZE];
+
+ return ucall(UV_UNSHARE_PAGE, retbuf, pfn, npages);
+}
+
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_POWERPC_ULTRAVISOR_H */
^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [PATCH v2 04/13] powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE
2019-07-13 6:00 ` [PATCH v2 04/13] powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE Thiago Jung Bauermann
@ 2019-07-18 8:13 ` Alexey Kardashevskiy
2019-07-18 20:12 ` Thiago Jung Bauermann
0 siblings, 1 reply; 21+ messages in thread
From: Alexey Kardashevskiy @ 2019-07-18 8:13 UTC (permalink / raw)
To: Thiago Jung Bauermann, linuxppc-dev
Cc: Anshuman Khandual, Mike Anderson, Ram Pai, linux-kernel,
Claudio Carvalho, Paul Mackerras, Christoph Hellwig
On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
> From: Ram Pai <linuxram@us.ibm.com>
>
> These functions are used when the guest wants to grant the hypervisor
> access to certain pages.
>
> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
> ---
> arch/powerpc/include/asm/ultravisor-api.h | 2 ++
> arch/powerpc/include/asm/ultravisor.h | 15 +++++++++++++++
> 2 files changed, 17 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
> index fe9a0d8d7673..c7513bbadf57 100644
> --- a/arch/powerpc/include/asm/ultravisor-api.h
> +++ b/arch/powerpc/include/asm/ultravisor-api.h
> @@ -25,6 +25,8 @@
> #define UV_UNREGISTER_MEM_SLOT 0xF124
> #define UV_PAGE_IN 0xF128
> #define UV_PAGE_OUT 0xF12C
> +#define UV_SHARE_PAGE 0xF130
> +#define UV_UNSHARE_PAGE 0xF134
> #define UV_PAGE_INVAL 0xF138
> #define UV_SVM_TERMINATE 0xF13C
>
> diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
> index f5dc5af739b8..f7418b663a0e 100644
> --- a/arch/powerpc/include/asm/ultravisor.h
> +++ b/arch/powerpc/include/asm/ultravisor.h
> @@ -91,6 +91,21 @@ static inline int uv_svm_terminate(u64 lpid)
>
> return ucall(UV_SVM_TERMINATE, retbuf, lpid);
> }
> +
> +static inline int uv_share_page(u64 pfn, u64 npages)
> +{
> + unsigned long retbuf[UCALL_BUFSIZE];
> +
> + return ucall(UV_SHARE_PAGE, retbuf, pfn, npages);
What is in that retbuf? Can you pass NULL instead?
> +}
> +
> +static inline int uv_unshare_page(u64 pfn, u64 npages)
> +{
> + unsigned long retbuf[UCALL_BUFSIZE];
> +
> + return ucall(UV_UNSHARE_PAGE, retbuf, pfn, npages);
> +}
> +
> #endif /* !__ASSEMBLY__ */
>
> #endif /* _ASM_POWERPC_ULTRAVISOR_H */
>
--
Alexey
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH v2 04/13] powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE
2019-07-18 8:13 ` Alexey Kardashevskiy
@ 2019-07-18 20:12 ` Thiago Jung Bauermann
0 siblings, 0 replies; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-18 20:12 UTC (permalink / raw)
To: Alexey Kardashevskiy
Cc: Anshuman Khandual, Mike Anderson, Ram Pai, linux-kernel,
Claudio Carvalho, Paul Mackerras, linuxppc-dev,
Christoph Hellwig
Hello Alexey,
Thanks for your review!
Alexey Kardashevskiy <aik@ozlabs.ru> writes:
> On 13/07/2019 16:00, Thiago Jung Bauermann wrote:
>> From: Ram Pai <linuxram@us.ibm.com>
>>
>> These functions are used when the guest wants to grant the hypervisor
>> access to certain pages.
>>
>> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
>> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
>> ---
>> arch/powerpc/include/asm/ultravisor-api.h | 2 ++
>> arch/powerpc/include/asm/ultravisor.h | 15 +++++++++++++++
>> 2 files changed, 17 insertions(+)
>>
>> diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
>> index fe9a0d8d7673..c7513bbadf57 100644
>> --- a/arch/powerpc/include/asm/ultravisor-api.h
>> +++ b/arch/powerpc/include/asm/ultravisor-api.h
>> @@ -25,6 +25,8 @@
>> #define UV_UNREGISTER_MEM_SLOT 0xF124
>> #define UV_PAGE_IN 0xF128
>> #define UV_PAGE_OUT 0xF12C
>> +#define UV_SHARE_PAGE 0xF130
>> +#define UV_UNSHARE_PAGE 0xF134
>> #define UV_PAGE_INVAL 0xF138
>> #define UV_SVM_TERMINATE 0xF13C
>> diff --git a/arch/powerpc/include/asm/ultravisor.h
>> b/arch/powerpc/include/asm/ultravisor.h
>> index f5dc5af739b8..f7418b663a0e 100644
>> --- a/arch/powerpc/include/asm/ultravisor.h
>> +++ b/arch/powerpc/include/asm/ultravisor.h
>> @@ -91,6 +91,21 @@ static inline int uv_svm_terminate(u64 lpid)
>> return ucall(UV_SVM_TERMINATE, retbuf, lpid);
>> }
>> +
>> +static inline int uv_share_page(u64 pfn, u64 npages)
>> +{
>> + unsigned long retbuf[UCALL_BUFSIZE];
>> +
>> + return ucall(UV_SHARE_PAGE, retbuf, pfn, npages);
>
>
> What is in that retbuf? Can you pass NULL instead?
I think so, that buffer isn't used actually. Claudio is working on a
ucall_norets() which doesn't take the buffer and I can switch to that.
--
Thiago Jung Bauermann
IBM Linux Technology Center
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v2 05/13] powerpc/pseries: Add and use LPPACA_SIZE constant
2019-07-13 6:00 [PATCH v2 00/13] Secure Virtual Machine Enablement Thiago Jung Bauermann
` (3 preceding siblings ...)
2019-07-13 6:00 ` [PATCH v2 04/13] powerpc/pseries/svm: Add helpers for UV_SHARE_PAGE and UV_UNSHARE_PAGE Thiago Jung Bauermann
@ 2019-07-13 6:00 ` Thiago Jung Bauermann
2019-07-13 6:00 ` [PATCH v2 06/13] powerpc/pseries/svm: Use shared memory for LPPACA structures Thiago Jung Bauermann
` (7 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-13 6:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: Alexey Kardashevskiy, Anshuman Khandual, Alexey Kardashevskiy,
Mike Anderson, Ram Pai, linux-kernel, Claudio Carvalho,
Paul Mackerras, Christoph Hellwig, Thiago Jung Bauermann
Helps document what the hard-coded number means.
Also take the opportunity to fix an #endif comment.
Suggested-by: Alexey Kardashevskiy <aik@linux.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
arch/powerpc/kernel/paca.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index 9cc91d03ab62..854105db5cff 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -56,6 +56,8 @@ static void *__init alloc_paca_data(unsigned long size, unsigned long align,
#ifdef CONFIG_PPC_PSERIES
+#define LPPACA_SIZE 0x400
+
/*
* See asm/lppaca.h for more detail.
*
@@ -69,7 +71,7 @@ static inline void init_lppaca(struct lppaca *lppaca)
*lppaca = (struct lppaca) {
.desc = cpu_to_be32(0xd397d781), /* "LpPa" */
- .size = cpu_to_be16(0x400),
+ .size = cpu_to_be16(LPPACA_SIZE),
.fpregs_in_use = 1,
.slb_count = cpu_to_be16(64),
.vmxregs_in_use = 0,
@@ -79,19 +81,18 @@ static inline void init_lppaca(struct lppaca *lppaca)
static struct lppaca * __init new_lppaca(int cpu, unsigned long limit)
{
struct lppaca *lp;
- size_t size = 0x400;
- BUILD_BUG_ON(size < sizeof(struct lppaca));
+ BUILD_BUG_ON(sizeof(struct lppaca) > LPPACA_SIZE);
if (early_cpu_has_feature(CPU_FTR_HVMODE))
return NULL;
- lp = alloc_paca_data(size, 0x400, limit, cpu);
+ lp = alloc_paca_data(LPPACA_SIZE, 0x400, limit, cpu);
init_lppaca(lp);
return lp;
}
-#endif /* CONFIG_PPC_BOOK3S */
+#endif /* CONFIG_PPC_PSERIES */
#ifdef CONFIG_PPC_BOOK3S_64
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH v2 06/13] powerpc/pseries/svm: Use shared memory for LPPACA structures
2019-07-13 6:00 [PATCH v2 00/13] Secure Virtual Machine Enablement Thiago Jung Bauermann
` (4 preceding siblings ...)
2019-07-13 6:00 ` [PATCH v2 05/13] powerpc/pseries: Add and use LPPACA_SIZE constant Thiago Jung Bauermann
@ 2019-07-13 6:00 ` Thiago Jung Bauermann
2019-07-13 6:00 ` [PATCH v2 07/13] powerpc/pseries/svm: Use shared memory for Debug Trace Log (DTL) Thiago Jung Bauermann
` (6 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-13 6:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
linux-kernel, Claudio Carvalho, Paul Mackerras,
Christoph Hellwig, Thiago Jung Bauermann, Anshuman Khandual
From: Anshuman Khandual <khandual@linux.vnet.ibm.com>
LPPACA structures need to be shared with the host, so they must live in
shared memory. Instead of allocating an individual chunk of memory from
memblock for each structure, allocate one contiguous chunk and convert it
to shared memory. Subsequent allocation requests are then served from that
chunk, which is already shared memory for all structures.
While we are able to use a kmem_cache constructor for the Debug Trace Log,
LPPACAs are allocated very early in the boot process (before SLUB is
available) so we need to use a simpler scheme here.
Introduce the helper is_secure_guest(), which uses the S bit of the MSR to
tell whether we're running as a secure guest.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
arch/powerpc/include/asm/svm.h | 26 ++++++++++++++++++++
arch/powerpc/kernel/paca.c | 43 +++++++++++++++++++++++++++++++++-
2 files changed, 68 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/svm.h b/arch/powerpc/include/asm/svm.h
new file mode 100644
index 000000000000..fef3740f46a6
--- /dev/null
+++ b/arch/powerpc/include/asm/svm.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * SVM helper functions
+ *
+ * Copyright 2019 Anshuman Khandual, IBM Corporation.
+ */
+
+#ifndef _ASM_POWERPC_SVM_H
+#define _ASM_POWERPC_SVM_H
+
+#ifdef CONFIG_PPC_SVM
+
+static inline bool is_secure_guest(void)
+{
+ return mfmsr() & MSR_S;
+}
+
+#else /* CONFIG_PPC_SVM */
+
+static inline bool is_secure_guest(void)
+{
+ return false;
+}
+
+#endif /* CONFIG_PPC_SVM */
+#endif /* _ASM_POWERPC_SVM_H */
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index 854105db5cff..a9622f4b45bb 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -18,6 +18,8 @@
#include <asm/sections.h>
#include <asm/pgtable.h>
#include <asm/kexec.h>
+#include <asm/svm.h>
+#include <asm/ultravisor.h>
#include "setup.h"
@@ -58,6 +60,41 @@ static void *__init alloc_paca_data(unsigned long size, unsigned long align,
#define LPPACA_SIZE 0x400
+static void *__init alloc_shared_lppaca(unsigned long size, unsigned long align,
+ unsigned long limit, int cpu)
+{
+ size_t shared_lppaca_total_size = PAGE_ALIGN(nr_cpu_ids * LPPACA_SIZE);
+ static unsigned long shared_lppaca_size;
+ static void *shared_lppaca;
+ void *ptr;
+
+ if (!shared_lppaca) {
+ memblock_set_bottom_up(true);
+
+ shared_lppaca =
+ memblock_alloc_try_nid(shared_lppaca_total_size,
+ PAGE_SIZE, MEMBLOCK_LOW_LIMIT,
+ limit, NUMA_NO_NODE);
+ if (!shared_lppaca)
+ panic("cannot allocate shared data");
+
+ memblock_set_bottom_up(false);
+ uv_share_page(PHYS_PFN(__pa(shared_lppaca)),
+ shared_lppaca_total_size >> PAGE_SHIFT);
+ }
+
+ ptr = shared_lppaca + shared_lppaca_size;
+ shared_lppaca_size += size;
+
+ /*
+ * This is very early in boot, so no harm done if the kernel crashes at
+ * this point.
+ */
+ BUG_ON(shared_lppaca_size >= shared_lppaca_total_size);
+
+ return ptr;
+}
+
/*
* See asm/lppaca.h for more detail.
*
@@ -87,7 +124,11 @@ static struct lppaca * __init new_lppaca(int cpu, unsigned long limit)
if (early_cpu_has_feature(CPU_FTR_HVMODE))
return NULL;
- lp = alloc_paca_data(LPPACA_SIZE, 0x400, limit, cpu);
+ if (is_secure_guest())
+ lp = alloc_shared_lppaca(LPPACA_SIZE, 0x400, limit, cpu);
+ else
+ lp = alloc_paca_data(LPPACA_SIZE, 0x400, limit, cpu);
+
init_lppaca(lp);
return lp;
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH v2 07/13] powerpc/pseries/svm: Use shared memory for Debug Trace Log (DTL)
2019-07-13 6:00 [PATCH v2 00/13] Secure Virtual Machine Enablement Thiago Jung Bauermann
` (5 preceding siblings ...)
2019-07-13 6:00 ` [PATCH v2 06/13] powerpc/pseries/svm: Use shared memory for LPPACA structures Thiago Jung Bauermann
@ 2019-07-13 6:00 ` Thiago Jung Bauermann
2019-07-13 6:00 ` [PATCH v2 08/13] powerpc/pseries/svm: Unshare all pages before kexecing a new kernel Thiago Jung Bauermann
` (5 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-13 6:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
linux-kernel, Claudio Carvalho, Paul Mackerras,
Christoph Hellwig, Thiago Jung Bauermann, Anshuman Khandual
From: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Secure guests need to share the DTL buffers with the hypervisor. To that
end, use a kmem_cache constructor which converts the underlying
buddy-allocated SLUB cache pages into shared memory.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
arch/powerpc/include/asm/svm.h | 5 ++++
arch/powerpc/platforms/pseries/Makefile | 1 +
arch/powerpc/platforms/pseries/setup.c | 5 +++-
arch/powerpc/platforms/pseries/svm.c | 40 +++++++++++++++++++++++++
4 files changed, 50 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/svm.h b/arch/powerpc/include/asm/svm.h
index fef3740f46a6..f253116c31fc 100644
--- a/arch/powerpc/include/asm/svm.h
+++ b/arch/powerpc/include/asm/svm.h
@@ -15,6 +15,9 @@ static inline bool is_secure_guest(void)
return mfmsr() & MSR_S;
}
+void dtl_cache_ctor(void *addr);
+#define get_dtl_cache_ctor() (is_secure_guest() ? dtl_cache_ctor : NULL)
+
#else /* CONFIG_PPC_SVM */
static inline bool is_secure_guest(void)
@@ -22,5 +25,7 @@ static inline bool is_secure_guest(void)
return false;
}
+#define get_dtl_cache_ctor() NULL
+
#endif /* CONFIG_PPC_SVM */
#endif /* _ASM_POWERPC_SVM_H */
diff --git a/arch/powerpc/platforms/pseries/Makefile b/arch/powerpc/platforms/pseries/Makefile
index ab3d59aeacca..a420ef4c9d8e 100644
--- a/arch/powerpc/platforms/pseries/Makefile
+++ b/arch/powerpc/platforms/pseries/Makefile
@@ -26,6 +26,7 @@ obj-$(CONFIG_IBMVIO) += vio.o
obj-$(CONFIG_IBMEBUS) += ibmebus.o
obj-$(CONFIG_PAPR_SCM) += papr_scm.o
obj-$(CONFIG_PPC_SPLPAR) += vphn.o
+obj-$(CONFIG_PPC_SVM) += svm.o
ifdef CONFIG_PPC_PSERIES
obj-$(CONFIG_SUSPEND) += suspend.o
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index cb418d2bb1ac..3009b5bd11d2 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -73,6 +73,7 @@
#include <asm/security_features.h>
#include <asm/asm-const.h>
#include <asm/swiotlb.h>
+#include <asm/svm.h>
#include "pseries.h"
#include "../../../../drivers/pci/pci.h"
@@ -301,8 +302,10 @@ static inline int alloc_dispatch_logs(void)
static int alloc_dispatch_log_kmem_cache(void)
{
+ void (*ctor)(void *) = get_dtl_cache_ctor();
+
dtl_cache = kmem_cache_create("dtl", DISPATCH_LOG_BYTES,
- DISPATCH_LOG_BYTES, 0, NULL);
+ DISPATCH_LOG_BYTES, 0, ctor);
if (!dtl_cache) {
pr_warn("Failed to create dispatch trace log buffer cache\n");
pr_warn("Stolen time statistics will be unreliable\n");
diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
new file mode 100644
index 000000000000..c508196f7c83
--- /dev/null
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Secure VM platform
+ *
+ * Copyright 2019 IBM Corporation
+ * Author: Anshuman Khandual <khandual@linux.vnet.ibm.com>
+ */
+
+#include <linux/mm.h>
+#include <asm/ultravisor.h>
+
+/* There's one dispatch log per CPU. */
+#define NR_DTL_PAGE (DISPATCH_LOG_BYTES * CONFIG_NR_CPUS / PAGE_SIZE)
+
+static struct page *dtl_page_store[NR_DTL_PAGE];
+static long dtl_nr_pages;
+
+static bool is_dtl_page_shared(struct page *page)
+{
+ long i;
+
+ for (i = 0; i < dtl_nr_pages; i++)
+ if (dtl_page_store[i] == page)
+ return true;
+
+ return false;
+}
+
+void dtl_cache_ctor(void *addr)
+{
+ unsigned long pfn = PHYS_PFN(__pa(addr));
+ struct page *page = pfn_to_page(pfn);
+
+ if (!is_dtl_page_shared(page)) {
+ dtl_page_store[dtl_nr_pages] = page;
+ dtl_nr_pages++;
+ WARN_ON(dtl_nr_pages >= NR_DTL_PAGE);
+ uv_share_page(pfn, 1);
+ }
+}
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH v2 08/13] powerpc/pseries/svm: Unshare all pages before kexecing a new kernel
2019-07-13 6:00 [PATCH v2 00/13] Secure Virtual Machine Enablement Thiago Jung Bauermann
` (6 preceding siblings ...)
2019-07-13 6:00 ` [PATCH v2 07/13] powerpc/pseries/svm: Use shared memory for Debug Trace Log (DTL) Thiago Jung Bauermann
@ 2019-07-13 6:00 ` Thiago Jung Bauermann
2019-07-13 6:00 ` [PATCH v2 09/13] powerpc/pseries/svm: Export guest SVM status to user space via sysfs Thiago Jung Bauermann
` (4 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-13 6:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
linux-kernel, Claudio Carvalho, Paul Mackerras,
Christoph Hellwig, Thiago Jung Bauermann
From: Ram Pai <linuxram@us.ibm.com>
A new kernel deserves a clean slate, so any pages shared with the
hypervisor are unshared before invoking the new kernel. There are
exceptions, however: if the new kernel is invoked to dump the current
kernel, or if there is an explicit request to preserve the state of the
current kernel, unsharing of pages is skipped.
NOTE: While testing crashkernel, make sure at least 256M is reserved for
the crashkernel. Otherwise the SWIOTLB allocation will fail and the crash
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
arch/powerpc/include/asm/ultravisor-api.h | 1 +
arch/powerpc/include/asm/ultravisor.h | 6 ++++++
arch/powerpc/kernel/machine_kexec_64.c | 8 ++++++++
3 files changed, 15 insertions(+)
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index c7513bbadf57..ab4f756cb91c 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -29,5 +29,6 @@
#define UV_UNSHARE_PAGE 0xF134
#define UV_PAGE_INVAL 0xF138
#define UV_SVM_TERMINATE 0xF13C
+#define UV_UNSHARE_ALL_PAGES 0xF140
#endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index f7418b663a0e..80d4beaf61b8 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -106,6 +106,12 @@ static inline int uv_unshare_page(u64 pfn, u64 npages)
return ucall(UV_UNSHARE_PAGE, retbuf, pfn, npages);
}
+static inline int uv_unshare_all_pages(void)
+{
+ unsigned long retbuf[UCALL_BUFSIZE];
+
+ return ucall(UV_UNSHARE_ALL_PAGES, retbuf);
+}
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
index 75692c327ba0..b3d87d32e8f7 100644
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -31,6 +31,7 @@
#include <asm/smp.h>
#include <asm/hw_breakpoint.h>
#include <asm/asm-prototypes.h>
+#include <asm/ultravisor.h>
int default_machine_kexec_prepare(struct kimage *image)
{
@@ -329,6 +330,13 @@ void default_machine_kexec(struct kimage *image)
#ifdef CONFIG_PPC_PSERIES
kexec_paca.lppaca_ptr = NULL;
#endif
+
+ if (is_secure_guest() && !(image->preserve_context ||
+ image->type == KEXEC_TYPE_CRASH)) {
+ uv_unshare_all_pages();
+ printk("kexec: Unshared all shared pages.\n");
+ }
+
paca_ptrs[kexec_paca.paca_index] = &kexec_paca;
setup_paca(&kexec_paca);
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH v2 09/13] powerpc/pseries/svm: Export guest SVM status to user space via sysfs
2019-07-13 6:00 [PATCH v2 00/13] Secure Virtual Machine Enablement Thiago Jung Bauermann
` (7 preceding siblings ...)
2019-07-13 6:00 ` [PATCH v2 08/13] powerpc/pseries/svm: Unshare all pages before kexecing a new kernel Thiago Jung Bauermann
@ 2019-07-13 6:00 ` Thiago Jung Bauermann
2019-07-13 6:00 ` [PATCH v2 10/13] powerpc/pseries/svm: Disable doorbells in SVM guests Thiago Jung Bauermann
` (3 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-13 6:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
linux-kernel, Claudio Carvalho, Ryan Grimm, Paul Mackerras,
Christoph Hellwig, Thiago Jung Bauermann
From: Ryan Grimm <grimm@linux.vnet.ibm.com>
User space might want to know whether it's running in a secure VM. It
can't read the MSR directly because mfmsr is a privileged instruction.
The solution here is to create a cpu attribute:
/sys/devices/system/cpu/svm
which will read 0 or 1 based on the S bit of the guest's CPU 0.
Signed-off-by: Ryan Grimm <grimm@linux.vnet.ibm.com>
Reviewed-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
arch/powerpc/kernel/sysfs.c | 29 +++++++++++++++++++++++++++++
1 file changed, 29 insertions(+)
diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
index e2147d7c9e72..f7100ab77d29 100644
--- a/arch/powerpc/kernel/sysfs.c
+++ b/arch/powerpc/kernel/sysfs.c
@@ -19,6 +19,7 @@
#include <asm/smp.h>
#include <asm/pmc.h>
#include <asm/firmware.h>
+#include <asm/svm.h>
#include "cacheinfo.h"
#include "setup.h"
@@ -715,6 +716,32 @@ static struct device_attribute pa6t_attrs[] = {
#endif /* HAS_PPC_PMC_PA6T */
#endif /* HAS_PPC_PMC_CLASSIC */
+#ifdef CONFIG_PPC_SVM
+static void get_svm(void *val)
+{
+ u32 *value = val;
+
+ *value = is_secure_guest();
+}
+
+static ssize_t show_svm(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ u32 val;
+ smp_call_function_single(0, get_svm, &val, 1);
+ return sprintf(buf, "%u\n", val);
+}
+static DEVICE_ATTR(svm, 0444, show_svm, NULL);
+
+static void create_svm_file(void)
+{
+ device_create_file(cpu_subsys.dev_root, &dev_attr_svm);
+}
+#else
+static void create_svm_file(void)
+{
+}
+#endif /* CONFIG_PPC_SVM */
+
static int register_cpu_online(unsigned int cpu)
{
struct cpu *c = &per_cpu(cpu_devices, cpu);
@@ -1058,6 +1085,8 @@ static int __init topology_init(void)
sysfs_create_dscr_default();
#endif /* CONFIG_PPC64 */
+ create_svm_file();
+
return 0;
}
subsys_initcall(topology_init);
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH v2 10/13] powerpc/pseries/svm: Disable doorbells in SVM guests
2019-07-13 6:00 [PATCH v2 00/13] Secure Virtual Machine Enablement Thiago Jung Bauermann
` (8 preceding siblings ...)
2019-07-13 6:00 ` [PATCH v2 09/13] powerpc/pseries/svm: Export guest SVM status to user space via sysfs Thiago Jung Bauermann
@ 2019-07-13 6:00 ` Thiago Jung Bauermann
2019-07-13 6:00 ` [PATCH v2 11/13] powerpc/pseries/iommu: Don't use dma_iommu_ops on secure guests Thiago Jung Bauermann
` (2 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-13 6:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
linux-kernel, Claudio Carvalho, Paul Mackerras,
Sukadev Bhattiprolu, Christoph Hellwig, Thiago Jung Bauermann
From: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Normally, the HV emulates some instructions, such as MSGSNDP and MSGCLRP,
on behalf of a KVM guest. To emulate an instruction, it must first read
the instruction from the guest's memory and decode its parameters.
However, for a secure guest (aka SVM), the page containing the
instruction is in secure memory, which the HV cannot access directly.
It would need the Ultravisor (UV) to facilitate access to the
instruction and its parameters, but the UV does not currently support
such accesses.
Until the UV gains that support, disable doorbells in SVMs. This might
incur a performance hit, but that is yet to be quantified.
With this patch applied (needed only in SVMs, not in the HV) we are able
to launch SVM guests with multi-core support, e.g.:
qemu -smp sockets=2,cores=2,threads=2
Fix suggested by Benjamin Herrenschmidt. Thanks to input from
Paul Mackerras, Ram Pai and Michael Anderson.
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
arch/powerpc/platforms/pseries/smp.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
index 3df46123cce3..95a5c24a1544 100644
--- a/arch/powerpc/platforms/pseries/smp.c
+++ b/arch/powerpc/platforms/pseries/smp.c
@@ -45,6 +45,7 @@
#include <asm/dbell.h>
#include <asm/plpar_wrappers.h>
#include <asm/code-patching.h>
+#include <asm/svm.h>
#include "pseries.h"
#include "offline_states.h"
@@ -225,7 +226,7 @@ static __init void pSeries_smp_probe_xics(void)
{
xics_smp_probe();
- if (cpu_has_feature(CPU_FTR_DBELL))
+ if (cpu_has_feature(CPU_FTR_DBELL) && !is_secure_guest())
smp_ops->cause_ipi = smp_pseries_cause_ipi;
else
smp_ops->cause_ipi = icp_ops->cause_ipi;
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH v2 11/13] powerpc/pseries/iommu: Don't use dma_iommu_ops on secure guests
2019-07-13 6:00 [PATCH v2 00/13] Secure Virtual Machine Enablement Thiago Jung Bauermann
` (9 preceding siblings ...)
2019-07-13 6:00 ` [PATCH v2 10/13] powerpc/pseries/svm: Disable doorbells in SVM guests Thiago Jung Bauermann
@ 2019-07-13 6:00 ` Thiago Jung Bauermann
2019-07-13 6:00 ` [PATCH v2 12/13] powerpc/pseries/svm: Force SWIOTLB for " Thiago Jung Bauermann
2019-07-13 6:00 ` [PATCH v2 13/13] powerpc/configs: Enable secure guest support in pseries and ppc64 defconfigs Thiago Jung Bauermann
12 siblings, 0 replies; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-13 6:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
linux-kernel, Claudio Carvalho, Paul Mackerras,
Christoph Hellwig, Thiago Jung Bauermann
Secure guest memory is inaccessible to devices, so regular DMA isn't
possible.
In that case, set the devices' dma_map_ops to NULL so that the generic
DMA code path will use SWIOTLB and bounce buffers for DMA.
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
arch/powerpc/platforms/pseries/iommu.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 03bbb299320e..7d9550edb700 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -50,6 +50,7 @@
#include <asm/udbg.h>
#include <asm/mmzone.h>
#include <asm/plpar_wrappers.h>
+#include <asm/svm.h>
#include "pseries.h"
@@ -1332,7 +1333,10 @@ void iommu_init_early_pSeries(void)
of_reconfig_notifier_register(&iommu_reconfig_nb);
register_memory_notifier(&iommu_mem_nb);
- set_pci_dma_ops(&dma_iommu_ops);
+ if (is_secure_guest())
+ set_pci_dma_ops(NULL);
+ else
+ set_pci_dma_ops(&dma_iommu_ops);
}
static int __init disable_multitce(char *str)
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH v2 12/13] powerpc/pseries/svm: Force SWIOTLB for secure guests
2019-07-13 6:00 [PATCH v2 00/13] Secure Virtual Machine Enablement Thiago Jung Bauermann
` (10 preceding siblings ...)
2019-07-13 6:00 ` [PATCH v2 11/13] powerpc/pseries/iommu: Don't use dma_iommu_ops on secure guests Thiago Jung Bauermann
@ 2019-07-13 6:00 ` Thiago Jung Bauermann
2019-07-13 6:00 ` [PATCH v2 13/13] powerpc/configs: Enable secure guest support in pseries and ppc64 defconfigs Thiago Jung Bauermann
12 siblings, 0 replies; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-13 6:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
linux-kernel, Claudio Carvalho, Paul Mackerras,
Christoph Hellwig, Thiago Jung Bauermann, Anshuman Khandual
From: Anshuman Khandual <khandual@linux.vnet.ibm.com>
SWIOTLB checks the range of incoming CPU addresses to be bounced and, if
the device can access an address through its DMA window without bouncing,
it simply skips the bounce. But for secure guests on the powerpc platform,
all addresses need to be bounced into the shared pool of memory because
the host cannot access guest memory otherwise. Hence the need to bounce is
unrelated to the device's DMA window, and the use of bounce buffers is
forced by setting swiotlb_force.
Also, connect the shared memory conversion functions into the
ARCH_HAS_MEM_ENCRYPT hooks and call swiotlb_update_mem_attributes() to
convert SWIOTLB's memory pool to shared memory.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
[ bauerman: Use ARCH_HAS_MEM_ENCRYPT hooks to share swiotlb memory pool. ]
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
arch/powerpc/include/asm/mem_encrypt.h | 26 +++++++++++++++
arch/powerpc/platforms/pseries/Kconfig | 3 ++
arch/powerpc/platforms/pseries/svm.c | 45 ++++++++++++++++++++++++++
3 files changed, 74 insertions(+)
diff --git a/arch/powerpc/include/asm/mem_encrypt.h b/arch/powerpc/include/asm/mem_encrypt.h
new file mode 100644
index 000000000000..d8d5a7fcf298
--- /dev/null
+++ b/arch/powerpc/include/asm/mem_encrypt.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * SVM helper functions
+ *
+ * Copyright 2019 IBM Corporation
+ */
+
+#ifndef _ASM_POWERPC_MEM_ENCRYPT_H
+#define _ASM_POWERPC_MEM_ENCRYPT_H
+
+#include <asm/svm.h>
+
+static inline bool mem_encrypt_active(void)
+{
+ return is_secure_guest();
+}
+
+static inline bool force_dma_unencrypted(struct device *dev)
+{
+ return is_secure_guest();
+}
+
+int set_memory_encrypted(unsigned long addr, int numpages);
+int set_memory_decrypted(unsigned long addr, int numpages);
+
+#endif /* _ASM_POWERPC_MEM_ENCRYPT_H */
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index d09deb05bb66..9e35cddddf73 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -149,6 +149,9 @@ config PAPR_SCM
config PPC_SVM
bool "Secure virtual machine (SVM) support for POWER"
depends on PPC_PSERIES
+ select SWIOTLB
+ select ARCH_HAS_MEM_ENCRYPT
+ select ARCH_HAS_FORCE_DMA_UNENCRYPTED
help
There are certain POWER platforms which support secure guests using
the Protected Execution Facility, with the help of an Ultravisor
diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index c508196f7c83..618622d636d5 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -7,8 +7,53 @@
*/
#include <linux/mm.h>
+#include <asm/machdep.h>
+#include <asm/svm.h>
+#include <asm/swiotlb.h>
#include <asm/ultravisor.h>
+static int __init init_svm(void)
+{
+ if (!is_secure_guest())
+ return 0;
+
+ /* Don't release the SWIOTLB buffer. */
+ ppc_swiotlb_enable = 1;
+
+ /*
+ * Since the guest memory is inaccessible to the host, devices always
+ * need to use the SWIOTLB buffer for DMA even if dma_capable() says
+ * otherwise.
+ */
+ swiotlb_force = SWIOTLB_FORCE;
+
+ /* Share the SWIOTLB buffer with the host. */
+ swiotlb_update_mem_attributes();
+
+ return 0;
+}
+machine_early_initcall(pseries, init_svm);
+
+int set_memory_encrypted(unsigned long addr, int numpages)
+{
+ if (!PAGE_ALIGNED(addr))
+ return -EINVAL;
+
+ uv_unshare_page(PHYS_PFN(__pa(addr)), numpages);
+
+ return 0;
+}
+
+int set_memory_decrypted(unsigned long addr, int numpages)
+{
+ if (!PAGE_ALIGNED(addr))
+ return -EINVAL;
+
+ uv_share_page(PHYS_PFN(__pa(addr)), numpages);
+
+ return 0;
+}
+
/* There's one dispatch log per CPU. */
#define NR_DTL_PAGE (DISPATCH_LOG_BYTES * CONFIG_NR_CPUS / PAGE_SIZE)
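The alignment check and PFN computation in set_memory_decrypted() above can
be exercised in isolation. The sketch below assumes 64 KiB pages, uses an
identity stub for `__pa()`, and records the would-be ucall arguments instead
of invoking an Ultravisor; only the control flow mirrors the patch.

```c
#include <assert.h>

/* Assumed page geometry: 64 KiB pages (PAGE_SHIFT = 16). */
#define PAGE_SHIFT 16
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_ALIGNED(addr) (((addr) & (PAGE_SIZE - 1)) == 0)
#define PHYS_PFN(pa) ((pa) >> PAGE_SHIFT)
#define EINVAL 22

/* Stub for virtual-to-physical translation; identity for the sketch. */
static unsigned long __pa(unsigned long addr) { return addr; }

/* Record the last share request so the logic is checkable without
 * an Ultravisor present. */
static unsigned long last_pfn;
static int last_numpages;

static void uv_share_page(unsigned long pfn, int numpages)
{
	last_pfn = pfn;
	last_numpages = numpages;
}

/* Mirrors set_memory_decrypted() above: reject unaligned addresses,
 * then share the page range with the host. */
static int set_memory_decrypted(unsigned long addr, int numpages)
{
	if (!PAGE_ALIGNED(addr))
		return -EINVAL;

	uv_share_page(PHYS_PFN(__pa(addr)), numpages);
	return 0;
}
```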
* [PATCH v2 13/13] powerpc/configs: Enable secure guest support in pseries and ppc64 defconfigs
2019-07-13 6:00 [PATCH v2 00/13] Secure Virtual Machine Enablement Thiago Jung Bauermann
` (11 preceding siblings ...)
2019-07-13 6:00 ` [PATCH v2 12/13] powerpc/pseries/svm: Force SWIOTLB for " Thiago Jung Bauermann
@ 2019-07-13 6:00 ` Thiago Jung Bauermann
12 siblings, 0 replies; 21+ messages in thread
From: Thiago Jung Bauermann @ 2019-07-13 6:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: Anshuman Khandual, Alexey Kardashevskiy, Mike Anderson, Ram Pai,
linux-kernel, Claudio Carvalho, Ryan Grimm, Paul Mackerras,
Christoph Hellwig, Thiago Jung Bauermann
From: Ryan Grimm <grimm@linux.vnet.ibm.com>
Enable running as a secure guest on platforms with an Ultravisor.
Signed-off-by: Ryan Grimm <grimm@linux.vnet.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
---
arch/powerpc/configs/ppc64_defconfig | 1 +
arch/powerpc/configs/pseries_defconfig | 1 +
2 files changed, 2 insertions(+)
diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig
index c63b019ba753..0b04c5ca9e13 100644
--- a/arch/powerpc/configs/ppc64_defconfig
+++ b/arch/powerpc/configs/ppc64_defconfig
@@ -31,6 +31,7 @@ CONFIG_DTL=y
CONFIG_SCANLOG=m
CONFIG_PPC_SMLPAR=y
CONFIG_IBMEBUS=y
+CONFIG_PPC_SVM=y
CONFIG_PPC_MAPLE=y
CONFIG_PPC_PASEMI=y
CONFIG_PPC_PASEMI_IOMMU=y
diff --git a/arch/powerpc/configs/pseries_defconfig b/arch/powerpc/configs/pseries_defconfig
index 38abc9c1770a..26126b4d4de3 100644
--- a/arch/powerpc/configs/pseries_defconfig
+++ b/arch/powerpc/configs/pseries_defconfig
@@ -42,6 +42,7 @@ CONFIG_DTL=y
CONFIG_SCANLOG=m
CONFIG_PPC_SMLPAR=y
CONFIG_IBMEBUS=y
+CONFIG_PPC_SVM=y
# CONFIG_PPC_PMAC is not set
CONFIG_RTAS_FLASH=m
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y