linuxppc-dev.lists.ozlabs.org archive mirror
* [RFC PATCH v2 00/10] kvmppc: Paravirtualize KVM to support ultravisor
@ 2019-05-18 14:25 Claudio Carvalho
  2019-05-18 14:25 ` [RFC PATCH v2 01/10] KVM: PPC: Ultravisor: Add PPC_UV config option Claudio Carvalho
                   ` (9 more replies)
  0 siblings, 10 replies; 16+ messages in thread
From: Claudio Carvalho @ 2019-05-18 14:25 UTC (permalink / raw)
  To: Paul Mackerras, Michael Ellerman, kvm-ppc, linuxppc-dev
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, Bharata B Rao,
	Sukadev Bhattiprolu, Thiago Jung Bauermann, Anshuman Khandual

POWER platforms that support the Protected Execution Facility (PEF)
introduce features that combine hardware facilities and firmware to
enable secure virtual machines. These include a new processor mode
(ultravisor mode) and the ultravisor firmware.

On PEF-enabled systems, the ultravisor firmware runs at a privilege
level above the hypervisor and also takes control over some system
resources. The hypervisor, though, can make system calls to access these
resources. Such system calls, a.k.a. ucalls, are handled by the
ultravisor firmware.

The processor allows part of the system memory to be configured as
secure memory, and introduces a new mode, called secure mode, in which
any software entity can access secure memory. The hypervisor doesn't
(and can't) run in secure mode, but a secure guest and the ultravisor
firmware do.

This patch set adds support for ultravisor calls and does some
preparation for running secure guests.

---
Changelog:
---
v1->v2:
 - Addressed comments from Paul Mackerras:
     - Write the pate in HV's table before doing that in UV's
     - Renamed and better documented the ultravisor header files. Also added
       all possible return codes for each ucall
     - Updated the commit message that introduces the MSR_S bit
     - Moved ultravisor.c and ucall.S to arch/powerpc/kernel
     - Changed ucall.S to not save CR
 - Rebased
 - Changed the patches order
 - Updated several commit messages
 - Added FW_FEATURE_ULTRAVISOR to enable use of firmware_has_feature()
 - Renamed CONFIG_PPC_KVM_UV to CONFIG_PPC_UV and used it to ifdef the ucall
   handler and the code that populates the powerpc_firmware_features for 
   ultravisor
 - Exported the ucall symbol. KVM may be built as module.
 - Restricted LDBAR access if the ultravisor firmware is available
 - Dropped patches:
     "[PATCH 06/13] KVM: PPC: Ultravisor: UV_RESTRICTED_SPR_WRITE ucall"
     "[PATCH 07/13] KVM: PPC: Ultravisor: UV_RESTRICTED_SPR_READ ucall"
     "[PATCH 08/13] KVM: PPC: Ultravisor: fix mtspr and mfspr"
 - Squashed patches:
     "[PATCH 09/13] KVM: PPC: Ultravisor: Return to UV for hcalls from SVM"
     "[PATCH 13/13] KVM: PPC: UV: Have fast_guest_return check secure_guest"

Anshuman Khandual (1):
  KVM: PPC: Ultravisor: Add PPC_UV config option

Claudio Carvalho (1):
  powerpc: Introduce FW_FEATURE_ULTRAVISOR

Michael Anderson (2):
  KVM: PPC: Ultravisor: Use UV_WRITE_PATE ucall to register a PATE
  KVM: PPC: Ultravisor: Check for MSR_S during hv_reset_msr

Paul Mackerras (1):
  KVM: PPC: Book3S HV: Fixed for running secure guests

Ram Pai (3):
  KVM: PPC: Ultravisor: Add generic ultravisor call handler
  KVM: PPC: Ultravisor: Restrict flush of the partition tlb cache
  KVM: PPC: Ultravisor: Restrict LDBAR access

Sukadev Bhattiprolu (2):
  KVM: PPC: Ultravisor: Introduce the MSR_S bit
  KVM: PPC: Ultravisor: Return to UV for hcalls from SVM

 arch/powerpc/Kconfig                         | 20 ++++++
 arch/powerpc/include/asm/firmware.h          |  5 +-
 arch/powerpc/include/asm/kvm_host.h          |  1 +
 arch/powerpc/include/asm/reg.h               |  3 +
 arch/powerpc/include/asm/ultravisor-api.h    | 24 ++++++++
 arch/powerpc/include/asm/ultravisor.h        | 49 +++++++++++++++
 arch/powerpc/kernel/Makefile                 |  1 +
 arch/powerpc/kernel/asm-offsets.c            |  1 +
 arch/powerpc/kernel/prom.c                   |  6 ++
 arch/powerpc/kernel/ucall.S                  | 31 ++++++++++
 arch/powerpc/kernel/ultravisor.c             | 30 +++++++++
 arch/powerpc/kvm/book3s_64_mmu_hv.c          |  1 +
 arch/powerpc/kvm/book3s_hv.c                 |  4 +-
 arch/powerpc/kvm/book3s_hv_rmhandlers.S      | 40 ++++++++++--
 arch/powerpc/mm/book3s64/hash_utils.c        |  3 +-
 arch/powerpc/mm/book3s64/pgtable.c           | 65 +++++++++++++++-----
 arch/powerpc/mm/book3s64/radix_pgtable.c     |  9 ++-
 arch/powerpc/perf/imc-pmu.c                  | 64 +++++++++++--------
 arch/powerpc/platforms/powernv/idle.c        |  6 +-
 arch/powerpc/platforms/powernv/subcore-asm.S |  4 ++
 20 files changed, 311 insertions(+), 56 deletions(-)
 create mode 100644 arch/powerpc/include/asm/ultravisor-api.h
 create mode 100644 arch/powerpc/include/asm/ultravisor.h
 create mode 100644 arch/powerpc/kernel/ucall.S
 create mode 100644 arch/powerpc/kernel/ultravisor.c

-- 
2.20.1



* [RFC PATCH v2 01/10] KVM: PPC: Ultravisor: Add PPC_UV config option
  2019-05-18 14:25 [RFC PATCH v2 00/10] kvmppc: Paravirtualize KVM to support ultravisor Claudio Carvalho
@ 2019-05-18 14:25 ` Claudio Carvalho
  2019-05-18 14:25 ` [RFC PATCH v2 02/10] KVM: PPC: Ultravisor: Introduce the MSR_S bit Claudio Carvalho
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Claudio Carvalho @ 2019-05-18 14:25 UTC (permalink / raw)
  To: Paul Mackerras, Michael Ellerman, kvm-ppc, linuxppc-dev
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, Bharata B Rao,
	Sukadev Bhattiprolu, Thiago Jung Bauermann, Anshuman Khandual

From: Anshuman Khandual <khandual@linux.vnet.ibm.com>

CONFIG_PPC_UV adds support for the ultravisor.

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
[Update config help and commit message]
Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
---
 arch/powerpc/Kconfig | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 2711aac24621..894171c863bc 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -438,6 +438,26 @@ config PPC_TRANSACTIONAL_MEM
        ---help---
          Support user-mode Transactional Memory on POWERPC.
 
+config PPC_UV
+	bool "Ultravisor support"
+	depends on KVM_BOOK3S_HV_POSSIBLE
+	select HMM_MIRROR
+	select HMM
+	select ZONE_DEVICE
+	select MIGRATE_VMA_HELPER
+	select DEV_PAGEMAP_OPS
+	select DEVICE_PRIVATE
+	select MEMORY_HOTPLUG
+	select MEMORY_HOTREMOVE
+	default n
+	help
+	  This option paravirtualizes the kernel to run on POWER platforms that
+	  support the Protected Execution Facility (PEF). On such platforms,
+	  the ultravisor firmware runs at a privilege level above the
+	  hypervisor.
+
+	  If unsure, say "N".
+
 config LD_HEAD_STUB_CATCH
 	bool "Reserve 256 bytes to cope with linker stubs in HEAD text" if EXPERT
 	depends on PPC64
-- 
2.20.1



* [RFC PATCH v2 02/10] KVM: PPC: Ultravisor: Introduce the MSR_S bit
  2019-05-18 14:25 [RFC PATCH v2 00/10] kvmppc: Paravirtualize KVM to support ultravisor Claudio Carvalho
  2019-05-18 14:25 ` [RFC PATCH v2 01/10] KVM: PPC: Ultravisor: Add PPC_UV config option Claudio Carvalho
@ 2019-05-18 14:25 ` Claudio Carvalho
  2019-05-18 14:25 ` [RFC PATCH v2 03/10] powerpc: Introduce FW_FEATURE_ULTRAVISOR Claudio Carvalho
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Claudio Carvalho @ 2019-05-18 14:25 UTC (permalink / raw)
  To: Paul Mackerras, Michael Ellerman, kvm-ppc, linuxppc-dev
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, Bharata B Rao,
	Sukadev Bhattiprolu, Thiago Jung Bauermann, Anshuman Khandual

From: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>

The ultravisor processor mode is introduced on POWER platforms that
support the Protected Execution Facility (PEF). Ultravisor mode is more
privileged than hypervisor mode.

On PEF-enabled platforms, the MSR_S bit indicates whether the thread is
in secure state. With the MSR_S bit, the privilege state of the thread
is now determined by MSR_S, MSR_HV and MSR_PR, as follows:

S   HV  PR
-----------------------
0   x   1   problem
1   0   1   problem
x   x   0   privileged
x   1   0   hypervisor
1   1   0   ultravisor
1   1   1   reserved

The hypervisor doesn't (and can't) run with the MSR_S bit set, but a
secure guest and the ultravisor firmware do.
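
For illustration only, the table above can be decoded in C roughly as
follows (msr_privilege_state() is a hypothetical helper, not part of
this patch):

#include <asm/reg.h>

/* Hypothetical helper, only to illustrate the privilege table above */
static const char *msr_privilege_state(unsigned long msr)
{
	if (msr & MSR_PR)		/* note: S=1, HV=1, PR=1 is reserved */
		return "problem";
	if ((msr & (MSR_S | MSR_HV)) == (MSR_S | MSR_HV))
		return "ultravisor";
	if (msr & MSR_HV)
		return "hypervisor";
	return "privileged";
}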

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
[Update the commit message]
Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
---
 arch/powerpc/include/asm/reg.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 10caa145f98b..39b4c0a519f5 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -38,6 +38,7 @@
 #define MSR_TM_LG	32		/* Trans Mem Available */
 #define MSR_VEC_LG	25	        /* Enable AltiVec */
 #define MSR_VSX_LG	23		/* Enable VSX */
+#define MSR_S_LG	22		/* Secure VM bit */
 #define MSR_POW_LG	18		/* Enable Power Management */
 #define MSR_WE_LG	18		/* Wait State Enable */
 #define MSR_TGPR_LG	17		/* TLB Update registers in use */
@@ -71,11 +72,13 @@
 #define MSR_SF		__MASK(MSR_SF_LG)	/* Enable 64 bit mode */
 #define MSR_ISF		__MASK(MSR_ISF_LG)	/* Interrupt 64b mode valid on 630 */
 #define MSR_HV 		__MASK(MSR_HV_LG)	/* Hypervisor state */
+#define MSR_S		__MASK(MSR_S_LG)	/* Secure state */
 #else
 /* so tests for these bits fail on 32-bit */
 #define MSR_SF		0
 #define MSR_ISF		0
 #define MSR_HV		0
+#define MSR_S		0
 #endif
 
 /*
-- 
2.20.1



* [RFC PATCH v2 03/10] powerpc: Introduce FW_FEATURE_ULTRAVISOR
  2019-05-18 14:25 [RFC PATCH v2 00/10] kvmppc: Paravirtualize KVM to support ultravisor Claudio Carvalho
  2019-05-18 14:25 ` [RFC PATCH v2 01/10] KVM: PPC: Ultravisor: Add PPC_UV config option Claudio Carvalho
  2019-05-18 14:25 ` [RFC PATCH v2 02/10] KVM: PPC: Ultravisor: Introduce the MSR_S bit Claudio Carvalho
@ 2019-05-18 14:25 ` Claudio Carvalho
  2019-05-18 14:25 ` [RFC PATCH v2 04/10] KVM: PPC: Ultravisor: Add generic ultravisor call handler Claudio Carvalho
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Claudio Carvalho @ 2019-05-18 14:25 UTC (permalink / raw)
  To: Paul Mackerras, Michael Ellerman, kvm-ppc, linuxppc-dev
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, Bharata B Rao,
	Sukadev Bhattiprolu, Thiago Jung Bauermann, Anshuman Khandual

This feature indicates whether the ultravisor firmware is available to
handle ucalls.
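
As an illustrative sketch of the intended use (example_set_ptcr() is a
hypothetical function, not part of this series), later patches guard
direct access to ultravisor-owned resources like this:

#include <asm/firmware.h>
#include <asm/reg.h>

/* Hypothetical example: skip direct SPR access when the ultravisor owns it */
static void example_set_ptcr(unsigned long ptcr)
{
	if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
		mtspr(SPRN_PTCR, ptcr);
}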

Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
[Device node name to "ibm,ultravisor"]
Signed-off-by: Michael Anderson <andmike@linux.ibm.com>
---
 arch/powerpc/include/asm/firmware.h   |  5 +++--
 arch/powerpc/include/asm/ultravisor.h | 15 +++++++++++++++
 arch/powerpc/kernel/Makefile          |  1 +
 arch/powerpc/kernel/prom.c            |  6 ++++++
 arch/powerpc/kernel/ultravisor.c      | 26 ++++++++++++++++++++++++++
 5 files changed, 51 insertions(+), 2 deletions(-)
 create mode 100644 arch/powerpc/include/asm/ultravisor.h
 create mode 100644 arch/powerpc/kernel/ultravisor.c

diff --git a/arch/powerpc/include/asm/firmware.h b/arch/powerpc/include/asm/firmware.h
index 00bc42d95679..43b48c4d3ca9 100644
--- a/arch/powerpc/include/asm/firmware.h
+++ b/arch/powerpc/include/asm/firmware.h
@@ -54,6 +54,7 @@
 #define FW_FEATURE_DRC_INFO	ASM_CONST(0x0000000800000000)
 #define FW_FEATURE_BLOCK_REMOVE ASM_CONST(0x0000001000000000)
 #define FW_FEATURE_PAPR_SCM 	ASM_CONST(0x0000002000000000)
+#define FW_FEATURE_ULTRAVISOR	ASM_CONST(0x0000004000000000)
 
 #ifndef __ASSEMBLY__
 
@@ -72,9 +73,9 @@ enum {
 		FW_FEATURE_TYPE1_AFFINITY | FW_FEATURE_PRRN |
 		FW_FEATURE_HPT_RESIZE | FW_FEATURE_DRMEM_V2 |
 		FW_FEATURE_DRC_INFO | FW_FEATURE_BLOCK_REMOVE |
-		FW_FEATURE_PAPR_SCM,
+		FW_FEATURE_PAPR_SCM | FW_FEATURE_ULTRAVISOR,
 	FW_FEATURE_PSERIES_ALWAYS = 0,
-	FW_FEATURE_POWERNV_POSSIBLE = FW_FEATURE_OPAL,
+	FW_FEATURE_POWERNV_POSSIBLE = FW_FEATURE_OPAL | FW_FEATURE_ULTRAVISOR,
 	FW_FEATURE_POWERNV_ALWAYS = 0,
 	FW_FEATURE_PS3_POSSIBLE = FW_FEATURE_LPAR | FW_FEATURE_PS3_LV1,
 	FW_FEATURE_PS3_ALWAYS = FW_FEATURE_LPAR | FW_FEATURE_PS3_LV1,
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
new file mode 100644
index 000000000000..e5009b0d84ea
--- /dev/null
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Ultravisor definitions
+ *
+ * Copyright 2019, IBM Corporation.
+ *
+ */
+#ifndef _ASM_POWERPC_ULTRAVISOR_H
+#define _ASM_POWERPC_ULTRAVISOR_H
+
+/* Internal functions */
+extern int early_init_dt_scan_ultravisor(unsigned long node, const char *uname,
+					 int depth, void *data);
+
+#endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 0ea6c4aa3a20..c8ca219e54bf 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -154,6 +154,7 @@ endif
 
 obj-$(CONFIG_EPAPR_PARAVIRT)	+= epapr_paravirt.o epapr_hcalls.o
 obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvm_emul.o
+obj-$(CONFIG_PPC_UV)		+= ultravisor.o
 
 # Disable GCOV, KCOV & sanitizers in odd or sensitive code
 GCOV_PROFILE_prom_init.o := n
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 4221527b082f..8a9a8a319959 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -59,6 +59,7 @@
 #include <asm/firmware.h>
 #include <asm/dt_cpu_ftrs.h>
 #include <asm/drmem.h>
+#include <asm/ultravisor.h>
 
 #include <mm/mmu_decl.h>
 
@@ -713,6 +714,11 @@ void __init early_init_devtree(void *params)
 	of_scan_flat_dt(early_init_dt_scan_fw_dump, NULL);
 #endif
 
+#if defined(CONFIG_PPC_UV)
+	/* Scan tree for ultravisor feature */
+	of_scan_flat_dt(early_init_dt_scan_ultravisor, NULL);
+#endif
+
 	/* Retrieve various informations from the /chosen node of the
 	 * device-tree, including the platform type, initrd location and
 	 * size, TCE reserve, and more ...
diff --git a/arch/powerpc/kernel/ultravisor.c b/arch/powerpc/kernel/ultravisor.c
new file mode 100644
index 000000000000..ac23835bdf5a
--- /dev/null
+++ b/arch/powerpc/kernel/ultravisor.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Ultravisor high level interfaces
+ *
+ * Copyright 2019, IBM Corporation.
+ *
+ */
+#include <linux/init.h>
+#include <linux/printk.h>
+#include <linux/string.h>
+
+#include <asm/ultravisor.h>
+#include <asm/firmware.h>
+
+int __init early_init_dt_scan_ultravisor(unsigned long node, const char *uname,
+					 int depth, void *data)
+{
+	if (depth != 1 || strcmp(uname, "ibm,ultravisor") != 0)
+		return 0;
+
+	/* TODO: check the compatible devtree property once it is created */
+
+	powerpc_firmware_features |= FW_FEATURE_ULTRAVISOR;
+	pr_debug("Ultravisor detected!\n");
+	return 1;
+}
-- 
2.20.1



* [RFC PATCH v2 04/10] KVM: PPC: Ultravisor: Add generic ultravisor call handler
  2019-05-18 14:25 [RFC PATCH v2 00/10] kvmppc: Paravirtualize KVM to support ultravisor Claudio Carvalho
                   ` (2 preceding siblings ...)
  2019-05-18 14:25 ` [RFC PATCH v2 03/10] powerpc: Introduce FW_FEATURE_ULTRAVISOR Claudio Carvalho
@ 2019-05-18 14:25 ` Claudio Carvalho
  2019-05-18 14:25 ` [RFC PATCH v2 05/10] KVM: PPC: Ultravisor: Use UV_WRITE_PATE ucall to register a PATE Claudio Carvalho
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Claudio Carvalho @ 2019-05-18 14:25 UTC (permalink / raw)
  To: Paul Mackerras, Michael Ellerman, kvm-ppc, linuxppc-dev
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, Bharata B Rao,
	Sukadev Bhattiprolu, Thiago Jung Bauermann, Anshuman Khandual

From: Ram Pai <linuxram@us.ibm.com>

Add the ucall() function, which can be used to make ultravisor calls
with a varying number of input and output arguments. Ultravisor calls
can be made from the host or from guests.

This copies the implementation of plpar_hcall().
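
As an illustrative usage sketch (UV_EXAMPLE_OP and example_ucall() are
made up for illustration and are not part of this series), a caller
uses ucall() and the UCALL_BUFSIZE return buffer like this:

#include <linux/types.h>
#include <asm/ultravisor.h>

#define UV_EXAMPLE_OP	0xF1FF	/* hypothetical opcode */

static long example_ucall(u64 arg)
{
	unsigned long retbuf[UCALL_BUFSIZE];
	long rc;

	/* up to 6 input arguments; up to 4 results come back in retbuf[] */
	rc = ucall(UV_EXAMPLE_OP, retbuf, arg);

	return rc;	/* U_SUCCESS, U_PARAMETER, ... */
}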

Signed-off-by: Ram Pai <linuxram@us.ibm.com>
[Change ucall.S to not save CR, rename and move the headers, build
 ucall.S if CONFIG_PPC_UV set, and add some comments in the code]
Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
---
 arch/powerpc/include/asm/ultravisor-api.h | 20 +++++++++++++++
 arch/powerpc/include/asm/ultravisor.h     | 25 ++++++++++++++++++
 arch/powerpc/kernel/Makefile              |  2 +-
 arch/powerpc/kernel/ucall.S               | 31 +++++++++++++++++++++++
 arch/powerpc/kernel/ultravisor.c          |  4 +++
 5 files changed, 81 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/include/asm/ultravisor-api.h
 create mode 100644 arch/powerpc/kernel/ucall.S

diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
new file mode 100644
index 000000000000..5f538f33c704
--- /dev/null
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Ultravisor calls.
+ *
+ * Copyright 2019, IBM Corporation.
+ *
+ */
+#ifndef _ASM_POWERPC_ULTRAVISOR_API_H
+#define _ASM_POWERPC_ULTRAVISOR_API_H
+
+#include <asm/hvcall.h>
+
+/* Return codes */
+#define U_NOT_AVAILABLE		H_NOT_AVAILABLE
+#define U_SUCCESS		H_SUCCESS
+#define U_FUNCTION		H_FUNCTION
+#define U_PARAMETER		H_PARAMETER
+
+#endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
+
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index e5009b0d84ea..e8abc1bbc194 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -8,8 +8,33 @@
 #ifndef _ASM_POWERPC_ULTRAVISOR_H
 #define _ASM_POWERPC_ULTRAVISOR_H
 
+#include <asm/ultravisor-api.h>
+
+#if !defined(__ASSEMBLY__)
+
 /* Internal functions */
 extern int early_init_dt_scan_ultravisor(unsigned long node, const char *uname,
 					 int depth, void *data);
 
+/* API functions */
+#define UCALL_BUFSIZE 4
+/**
+ * ucall: Make a powerpc ultravisor call.
+ * @opcode: The ultravisor call to make.
+ * @retbuf: Buffer to store up to 4 return arguments in.
+ *
+ * This call supports up to 6 arguments and 4 return arguments. Use
+ * UCALL_BUFSIZE to size the return argument buffer.
+ */
+#if defined(CONFIG_PPC_UV)
+long ucall(unsigned long opcode, unsigned long *retbuf, ...);
+#else
+static long ucall(unsigned long opcode, unsigned long *retbuf, ...)
+{
+	return U_NOT_AVAILABLE;
+}
+#endif
+
+#endif /* !__ASSEMBLY__ */
+
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index c8ca219e54bf..43ff4546e469 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -154,7 +154,7 @@ endif
 
 obj-$(CONFIG_EPAPR_PARAVIRT)	+= epapr_paravirt.o epapr_hcalls.o
 obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvm_emul.o
-obj-$(CONFIG_PPC_UV)		+= ultravisor.o
+obj-$(CONFIG_PPC_UV)		+= ultravisor.o ucall.o
 
 # Disable GCOV, KCOV & sanitizers in odd or sensitive code
 GCOV_PROFILE_prom_init.o := n
diff --git a/arch/powerpc/kernel/ucall.S b/arch/powerpc/kernel/ucall.S
new file mode 100644
index 000000000000..ecc88998a13b
--- /dev/null
+++ b/arch/powerpc/kernel/ucall.S
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Generic code to perform an ultravisor call.
+ *
+ * Copyright 2019, IBM Corporation.
+ *
+ */
+#include <asm/ppc_asm.h>
+
+/*
+ * This function is based on the plpar_hcall()
+ */
+_GLOBAL_TOC(ucall)
+	mr	r0,r3
+	std     r4,STK_PARAM(R4)(r1)     /* Save ret buffer */
+	mr	r3,r5
+	mr	r4,r6
+	mr	r5,r7
+	mr	r6,r8
+	mr	r7,r9
+	mr	r8,r10
+
+	sc 2				/* invoke the ultravisor */
+
+	ld	r12,STK_PARAM(R4)(r1)
+	std	r4,  0(r12)
+	std	r5,  8(r12)
+	std	r6, 16(r12)
+	std	r7, 24(r12)
+
+	blr				/* return r3 = status */
diff --git a/arch/powerpc/kernel/ultravisor.c b/arch/powerpc/kernel/ultravisor.c
index ac23835bdf5a..9fbf0804ee4e 100644
--- a/arch/powerpc/kernel/ultravisor.c
+++ b/arch/powerpc/kernel/ultravisor.c
@@ -8,10 +8,14 @@
 #include <linux/init.h>
 #include <linux/printk.h>
 #include <linux/string.h>
+#include <linux/export.h>
 
 #include <asm/ultravisor.h>
 #include <asm/firmware.h>
 
+/* in ucall.S */
+EXPORT_SYMBOL_GPL(ucall);
+
 int __init early_init_dt_scan_ultravisor(unsigned long node, const char *uname,
 					 int depth, void *data)
 {
-- 
2.20.1



* [RFC PATCH v2 05/10] KVM: PPC: Ultravisor: Use UV_WRITE_PATE ucall to register a PATE
  2019-05-18 14:25 [RFC PATCH v2 00/10] kvmppc: Paravirtualize KVM to support ultravisor Claudio Carvalho
                   ` (3 preceding siblings ...)
  2019-05-18 14:25 ` [RFC PATCH v2 04/10] KVM: PPC: Ultravisor: Add generic ultravisor call handler Claudio Carvalho
@ 2019-05-18 14:25 ` Claudio Carvalho
  2019-05-18 14:25 ` [RFC PATCH v2 06/10] KVM: PPC: Ultravisor: Restrict flush of the partition tlb cache Claudio Carvalho
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Claudio Carvalho @ 2019-05-18 14:25 UTC (permalink / raw)
  To: Paul Mackerras, Michael Ellerman, kvm-ppc, linuxppc-dev
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, Bharata B Rao,
	Ryan Grimm, Sukadev Bhattiprolu, Thiago Jung Bauermann,
	Anshuman Khandual

From: Michael Anderson <andmike@linux.ibm.com>

When running under an ultravisor, the ultravisor controls the real
partition table and keeps it in secure memory, where the hypervisor
can't access it. Therefore, we (the HV) have to do a ucall whenever we
want to update an entry.

The HV still keeps a copy of its view of the partition table in normal
memory so that the nest MMU can access it.

Both partition tables will have PATE entries for HV and normal virtual
machines.
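
In rough outline (a sketch of the flow this patch implements, not the
literal code; see the pgtable.c hunk below), a PATE update now looks
like:

/* Sketch of mmu_partition_table_set_entry() after this patch */
static void example_set_pate(unsigned int lpid, unsigned long dw0,
			     unsigned long dw1)
{
	/* always update the HV copy so the nest MMU can use it */
	__mmu_partition_table_set_entry(lpid, dw0, dw1);

	/* if an ultravisor is present, ask it to update the real table */
	if (firmware_has_feature(FW_FEATURE_ULTRAVISOR))
		uv_register_pate(lpid, dw0, dw1);
}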

Suggested-by: Ryan Grimm <grimm@linux.vnet.ibm.com>
Signed-off-by: Michael Anderson <andmike@linux.ibm.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
[Write the pate in HV's table before doing that in UV's]
Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
---
 arch/powerpc/include/asm/ultravisor-api.h |  5 +++-
 arch/powerpc/include/asm/ultravisor.h     |  9 ++++++
 arch/powerpc/mm/book3s64/hash_utils.c     |  3 +-
 arch/powerpc/mm/book3s64/pgtable.c        | 34 +++++++++++++++++++++--
 arch/powerpc/mm/book3s64/radix_pgtable.c  |  9 ++++--
 5 files changed, 52 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 5f538f33c704..24bfb4c1737e 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -15,6 +15,9 @@
 #define U_SUCCESS		H_SUCCESS
 #define U_FUNCTION		H_FUNCTION
 #define U_PARAMETER		H_PARAMETER
+#define U_PERMISSION		H_PERMISSION
 
-#endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
+/* opcodes */
+#define UV_WRITE_PATE			0xF104
 
+#endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index e8abc1bbc194..4ffec7a36acd 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -12,6 +12,8 @@
 
 #if !defined(__ASSEMBLY__)
 
+#include <linux/types.h>
+
 /* Internal functions */
 extern int early_init_dt_scan_ultravisor(unsigned long node, const char *uname,
 					 int depth, void *data);
@@ -35,6 +37,13 @@ static long ucall(unsigned long opcode, unsigned long *retbuf, ...)
 }
 #endif
 
+static inline int uv_register_pate(u64 lpid, u64 dw0, u64 dw1)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_WRITE_PATE, retbuf, lpid, dw0, dw1);
+}
+
 #endif /* !__ASSEMBLY__ */
 
 #endif	/* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 919a861a8ec0..8419e665fab0 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1080,9 +1080,10 @@ void hash__early_init_mmu_secondary(void)
 
 		if (!cpu_has_feature(CPU_FTR_ARCH_300))
 			mtspr(SPRN_SDR1, _SDR1);
-		else
+		else if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
 			mtspr(SPRN_PTCR,
 			      __pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
+
 	}
 	/* Initialize SLB */
 	slb_initialize();
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 16bda049187a..40a9fc8b139f 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -16,6 +16,8 @@
 #include <asm/tlb.h>
 #include <asm/trace.h>
 #include <asm/powernv.h>
+#include <asm/firmware.h>
+#include <asm/ultravisor.h>
 
 #include <mm/mmu_decl.h>
 #include <trace/events/thp.h>
@@ -206,12 +208,25 @@ void __init mmu_partition_table_init(void)
 	 * 64 K size.
 	 */
 	ptcr = __pa(partition_tb) | (PATB_SIZE_SHIFT - 12);
-	mtspr(SPRN_PTCR, ptcr);
+	/*
+	 * If the ultravisor is available, it is responsible for creating
+	 * and managing the partition table
+	 */
+	if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
+		mtspr(SPRN_PTCR, ptcr);
+
+	/*
+	 * Since the nest MMU cannot access secure memory, create
+	 * and manage our own copy of the partition table. This
+	 * copy contains entries for the non-secure and hypervisor
+	 * partitions.
+	 */
 	powernv_set_nmmu_ptcr(ptcr);
 }
 
-void mmu_partition_table_set_entry(unsigned int lpid, unsigned long dw0,
-				   unsigned long dw1)
+static void __mmu_partition_table_set_entry(unsigned int lpid,
+					    unsigned long dw0,
+					    unsigned long dw1)
 {
 	unsigned long old = be64_to_cpu(partition_tb[lpid].patb0);
 
@@ -238,6 +253,19 @@ void mmu_partition_table_set_entry(unsigned int lpid, unsigned long dw0,
 	/* do we need fixup here ?*/
 	asm volatile("eieio; tlbsync; ptesync" : : : "memory");
 }
+
+void mmu_partition_table_set_entry(unsigned int lpid, unsigned long dw0,
+				  unsigned long dw1)
+{
+	__mmu_partition_table_set_entry(lpid, dw0, dw1);
+
+	if (firmware_has_feature(FW_FEATURE_ULTRAVISOR)) {
+		uv_register_pate(lpid, dw0, dw1);
+		pr_info("PATE registered by ultravisor: dw0 = 0x%lx, dw1 = 0x%lx\n",
+			dw0, dw1);
+	}
+}
+
 EXPORT_SYMBOL_GPL(mmu_partition_table_set_entry);
 
 static pmd_t *get_pmd_from_cache(struct mm_struct *mm)
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index c9bcf428dd2b..0472cab84df8 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -655,8 +655,10 @@ void radix__early_init_mmu_secondary(void)
 		lpcr = mfspr(SPRN_LPCR);
 		mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR);
 
-		mtspr(SPRN_PTCR,
-		      __pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
+		if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
+			mtspr(SPRN_PTCR, __pa(partition_tb) |
+			      (PATB_SIZE_SHIFT - 12));
+
 		radix_init_amor();
 	}
 
@@ -672,7 +674,8 @@ void radix__mmu_cleanup_all(void)
 	if (!firmware_has_feature(FW_FEATURE_LPAR)) {
 		lpcr = mfspr(SPRN_LPCR);
 		mtspr(SPRN_LPCR, lpcr & ~LPCR_UPRT);
-		mtspr(SPRN_PTCR, 0);
+		if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
+			mtspr(SPRN_PTCR, 0);
 		powernv_set_nmmu_ptcr(0);
 		radix__flush_tlb_all();
 	}
-- 
2.20.1



* [RFC PATCH v2 06/10] KVM: PPC: Ultravisor: Restrict flush of the partition tlb cache
  2019-05-18 14:25 [RFC PATCH v2 00/10] kvmppc: Paravirtualize KVM to support ultravisor Claudio Carvalho
                   ` (4 preceding siblings ...)
  2019-05-18 14:25 ` [RFC PATCH v2 05/10] KVM: PPC: Ultravisor: Use UV_WRITE_PATE ucall to register a PATE Claudio Carvalho
@ 2019-05-18 14:25 ` Claudio Carvalho
  2019-05-18 14:25 ` [RFC PATCH v2 07/10] KVM: PPC: Ultravisor: Restrict LDBAR access Claudio Carvalho
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Claudio Carvalho @ 2019-05-18 14:25 UTC (permalink / raw)
  To: Paul Mackerras, Michael Ellerman, kvm-ppc, linuxppc-dev
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, Bharata B Rao,
	Sukadev Bhattiprolu, Thiago Jung Bauermann, Anshuman Khandual

From: Ram Pai <linuxram@us.ibm.com>

The ultravisor is responsible for flushing the TLB cache, since it
manages the PATE entries. Hence, skip the TLB flush if the ultravisor
firmware is available.

Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
---
 arch/powerpc/mm/book3s64/pgtable.c | 33 +++++++++++++++++-------------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 40a9fc8b139f..1eeb5fe87023 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -224,6 +224,23 @@ void __init mmu_partition_table_init(void)
 	powernv_set_nmmu_ptcr(ptcr);
 }
 
+static void flush_partition(unsigned int lpid, unsigned long dw0)
+{
+	if (dw0 & PATB_HR) {
+		asm volatile(PPC_TLBIE_5(%0, %1, 2, 0, 1) : :
+			     "r" (TLBIEL_INVAL_SET_LPID), "r" (lpid));
+		asm volatile(PPC_TLBIE_5(%0, %1, 2, 1, 1) : :
+			     "r" (TLBIEL_INVAL_SET_LPID), "r" (lpid));
+		trace_tlbie(lpid, 0, TLBIEL_INVAL_SET_LPID, lpid, 2, 0, 1);
+	} else {
+		asm volatile(PPC_TLBIE_5(%0, %1, 2, 0, 0) : :
+			     "r" (TLBIEL_INVAL_SET_LPID), "r" (lpid));
+		trace_tlbie(lpid, 0, TLBIEL_INVAL_SET_LPID, lpid, 2, 0, 0);
+	}
+	/* do we need fixup here ?*/
+	asm volatile("eieio; tlbsync; ptesync" : : : "memory");
+}
+
 static void __mmu_partition_table_set_entry(unsigned int lpid,
 					    unsigned long dw0,
 					    unsigned long dw1)
@@ -238,20 +255,8 @@ static void __mmu_partition_table_set_entry(unsigned int lpid,
 	 * The type of flush (hash or radix) depends on what the previous
 	 * use of this partition ID was, not the new use.
 	 */
-	asm volatile("ptesync" : : : "memory");
-	if (old & PATB_HR) {
-		asm volatile(PPC_TLBIE_5(%0,%1,2,0,1) : :
-			     "r" (TLBIEL_INVAL_SET_LPID), "r" (lpid));
-		asm volatile(PPC_TLBIE_5(%0,%1,2,1,1) : :
-			     "r" (TLBIEL_INVAL_SET_LPID), "r" (lpid));
-		trace_tlbie(lpid, 0, TLBIEL_INVAL_SET_LPID, lpid, 2, 0, 1);
-	} else {
-		asm volatile(PPC_TLBIE_5(%0,%1,2,0,0) : :
-			     "r" (TLBIEL_INVAL_SET_LPID), "r" (lpid));
-		trace_tlbie(lpid, 0, TLBIEL_INVAL_SET_LPID, lpid, 2, 0, 0);
-	}
-	/* do we need fixup here ?*/
-	asm volatile("eieio; tlbsync; ptesync" : : : "memory");
+	if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
+		flush_partition(lpid, old);
 }
 
 void mmu_partition_table_set_entry(unsigned int lpid, unsigned long dw0,
-- 
2.20.1



* [RFC PATCH v2 07/10] KVM: PPC: Ultravisor: Restrict LDBAR access
  2019-05-18 14:25 [RFC PATCH v2 00/10] kvmppc: Paravirtualize KVM to support ultravisor Claudio Carvalho
                   ` (5 preceding siblings ...)
  2019-05-18 14:25 ` [RFC PATCH v2 06/10] KVM: PPC: Ultravisor: Restrict flush of the partition tlb cache Claudio Carvalho
@ 2019-05-18 14:25 ` Claudio Carvalho
  2019-05-20  5:43   ` Paul Mackerras
  2019-05-21  5:24   ` Madhavan Srinivasan
  2019-05-18 14:25 ` [RFC PATCH v2 08/10] KVM: PPC: Ultravisor: Return to UV for hcalls from SVM Claudio Carvalho
                   ` (2 subsequent siblings)
  9 siblings, 2 replies; 16+ messages in thread
From: Claudio Carvalho @ 2019-05-18 14:25 UTC (permalink / raw)
  To: Paul Mackerras, Michael Ellerman, kvm-ppc, linuxppc-dev
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, Bharata B Rao,
	Sukadev Bhattiprolu, Thiago Jung Bauermann, Anshuman Khandual

From: Ram Pai <linuxram@us.ibm.com>

When the ultravisor firmware is available, it takes control over the
LDBAR register. In this case, thread-imc updates and save/restore
operations on the LDBAR register are handled by ultravisor.

Signed-off-by: Ram Pai <linuxram@us.ibm.com>
[Restrict LDBAR access in assembly code and some in C, update the commit
 message]
Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
---
 arch/powerpc/kvm/book3s_hv.c                 |  4 +-
 arch/powerpc/kvm/book3s_hv_rmhandlers.S      |  2 +
 arch/powerpc/perf/imc-pmu.c                  | 64 ++++++++++++--------
 arch/powerpc/platforms/powernv/idle.c        |  6 +-
 arch/powerpc/platforms/powernv/subcore-asm.S |  4 ++
 5 files changed, 52 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 0fab0a201027..81f35f955d16 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -75,6 +75,7 @@
 #include <asm/xics.h>
 #include <asm/xive.h>
 #include <asm/hw_breakpoint.h>
+#include <asm/firmware.h>
 
 #include "book3s.h"
 
@@ -3117,7 +3118,8 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
 			subcore_size = MAX_SMT_THREADS / split;
 			split_info.rpr = mfspr(SPRN_RPR);
 			split_info.pmmar = mfspr(SPRN_PMMAR);
-			split_info.ldbar = mfspr(SPRN_LDBAR);
+			if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
+				split_info.ldbar = mfspr(SPRN_LDBAR);
 			split_info.subcore_size = subcore_size;
 		} else {
 			split_info.subcore_size = 1;
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index dd014308f065..938cfa5dceed 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -375,8 +375,10 @@ BEGIN_FTR_SECTION
 	mtspr	SPRN_RPR, r0
 	ld	r0, KVM_SPLIT_PMMAR(r6)
 	mtspr	SPRN_PMMAR, r0
+BEGIN_FW_FTR_SECTION_NESTED(70)
 	ld	r0, KVM_SPLIT_LDBAR(r6)
 	mtspr	SPRN_LDBAR, r0
+END_FW_FTR_SECTION_NESTED(FW_FEATURE_ULTRAVISOR, 0, 70)
 	isync
 FTR_SECTION_ELSE
 	/* On P9 we use the split_info for coordinating LPCR changes */
diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
index 31fa753e2eb2..39c84de74da9 100644
--- a/arch/powerpc/perf/imc-pmu.c
+++ b/arch/powerpc/perf/imc-pmu.c
@@ -17,6 +17,7 @@
 #include <asm/cputhreads.h>
 #include <asm/smp.h>
 #include <linux/string.h>
+#include <asm/firmware.h>
 
 /* Nest IMC data structures and variables */
 
@@ -816,6 +817,17 @@ static int core_imc_event_init(struct perf_event *event)
 	return 0;
 }
 
+static void thread_imc_ldbar_disable(void *dummy)
+{
+	/*
+	 * By Zeroing LDBAR, we disable thread-imc updates. When the ultravisor
+	 * firmware is available, it is responsible for handling thread-imc
+	 * updates, though
+	 */
+	if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
+		mtspr(SPRN_LDBAR, 0);
+}
+
 /*
  * Allocates a page of memory for each of the online cpus, and load
  * LDBAR with 0.
@@ -856,7 +868,7 @@ static int thread_imc_mem_alloc(int cpu_id, int size)
 		per_cpu(thread_imc_mem, cpu_id) = local_mem;
 	}
 
-	mtspr(SPRN_LDBAR, 0);
+	thread_imc_ldbar_disable(NULL);
 	return 0;
 }
 
@@ -867,7 +879,7 @@ static int ppc_thread_imc_cpu_online(unsigned int cpu)
 
 static int ppc_thread_imc_cpu_offline(unsigned int cpu)
 {
-	mtspr(SPRN_LDBAR, 0);
+	thread_imc_ldbar_disable(NULL);
 	return 0;
 }
 
@@ -1010,7 +1022,6 @@ static int thread_imc_event_add(struct perf_event *event, int flags)
 {
 	int core_id;
 	struct imc_pmu_ref *ref;
-	u64 ldbar_value, *local_mem = per_cpu(thread_imc_mem, smp_processor_id());
 
 	if (flags & PERF_EF_START)
 		imc_event_start(event, flags);
@@ -1019,8 +1030,14 @@ static int thread_imc_event_add(struct perf_event *event, int flags)
 		return -EINVAL;
 
 	core_id = smp_processor_id() / threads_per_core;
-	ldbar_value = ((u64)local_mem & THREAD_IMC_LDBAR_MASK) | THREAD_IMC_ENABLE;
-	mtspr(SPRN_LDBAR, ldbar_value);
+	if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR)) {
+		u64 ldbar_value, *local_mem;
+
+		local_mem = per_cpu(thread_imc_mem, smp_processor_id());
+		ldbar_value = ((u64)local_mem & THREAD_IMC_LDBAR_MASK) |
+				THREAD_IMC_ENABLE;
+		mtspr(SPRN_LDBAR, ldbar_value);
+	}
 
 	/*
 	 * imc pmus are enabled only when it is used.
@@ -1053,7 +1070,7 @@ static void thread_imc_event_del(struct perf_event *event, int flags)
 	int core_id;
 	struct imc_pmu_ref *ref;
 
-	mtspr(SPRN_LDBAR, 0);
+	thread_imc_ldbar_disable(NULL);
 
 	core_id = smp_processor_id() / threads_per_core;
 	ref = &core_imc_refc[core_id];
@@ -1109,7 +1126,7 @@ static int trace_imc_mem_alloc(int cpu_id, int size)
 	trace_imc_refc[core_id].id = core_id;
 	mutex_init(&trace_imc_refc[core_id].lock);
 
-	mtspr(SPRN_LDBAR, 0);
+	thread_imc_ldbar_disable(NULL);
 	return 0;
 }
 
@@ -1120,7 +1137,7 @@ static int ppc_trace_imc_cpu_online(unsigned int cpu)
 
 static int ppc_trace_imc_cpu_offline(unsigned int cpu)
 {
-	mtspr(SPRN_LDBAR, 0);
+	thread_imc_ldbar_disable(NULL);
 	return 0;
 }
 
@@ -1207,11 +1224,6 @@ static int trace_imc_event_add(struct perf_event *event, int flags)
 {
 	int core_id = smp_processor_id() / threads_per_core;
 	struct imc_pmu_ref *ref = NULL;
-	u64 local_mem, ldbar_value;
-
-	/* Set trace-imc bit in ldbar and load ldbar with per-thread memory address */
-	local_mem = get_trace_imc_event_base_addr();
-	ldbar_value = ((u64)local_mem & THREAD_IMC_LDBAR_MASK) | TRACE_IMC_ENABLE;
 
 	if (core_imc_refc)
 		ref = &core_imc_refc[core_id];
@@ -1222,14 +1234,25 @@ static int trace_imc_event_add(struct perf_event *event, int flags)
 		if (!ref)
 			return -EINVAL;
 	}
-	mtspr(SPRN_LDBAR, ldbar_value);
+	if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR)) {
+		u64 local_mem, ldbar_value;
+
+		/*
+		 * Set trace-imc bit in ldbar and load ldbar with per-thread
+		 * memory address
+		 */
+		local_mem = get_trace_imc_event_base_addr();
+		ldbar_value = ((u64)local_mem & THREAD_IMC_LDBAR_MASK) |
+				TRACE_IMC_ENABLE;
+		mtspr(SPRN_LDBAR, ldbar_value);
+	}
 	mutex_lock(&ref->lock);
 	if (ref->refc == 0) {
 		if (opal_imc_counters_start(OPAL_IMC_COUNTERS_TRACE,
 				get_hard_smp_processor_id(smp_processor_id()))) {
 			mutex_unlock(&ref->lock);
 			pr_err("trace-imc: Unable to start the counters for core %d\n", core_id);
-			mtspr(SPRN_LDBAR, 0);
+			thread_imc_ldbar_disable(NULL);
 			return -EINVAL;
 		}
 	}
@@ -1270,7 +1293,7 @@ static void trace_imc_event_del(struct perf_event *event, int flags)
 		if (!ref)
 			return;
 	}
-	mtspr(SPRN_LDBAR, 0);
+	thread_imc_ldbar_disable(NULL);
 	mutex_lock(&ref->lock);
 	ref->refc--;
 	if (ref->refc == 0) {
@@ -1413,15 +1436,6 @@ static void cleanup_all_core_imc_memory(void)
 	kfree(core_imc_refc);
 }
 
-static void thread_imc_ldbar_disable(void *dummy)
-{
-	/*
-	 * By Zeroing LDBAR, we disable thread-imc
-	 * updates.
-	 */
-	mtspr(SPRN_LDBAR, 0);
-}
-
 void thread_imc_disable(void)
 {
 	on_each_cpu(thread_imc_ldbar_disable, NULL, 1);
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index c9133f7908ca..fd62435e3267 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -679,7 +679,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
 		sprs.ptcr	= mfspr(SPRN_PTCR);
 		sprs.rpr	= mfspr(SPRN_RPR);
 		sprs.tscr	= mfspr(SPRN_TSCR);
-		sprs.ldbar	= mfspr(SPRN_LDBAR);
+		if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
+			sprs.ldbar	= mfspr(SPRN_LDBAR);
 
 		sprs_saved = true;
 
@@ -762,7 +763,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
 	mtspr(SPRN_PTCR,	sprs.ptcr);
 	mtspr(SPRN_RPR,		sprs.rpr);
 	mtspr(SPRN_TSCR,	sprs.tscr);
-	mtspr(SPRN_LDBAR,	sprs.ldbar);
+	if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
+		mtspr(SPRN_LDBAR,	sprs.ldbar);
 
 	if (pls >= pnv_first_tb_loss_level) {
 		/* TB loss */
diff --git a/arch/powerpc/platforms/powernv/subcore-asm.S b/arch/powerpc/platforms/powernv/subcore-asm.S
index 39bb24aa8f34..e4383fa5e150 100644
--- a/arch/powerpc/platforms/powernv/subcore-asm.S
+++ b/arch/powerpc/platforms/powernv/subcore-asm.S
@@ -44,7 +44,9 @@ _GLOBAL(split_core_secondary_loop)
 
 real_mode:
 	/* Grab values from unsplit SPRs */
+BEGIN_FW_FTR_SECTION
 	mfspr	r6,  SPRN_LDBAR
+END_FW_FTR_SECTION_IFCLR(FW_FEATURE_ULTRAVISOR)
 	mfspr	r7,  SPRN_PMMAR
 	mfspr	r8,  SPRN_PMCR
 	mfspr	r9,  SPRN_RPR
@@ -77,7 +79,9 @@ real_mode:
 	mtspr	SPRN_HDEC, r4
 
 	/* Restore SPR values now we are split */
+BEGIN_FW_FTR_SECTION
 	mtspr	SPRN_LDBAR, r6
+END_FW_FTR_SECTION_IFCLR(FW_FEATURE_ULTRAVISOR)
 	mtspr	SPRN_PMMAR, r7
 	mtspr	SPRN_PMCR, r8
 	mtspr	SPRN_RPR, r9
-- 
2.20.1



* [RFC PATCH v2 08/10] KVM: PPC: Ultravisor: Return to UV for hcalls from SVM
  2019-05-18 14:25 [RFC PATCH v2 00/10] kvmppc: Paravirtualize KVM to support ultravisor Claudio Carvalho
                   ` (6 preceding siblings ...)
  2019-05-18 14:25 ` [RFC PATCH v2 07/10] KVM: PPC: Ultravisor: Restrict LDBAR access Claudio Carvalho
@ 2019-05-18 14:25 ` Claudio Carvalho
  2019-05-20  6:17   ` Paul Mackerras
  2019-05-18 14:25 ` [RFC PATCH v2 09/10] KVM: PPC: Book3S HV: Fixed for running secure guests Claudio Carvalho
  2019-05-18 14:25 ` [RFC PATCH v2 10/10] KVM: PPC: Ultravisor: Check for MSR_S during hv_reset_msr Claudio Carvalho
  9 siblings, 1 reply; 16+ messages in thread
From: Claudio Carvalho @ 2019-05-18 14:25 UTC (permalink / raw)
  To: Paul Mackerras, Michael Ellerman, kvm-ppc, linuxppc-dev
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, Bharata B Rao,
	Sukadev Bhattiprolu, Thiago Jung Bauermann, Anshuman Khandual

From: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>

All hcalls from a secure VM go to the ultravisor from where they are
reflected into the HV. When we (HV) complete processing such hcalls,
we should return to the UV rather than to the guest kernel.

Have fast_guest_return check the kvm_arch.secure_guest field so that
even a new CPU will enter UV when started (in response to an RTAS
start-cpu call).

Thanks to input from Paul Mackerras, Ram Pai and Mike Anderson.
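
As an illustrative C model only (choose_guest_entry() is hypothetical;
the actual change below is in book3s_hv_rmhandlers.S assembly), the
guest-entry decision becomes:

/* Hypothetical model of the fast_guest_return decision added below */
enum entry_path { ENTRY_HRFID_TO_GUEST, ENTRY_UV_RETURN_UCALL };

static enum entry_path choose_guest_entry(unsigned char secure_guest)
{
	/* a secure guest can only be entered through the ultravisor */
	return secure_guest ? ENTRY_UV_RETURN_UCALL : ENTRY_HRFID_TO_GUEST;
}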

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
[Fix UV_RETURN token number and arch.secure_guest check]
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
---
 arch/powerpc/include/asm/kvm_host.h       |  1 +
 arch/powerpc/include/asm/ultravisor-api.h |  1 +
 arch/powerpc/kernel/asm-offsets.c         |  1 +
 arch/powerpc/kvm/book3s_hv_rmhandlers.S   | 30 ++++++++++++++++++++---
 4 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index e6b5bb012ccb..ba7dd35cb916 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -290,6 +290,7 @@ struct kvm_arch {
 	cpumask_t cpu_in_guest;
 	u8 radix;
 	u8 fwnmi_enabled;
+	u8 secure_guest;
 	bool threads_indep;
 	bool nested_enable;
 	pgd_t *pgtable;
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 24bfb4c1737e..15e6ce77a131 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -19,5 +19,6 @@
 
 /* opcodes */
 #define UV_WRITE_PATE			0xF104
+#define UV_RETURN			0xF11C
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 8e02444e9d3d..44742724513e 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -508,6 +508,7 @@ int main(void)
 	OFFSET(KVM_VRMA_SLB_V, kvm, arch.vrma_slb_v);
 	OFFSET(KVM_RADIX, kvm, arch.radix);
 	OFFSET(KVM_FWNMI, kvm, arch.fwnmi_enabled);
+	OFFSET(KVM_SECURE_GUEST, kvm, arch.secure_guest);
 	OFFSET(VCPU_DSISR, kvm_vcpu, arch.shregs.dsisr);
 	OFFSET(VCPU_DAR, kvm_vcpu, arch.shregs.dar);
 	OFFSET(VCPU_VPA, kvm_vcpu, arch.vpa.pinned_addr);
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 938cfa5dceed..d89efa0783a2 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -36,6 +36,7 @@
 #include <asm/asm-compat.h>
 #include <asm/feature-fixups.h>
 #include <asm/cpuidle.h>
+#include <asm/ultravisor-api.h>
 
 /* Sign-extend HDEC if not on POWER9 */
 #define EXTEND_HDEC(reg)			\
@@ -1112,16 +1113,12 @@ BEGIN_FTR_SECTION
 END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 
 	ld	r5, VCPU_LR(r4)
-	ld	r6, VCPU_CR(r4)
 	mtlr	r5
-	mtcr	r6
 
 	ld	r1, VCPU_GPR(R1)(r4)
 	ld	r2, VCPU_GPR(R2)(r4)
 	ld	r3, VCPU_GPR(R3)(r4)
 	ld	r5, VCPU_GPR(R5)(r4)
-	ld	r6, VCPU_GPR(R6)(r4)
-	ld	r7, VCPU_GPR(R7)(r4)
 	ld	r8, VCPU_GPR(R8)(r4)
 	ld	r9, VCPU_GPR(R9)(r4)
 	ld	r10, VCPU_GPR(R10)(r4)
@@ -1139,10 +1136,35 @@ BEGIN_FTR_SECTION
 	mtspr	SPRN_HDSISR, r0
 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 
+	ld	r6, VCPU_KVM(r4)
+	lbz	r7, KVM_SECURE_GUEST(r6)
+	cmpdi	r7, 0
+	bne	ret_to_ultra
+
+	lwz	r6, VCPU_CR(r4)
+	mtcr	r6
+
+	ld	r7, VCPU_GPR(R7)(r4)
+	ld	r6, VCPU_GPR(R6)(r4)
 	ld	r0, VCPU_GPR(R0)(r4)
 	ld	r4, VCPU_GPR(R4)(r4)
 	HRFI_TO_GUEST
 	b	.
+/*
+ * The hcall we just completed was from Ultravisor. Use UV_RETURN
+ * ultra call to return to the Ultravisor. Results from the hcall
+ * are already in the appropriate registers (r3:12), except for
+ * R6,7 which we used as temporary registers above. Restore them,
+ * and set R0 to the ucall number (UV_RETURN).
+ */
+ret_to_ultra:
+	lwz	r6, VCPU_CR(r4)
+	mtcr	r6
+	LOAD_REG_IMMEDIATE(r0, UV_RETURN)
+	ld	r7, VCPU_GPR(R7)(r4)
+	ld	r6, VCPU_GPR(R6)(r4)
+	ld	r4, VCPU_GPR(R4)(r4)
+	sc	2
 
 /*
  * Enter the guest on a P9 or later system where we have exactly
-- 
2.20.1



* [RFC PATCH v2 09/10] KVM: PPC: Book3S HV: Fixed for running secure guests
  2019-05-18 14:25 [RFC PATCH v2 00/10] kvmppc: Paravirtualize KVM to support ultravisor Claudio Carvalho
                   ` (7 preceding siblings ...)
  2019-05-18 14:25 ` [RFC PATCH v2 08/10] KVM: PPC: Ultravisor: Return to UV for hcalls from SVM Claudio Carvalho
@ 2019-05-18 14:25 ` Claudio Carvalho
  2019-05-20  6:40   ` Paul Mackerras
  2019-05-18 14:25 ` [RFC PATCH v2 10/10] KVM: PPC: Ultravisor: Check for MSR_S during hv_reset_msr Claudio Carvalho
  9 siblings, 1 reply; 16+ messages in thread
From: Claudio Carvalho @ 2019-05-18 14:25 UTC (permalink / raw)
  To: Paul Mackerras, Michael Ellerman, kvm-ppc, linuxppc-dev
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, Bharata B Rao,
	Sukadev Bhattiprolu, Thiago Jung Bauermann, Anshuman Khandual

From: Paul Mackerras <paulus@ozlabs.org>

- Pass SRR1 in r11 for UV_RETURN because SRR0 and SRR1 get set by
  the sc 2 instruction. (Note r3 - r10 potentially have hcall return
  values in them.)

- Fix kvmppc_msr_interrupt to preserve the MSR_S bit.
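
As a hypothetical C model of the fixed kvmppc_msr_interrupt logic
(model_msr_interrupt() is not kernel code; it only illustrates the
assembly change below):

#include <asm/reg.h>

static unsigned long model_msr_interrupt(unsigned long old_msr,
					 unsigned long intr_msr)
{
	unsigned long ts = old_msr & MSR_TS_MASK;
	unsigned long new_msr = intr_msr & ~MSR_TS_MASK;

	/* if the vcpu was transactional, deliver the interrupt suspended */
	if (ts == MSR_TS_T)
		new_msr |= MSR_TS_S;
	else
		new_msr |= ts;

	/* the fix: carry the secure-state bit into the interrupt MSR */
	new_msr |= old_msr & MSR_S;

	return new_msr;
}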

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
---
 arch/powerpc/kvm/book3s_hv_rmhandlers.S | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index d89efa0783a2..1b44c85956b9 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1160,6 +1160,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 ret_to_ultra:
 	lwz	r6, VCPU_CR(r4)
 	mtcr	r6
+	mfspr	r11, SPRN_SRR1
 	LOAD_REG_IMMEDIATE(r0, UV_RETURN)
 	ld	r7, VCPU_GPR(R7)(r4)
 	ld	r6, VCPU_GPR(R6)(r4)
@@ -3360,13 +3361,16 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX)
  *   r0 is used as a scratch register
  */
 kvmppc_msr_interrupt:
+	andis.	r0, r11, MSR_S@h
 	rldicl	r0, r11, 64 - MSR_TS_S_LG, 62
-	cmpwi	r0, 2 /* Check if we are in transactional state..  */
+	cmpwi	cr1, r0, 2 /* Check if we are in transactional state..  */
 	ld	r11, VCPU_INTR_MSR(r9)
-	bne	1f
+	bne	cr1, 1f
 	/* ... if transactional, change to suspended */
 	li	r0, 1
 1:	rldimi	r11, r0, MSR_TS_S_LG, 63 - MSR_TS_T_LG
+	beqlr
+	oris	r11, r11, MSR_S@h		/* preserve MSR_S bit setting */
 	blr
 
 /*
-- 
2.20.1



* [RFC PATCH v2 10/10] KVM: PPC: Ultravisor: Check for MSR_S during hv_reset_msr
  2019-05-18 14:25 [RFC PATCH v2 00/10] kvmppc: Paravirtualize KVM to support ultravisor Claudio Carvalho
                   ` (8 preceding siblings ...)
  2019-05-18 14:25 ` [RFC PATCH v2 09/10] KVM: PPC: Book3S HV: Fixed for running secure guests Claudio Carvalho
@ 2019-05-18 14:25 ` Claudio Carvalho
  9 siblings, 0 replies; 16+ messages in thread
From: Claudio Carvalho @ 2019-05-18 14:25 UTC (permalink / raw)
  To: Paul Mackerras, Michael Ellerman, kvm-ppc, linuxppc-dev
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, Bharata B Rao,
	Sukadev Bhattiprolu, Thiago Jung Bauermann, Anshuman Khandual

From: Michael Anderson <andmike@linux.ibm.com>

 - Check for MSR_S so that kvmppc_set_msr will include it. Prior to this
   change, the return to the guest would not have the S bit set.

 - Patch based on comment from Paul Mackerras <pmac@au1.ibm.com>

Signed-off-by: Michael Anderson <andmike@linux.ibm.com>
Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
---
 arch/powerpc/kvm/book3s_64_mmu_hv.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index be7bc070eae5..dcc1c1fb5f9c 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -295,6 +295,7 @@ static void kvmppc_mmu_book3s_64_hv_reset_msr(struct kvm_vcpu *vcpu)
 		msr |= MSR_TS_S;
 	else
 		msr |= vcpu->arch.shregs.msr & MSR_TS_MASK;
+	msr |= vcpu->arch.shregs.msr & MSR_S;
 	kvmppc_set_msr(vcpu, msr);
 }
 
-- 
2.20.1



* Re: [RFC PATCH v2 07/10] KVM: PPC: Ultravisor: Restrict LDBAR access
  2019-05-18 14:25 ` [RFC PATCH v2 07/10] KVM: PPC: Ultravisor: Restrict LDBAR access Claudio Carvalho
@ 2019-05-20  5:43   ` Paul Mackerras
  2019-05-21  5:24   ` Madhavan Srinivasan
  1 sibling, 0 replies; 16+ messages in thread
From: Paul Mackerras @ 2019-05-20  5:43 UTC (permalink / raw)
  To: Claudio Carvalho
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, kvm-ppc,
	Bharata B Rao, linuxppc-dev, Sukadev Bhattiprolu,
	Thiago Jung Bauermann, Anshuman Khandual

On Sat, May 18, 2019 at 11:25:21AM -0300, Claudio Carvalho wrote:
> From: Ram Pai <linuxram@us.ibm.com>
> 
> When the ultravisor firmware is available, it takes control over the
> LDBAR register. In this case, thread-imc updates and save/restore
> operations on the LDBAR register are handled by ultravisor.
> 
> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> [Restrict LDBAR access in assembly code and some in C, update the commit
>  message]
> Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>

Some of the places that you are patching below are explicitly only
executed on POWER8, which can't have an ultravisor, and therefore
the change isn't needed:

> ---
>  arch/powerpc/kvm/book3s_hv.c                 |  4 +-
>  arch/powerpc/kvm/book3s_hv_rmhandlers.S      |  2 +
>  arch/powerpc/perf/imc-pmu.c                  | 64 ++++++++++++--------
>  arch/powerpc/platforms/powernv/idle.c        |  6 +-
>  arch/powerpc/platforms/powernv/subcore-asm.S |  4 ++
>  5 files changed, 52 insertions(+), 28 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 0fab0a201027..81f35f955d16 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -75,6 +75,7 @@
>  #include <asm/xics.h>
>  #include <asm/xive.h>
>  #include <asm/hw_breakpoint.h>
> +#include <asm/firmware.h>
>  
>  #include "book3s.h"
>  
> @@ -3117,7 +3118,8 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
>  			subcore_size = MAX_SMT_THREADS / split;
>  			split_info.rpr = mfspr(SPRN_RPR);
>  			split_info.pmmar = mfspr(SPRN_PMMAR);
> -			split_info.ldbar = mfspr(SPRN_LDBAR);
> +			if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
> +				split_info.ldbar = mfspr(SPRN_LDBAR);

This is inside an if (is_power8) statement.

> diff --git a/arch/powerpc/platforms/powernv/subcore-asm.S b/arch/powerpc/platforms/powernv/subcore-asm.S
> index 39bb24aa8f34..e4383fa5e150 100644
> --- a/arch/powerpc/platforms/powernv/subcore-asm.S
> +++ b/arch/powerpc/platforms/powernv/subcore-asm.S
> @@ -44,7 +44,9 @@ _GLOBAL(split_core_secondary_loop)
>  
>  real_mode:
>  	/* Grab values from unsplit SPRs */
> +BEGIN_FW_FTR_SECTION
>  	mfspr	r6,  SPRN_LDBAR
> +END_FW_FTR_SECTION_IFCLR(FW_FEATURE_ULTRAVISOR)
>  	mfspr	r7,  SPRN_PMMAR
>  	mfspr	r8,  SPRN_PMCR
>  	mfspr	r9,  SPRN_RPR
> @@ -77,7 +79,9 @@ real_mode:
>  	mtspr	SPRN_HDEC, r4
>  
>  	/* Restore SPR values now we are split */
> +BEGIN_FW_FTR_SECTION
>  	mtspr	SPRN_LDBAR, r6
> +END_FW_FTR_SECTION_IFCLR(FW_FEATURE_ULTRAVISOR)

Only POWER8 supports split-core mode, so we can only get here on
POWER8.

Paul.


* Re: [RFC PATCH v2 08/10] KVM: PPC: Ultravisor: Return to UV for hcalls from SVM
  2019-05-18 14:25 ` [RFC PATCH v2 08/10] KVM: PPC: Ultravisor: Return to UV for hcalls from SVM Claudio Carvalho
@ 2019-05-20  6:17   ` Paul Mackerras
  0 siblings, 0 replies; 16+ messages in thread
From: Paul Mackerras @ 2019-05-20  6:17 UTC (permalink / raw)
  To: Claudio Carvalho
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, kvm-ppc,
	Bharata B Rao, linuxppc-dev, Sukadev Bhattiprolu,
	Thiago Jung Bauermann, Anshuman Khandual

On Sat, May 18, 2019 at 11:25:22AM -0300, Claudio Carvalho wrote:
> From: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
> 
> All hcalls from a secure VM go to the ultravisor from where they are
> reflected into the HV. When we (HV) complete processing such hcalls,
> we should return to the UV rather than to the guest kernel.

This paragraph in the patch description, and the comment in
book3s_hv_rmhandlers.S, are confusing and possibly misleading in
focussing on returns from hcalls, when the change is needed for any
sort of entry to the guest from the hypervisor, whether it is a return
from an hcall, a return from a hypervisor interrupt, or the first time
that a guest vCPU is run.

This paragraph needs to explain that to enter a secure guest, we have
to go through the ultravisor, therefore we do a ucall when we are
entering a secure guest.

[snip]

> +/*
> + * The hcall we just completed was from Ultravisor. Use UV_RETURN
> + * ultra call to return to the Ultravisor. Results from the hcall
> + * are already in the appropriate registers (r3:12), except for
> + * R6,7 which we used as temporary registers above. Restore them,
> + * and set R0 to the ucall number (UV_RETURN).
> + */

This needs to say something like "We are entering a secure guest, so
we have to invoke the ultravisor to do that.  If we are returning from
a hcall, the results are already ...".

> +ret_to_ultra:
> +	lwz	r6, VCPU_CR(r4)
> +	mtcr	r6
> +	LOAD_REG_IMMEDIATE(r0, UV_RETURN)
> +	ld	r7, VCPU_GPR(R7)(r4)
> +	ld	r6, VCPU_GPR(R6)(r4)
> +	ld	r4, VCPU_GPR(R4)(r4)
> +	sc	2
>  
>  /*
>   * Enter the guest on a P9 or later system where we have exactly
> -- 
> 2.20.1

Paul.


* Re: [RFC PATCH v2 09/10] KVM: PPC: Book3S HV: Fixed for running secure guests
  2019-05-18 14:25 ` [RFC PATCH v2 09/10] KVM: PPC: Book3S HV: Fixed for running secure guests Claudio Carvalho
@ 2019-05-20  6:40   ` Paul Mackerras
  0 siblings, 0 replies; 16+ messages in thread
From: Paul Mackerras @ 2019-05-20  6:40 UTC (permalink / raw)
  To: Claudio Carvalho
  Cc: Madhavan Srinivasan, Michael Anderson, Ram Pai, kvm-ppc,
	Bharata B Rao, linuxppc-dev, Sukadev Bhattiprolu,
	Thiago Jung Bauermann, Anshuman Khandual

On Sat, May 18, 2019 at 11:25:23AM -0300, Claudio Carvalho wrote:
> From: Paul Mackerras <paulus@ozlabs.org>
> 
> - Pass SRR1 in r11 for UV_RETURN because SRR0 and SRR1 get set by
>   the sc 2 instruction. (Note r3 - r10 potentially have hcall return
>   values in them.)
> 
> - Fix kvmppc_msr_interrupt to preserve the MSR_S bit.
> 
> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
> Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>

This should be folded into the previous patch.

Paul.


* Re: [RFC PATCH v2 07/10] KVM: PPC: Ultravisor: Restrict LDBAR access
  2019-05-18 14:25 ` [RFC PATCH v2 07/10] KVM: PPC: Ultravisor: Restrict LDBAR access Claudio Carvalho
  2019-05-20  5:43   ` Paul Mackerras
@ 2019-05-21  5:24   ` Madhavan Srinivasan
  2019-05-30 22:51     ` Claudio Carvalho
  1 sibling, 1 reply; 16+ messages in thread
From: Madhavan Srinivasan @ 2019-05-21  5:24 UTC (permalink / raw)
  To: linuxppc-dev


On 18/05/19 7:55 PM, Claudio Carvalho wrote:
> From: Ram Pai <linuxram@us.ibm.com>
>
> When the ultravisor firmware is available, it takes control over the
> LDBAR register. In this case, thread-imc updates and save/restore
> operations on the LDBAR register are handled by ultravisor.
We should remove the core and thread imc nodes in skiboot if the ultravisor
is enabled. Then we don't need the imc-pmu.c changes, since imc-pmu.c only
initializes when the corresponding nodes are present. I will post a skiboot
patch.
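
Something along these lines, perhaps. This is only a rough sketch to
illustrate the idea: uv_present() and is_core_or_thread_imc_node() are
hypothetical placeholders for however skiboot ends up detecting the
ultravisor and classifying the IMC nodes, and the compatible string is
the one the Linux opal-imc driver matches. The actual patch may look
quite different.

#include <skiboot.h>
#include <device.h>

/* Hypothetical helpers, not existing skiboot functions. */
static bool uv_present(void);
static bool is_core_or_thread_imc_node(const struct dt_node *node);

/*
 * Sketch: prune the core/thread IMC device tree nodes when an ultravisor
 * is present, so the kernel's imc-pmu code never registers those PMUs.
 */
static void imc_prune_nodes_for_uv(void)
{
	struct dt_node *imc, *node;

	if (!uv_present())
		return;

	imc = dt_find_compatible_node(dt_root, NULL,
				      "ibm,opal-in-memory-counters");
	if (!imc)
		return;

restart:
	dt_for_each_child(imc, node) {
		if (is_core_or_thread_imc_node(node)) {
			/* dt_free() detaches the node, so restart the walk */
			dt_free(node);
			goto restart;
		}
	}
}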

Maddy

> Signed-off-by: Ram Pai <linuxram@us.ibm.com>
> [Restrict LDBAR access in assembly code and some in C, update the commit
>   message]
> Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
> ---
>   arch/powerpc/kvm/book3s_hv.c                 |  4 +-
>   arch/powerpc/kvm/book3s_hv_rmhandlers.S      |  2 +
>   arch/powerpc/perf/imc-pmu.c                  | 64 ++++++++++++--------
>   arch/powerpc/platforms/powernv/idle.c        |  6 +-
>   arch/powerpc/platforms/powernv/subcore-asm.S |  4 ++
>   5 files changed, 52 insertions(+), 28 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 0fab0a201027..81f35f955d16 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -75,6 +75,7 @@
>   #include <asm/xics.h>
>   #include <asm/xive.h>
>   #include <asm/hw_breakpoint.h>
> +#include <asm/firmware.h>
>
>   #include "book3s.h"
>
> @@ -3117,7 +3118,8 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
>   			subcore_size = MAX_SMT_THREADS / split;
>   			split_info.rpr = mfspr(SPRN_RPR);
>   			split_info.pmmar = mfspr(SPRN_PMMAR);
> -			split_info.ldbar = mfspr(SPRN_LDBAR);
> +			if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
> +				split_info.ldbar = mfspr(SPRN_LDBAR);
>   			split_info.subcore_size = subcore_size;
>   		} else {
>   			split_info.subcore_size = 1;
> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> index dd014308f065..938cfa5dceed 100644
> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> @@ -375,8 +375,10 @@ BEGIN_FTR_SECTION
>   	mtspr	SPRN_RPR, r0
>   	ld	r0, KVM_SPLIT_PMMAR(r6)
>   	mtspr	SPRN_PMMAR, r0
> +BEGIN_FW_FTR_SECTION_NESTED(70)
>   	ld	r0, KVM_SPLIT_LDBAR(r6)
>   	mtspr	SPRN_LDBAR, r0
> +END_FW_FTR_SECTION_NESTED(FW_FEATURE_ULTRAVISOR, 0, 70)
>   	isync
>   FTR_SECTION_ELSE
>   	/* On P9 we use the split_info for coordinating LPCR changes */
> diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
> index 31fa753e2eb2..39c84de74da9 100644
> --- a/arch/powerpc/perf/imc-pmu.c
> +++ b/arch/powerpc/perf/imc-pmu.c
> @@ -17,6 +17,7 @@
>   #include <asm/cputhreads.h>
>   #include <asm/smp.h>
>   #include <linux/string.h>
> +#include <asm/firmware.h>
>
>   /* Nest IMC data structures and variables */
>
> @@ -816,6 +817,17 @@ static int core_imc_event_init(struct perf_event *event)
>   	return 0;
>   }
>
> +static void thread_imc_ldbar_disable(void *dummy)
> +{
> +	/*
> +	 * By Zeroing LDBAR, we disable thread-imc updates. When the ultravisor
> +	 * firmware is available, it is responsible for handling thread-imc
> +	 * updates, though
> +	 */
> +	if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
> +		mtspr(SPRN_LDBAR, 0);
> +}
> +
>   /*
>    * Allocates a page of memory for each of the online cpus, and load
>    * LDBAR with 0.
> @@ -856,7 +868,7 @@ static int thread_imc_mem_alloc(int cpu_id, int size)
>   		per_cpu(thread_imc_mem, cpu_id) = local_mem;
>   	}
>
> -	mtspr(SPRN_LDBAR, 0);
> +	thread_imc_ldbar_disable(NULL);
>   	return 0;
>   }
>
> @@ -867,7 +879,7 @@ static int ppc_thread_imc_cpu_online(unsigned int cpu)
>
>   static int ppc_thread_imc_cpu_offline(unsigned int cpu)
>   {
> -	mtspr(SPRN_LDBAR, 0);
> +	thread_imc_ldbar_disable(NULL);
>   	return 0;
>   }
>
> @@ -1010,7 +1022,6 @@ static int thread_imc_event_add(struct perf_event *event, int flags)
>   {
>   	int core_id;
>   	struct imc_pmu_ref *ref;
> -	u64 ldbar_value, *local_mem = per_cpu(thread_imc_mem, smp_processor_id());
>
>   	if (flags & PERF_EF_START)
>   		imc_event_start(event, flags);
> @@ -1019,8 +1030,14 @@ static int thread_imc_event_add(struct perf_event *event, int flags)
>   		return -EINVAL;
>
>   	core_id = smp_processor_id() / threads_per_core;
> -	ldbar_value = ((u64)local_mem & THREAD_IMC_LDBAR_MASK) | THREAD_IMC_ENABLE;
> -	mtspr(SPRN_LDBAR, ldbar_value);
> +	if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR)) {
> +		u64 ldbar_value, *local_mem;
> +
> +		local_mem = per_cpu(thread_imc_mem, smp_processor_id());
> +		ldbar_value = ((u64)local_mem & THREAD_IMC_LDBAR_MASK) |
> +				THREAD_IMC_ENABLE;
> +		mtspr(SPRN_LDBAR, ldbar_value);
> +	}
>
>   	/*
>   	 * imc pmus are enabled only when it is used.
> @@ -1053,7 +1070,7 @@ static void thread_imc_event_del(struct perf_event *event, int flags)
>   	int core_id;
>   	struct imc_pmu_ref *ref;
>
> -	mtspr(SPRN_LDBAR, 0);
> +	thread_imc_ldbar_disable(NULL);
>
>   	core_id = smp_processor_id() / threads_per_core;
>   	ref = &core_imc_refc[core_id];
> @@ -1109,7 +1126,7 @@ static int trace_imc_mem_alloc(int cpu_id, int size)
>   	trace_imc_refc[core_id].id = core_id;
>   	mutex_init(&trace_imc_refc[core_id].lock);
>
> -	mtspr(SPRN_LDBAR, 0);
> +	thread_imc_ldbar_disable(NULL);
>   	return 0;
>   }
>
> @@ -1120,7 +1137,7 @@ static int ppc_trace_imc_cpu_online(unsigned int cpu)
>
>   static int ppc_trace_imc_cpu_offline(unsigned int cpu)
>   {
> -	mtspr(SPRN_LDBAR, 0);
> +	thread_imc_ldbar_disable(NULL);
>   	return 0;
>   }
>
> @@ -1207,11 +1224,6 @@ static int trace_imc_event_add(struct perf_event *event, int flags)
>   {
>   	int core_id = smp_processor_id() / threads_per_core;
>   	struct imc_pmu_ref *ref = NULL;
> -	u64 local_mem, ldbar_value;
> -
> -	/* Set trace-imc bit in ldbar and load ldbar with per-thread memory address */
> -	local_mem = get_trace_imc_event_base_addr();
> -	ldbar_value = ((u64)local_mem & THREAD_IMC_LDBAR_MASK) | TRACE_IMC_ENABLE;
>
>   	if (core_imc_refc)
>   		ref = &core_imc_refc[core_id];
> @@ -1222,14 +1234,25 @@ static int trace_imc_event_add(struct perf_event *event, int flags)
>   		if (!ref)
>   			return -EINVAL;
>   	}
> -	mtspr(SPRN_LDBAR, ldbar_value);
> +	if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR)) {
> +		u64 local_mem, ldbar_value;
> +
> +		/*
> +		 * Set trace-imc bit in ldbar and load ldbar with per-thread
> +		 * memory address
> +		 */
> +		local_mem = get_trace_imc_event_base_addr();
> +		ldbar_value = ((u64)local_mem & THREAD_IMC_LDBAR_MASK) |
> +				TRACE_IMC_ENABLE;
> +		mtspr(SPRN_LDBAR, ldbar_value);
> +	}
>   	mutex_lock(&ref->lock);
>   	if (ref->refc == 0) {
>   		if (opal_imc_counters_start(OPAL_IMC_COUNTERS_TRACE,
>   				get_hard_smp_processor_id(smp_processor_id()))) {
>   			mutex_unlock(&ref->lock);
>   			pr_err("trace-imc: Unable to start the counters for core %d\n", core_id);
> -			mtspr(SPRN_LDBAR, 0);
> +			thread_imc_ldbar_disable(NULL);
>   			return -EINVAL;
>   		}
>   	}
> @@ -1270,7 +1293,7 @@ static void trace_imc_event_del(struct perf_event *event, int flags)
>   		if (!ref)
>   			return;
>   	}
> -	mtspr(SPRN_LDBAR, 0);
> +	thread_imc_ldbar_disable(NULL);
>   	mutex_lock(&ref->lock);
>   	ref->refc--;
>   	if (ref->refc == 0) {
> @@ -1413,15 +1436,6 @@ static void cleanup_all_core_imc_memory(void)
>   	kfree(core_imc_refc);
>   }
>
> -static void thread_imc_ldbar_disable(void *dummy)
> -{
> -	/*
> -	 * By Zeroing LDBAR, we disable thread-imc
> -	 * updates.
> -	 */
> -	mtspr(SPRN_LDBAR, 0);
> -}
> -
>   void thread_imc_disable(void)
>   {
>   	on_each_cpu(thread_imc_ldbar_disable, NULL, 1);
> diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
> index c9133f7908ca..fd62435e3267 100644
> --- a/arch/powerpc/platforms/powernv/idle.c
> +++ b/arch/powerpc/platforms/powernv/idle.c
> @@ -679,7 +679,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
>   		sprs.ptcr	= mfspr(SPRN_PTCR);
>   		sprs.rpr	= mfspr(SPRN_RPR);
>   		sprs.tscr	= mfspr(SPRN_TSCR);
> -		sprs.ldbar	= mfspr(SPRN_LDBAR);
> +		if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
> +			sprs.ldbar	= mfspr(SPRN_LDBAR);
>
>   		sprs_saved = true;
>
> @@ -762,7 +763,8 @@ static unsigned long power9_idle_stop(unsigned long psscr, bool mmu_on)
>   	mtspr(SPRN_PTCR,	sprs.ptcr);
>   	mtspr(SPRN_RPR,		sprs.rpr);
>   	mtspr(SPRN_TSCR,	sprs.tscr);
> -	mtspr(SPRN_LDBAR,	sprs.ldbar);
> +	if (!firmware_has_feature(FW_FEATURE_ULTRAVISOR))
> +		mtspr(SPRN_LDBAR,	sprs.ldbar);
>
>   	if (pls >= pnv_first_tb_loss_level) {
>   		/* TB loss */
> diff --git a/arch/powerpc/platforms/powernv/subcore-asm.S b/arch/powerpc/platforms/powernv/subcore-asm.S
> index 39bb24aa8f34..e4383fa5e150 100644
> --- a/arch/powerpc/platforms/powernv/subcore-asm.S
> +++ b/arch/powerpc/platforms/powernv/subcore-asm.S
> @@ -44,7 +44,9 @@ _GLOBAL(split_core_secondary_loop)
>
>   real_mode:
>   	/* Grab values from unsplit SPRs */
> +BEGIN_FW_FTR_SECTION
>   	mfspr	r6,  SPRN_LDBAR
> +END_FW_FTR_SECTION_IFCLR(FW_FEATURE_ULTRAVISOR)
>   	mfspr	r7,  SPRN_PMMAR
>   	mfspr	r8,  SPRN_PMCR
>   	mfspr	r9,  SPRN_RPR
> @@ -77,7 +79,9 @@ real_mode:
>   	mtspr	SPRN_HDEC, r4
>
>   	/* Restore SPR values now we are split */
> +BEGIN_FW_FTR_SECTION
>   	mtspr	SPRN_LDBAR, r6
> +END_FW_FTR_SECTION_IFCLR(FW_FEATURE_ULTRAVISOR)
>   	mtspr	SPRN_PMMAR, r7
>   	mtspr	SPRN_PMCR, r8
>   	mtspr	SPRN_RPR, r9


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH v2 07/10] KVM: PPC: Ultravisor: Restrict LDBAR access
  2019-05-21  5:24   ` Madhavan Srinivasan
@ 2019-05-30 22:51     ` Claudio Carvalho
  0 siblings, 0 replies; 16+ messages in thread
From: Claudio Carvalho @ 2019-05-30 22:51 UTC (permalink / raw)
  To: Madhavan Srinivasan, linuxppc-dev


On 5/21/19 2:24 AM, Madhavan Srinivasan wrote:
>
> On 18/05/19 7:55 PM, Claudio Carvalho wrote:
>> From: Ram Pai <linuxram@us.ibm.com>
>>
>> When the ultravisor firmware is available, it takes control over the
>> LDBAR register. In this case, thread-imc updates and save/restore
>> operations on the LDBAR register are handled by ultravisor.
> We should remove the core and thread imc nodes in skiboot if the ultravisor
> is enabled. Then we don't need the imc-pmu.c changes, since imc-pmu.c only
> initializes when the corresponding nodes are present. I will post a skiboot
> patch.

Hi Maddy,

Thanks for the feedback and for taking care of the change needed in skiboot.

Right, if the core and thread imc devtree nodes are not created by skiboot,
then we don't need the imc-pmu changes. As a sanity check, should we have
something like the following to make sure the imc-pmu code is not executed
when the ultravisor is enabled but, for some reason, skiboot did not remove
those devtree nodes (e.g. your skiboot patch was not applied)?

-------------------------

--- a/arch/powerpc/platforms/powernv/opal-imc.c
+++ b/arch/powerpc/platforms/powernv/opal-imc.c
@@ -258,6 +258,15 @@ static int opal_imc_counters_probe(struct platform_device *pdev)
        bool core_imc_reg = false, thread_imc_reg = false;
        u32 type;
 
+       /*
+        * When ultravisor is enabled, it is responsible for thread-imc
+        * updates
+        */
+       if (firmware_has_feature(FW_FEATURE_ULTRAVISOR)) {
+               pr_info("IMC Ultravisor enabled\n");
+               return -EACCES;
+       }
+
        /*
         * Check whether this is kdump kernel. If yes, force the engines to
         * stop and return.

----------------------------

Claudio


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2019-05-30 22:54 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-18 14:25 [RFC PATCH v2 00/10] kvmppc: Paravirtualize KVM to support ultravisor Claudio Carvalho
2019-05-18 14:25 ` [RFC PATCH v2 01/10] KVM: PPC: Ultravisor: Add PPC_UV config option Claudio Carvalho
2019-05-18 14:25 ` [RFC PATCH v2 02/10] KVM: PPC: Ultravisor: Introduce the MSR_S bit Claudio Carvalho
2019-05-18 14:25 ` [RFC PATCH v2 03/10] powerpc: Introduce FW_FEATURE_ULTRAVISOR Claudio Carvalho
2019-05-18 14:25 ` [RFC PATCH v2 04/10] KVM: PPC: Ultravisor: Add generic ultravisor call handler Claudio Carvalho
2019-05-18 14:25 ` [RFC PATCH v2 05/10] KVM: PPC: Ultravisor: Use UV_WRITE_PATE ucall to register a PATE Claudio Carvalho
2019-05-18 14:25 ` [RFC PATCH v2 06/10] KVM: PPC: Ultravisor: Restrict flush of the partition tlb cache Claudio Carvalho
2019-05-18 14:25 ` [RFC PATCH v2 07/10] KVM: PPC: Ultravisor: Restrict LDBAR access Claudio Carvalho
2019-05-20  5:43   ` Paul Mackerras
2019-05-21  5:24   ` Madhavan Srinivasan
2019-05-30 22:51     ` Claudio Carvalho
2019-05-18 14:25 ` [RFC PATCH v2 08/10] KVM: PPC: Ultravisor: Return to UV for hcalls from SVM Claudio Carvalho
2019-05-20  6:17   ` Paul Mackerras
2019-05-18 14:25 ` [RFC PATCH v2 09/10] KVM: PPC: Book3S HV: Fixed for running secure guests Claudio Carvalho
2019-05-20  6:40   ` Paul Mackerras
2019-05-18 14:25 ` [RFC PATCH v2 10/10] KVM: PPC: Ultravisor: Check for MSR_S during hv_reset_msr Claudio Carvalho
