* [PATCH v2 0/3] arch/x86: Enable MPK feature on AMD
@ 2020-05-08 21:09 Babu Moger
  2020-05-08 21:09 ` [PATCH v2 1/3] arch/x86: Rename config X86_INTEL_MEMORY_PROTECTION_KEYS to generic x86 Babu Moger
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Babu Moger @ 2020-05-08 21:09 UTC (permalink / raw)
  To: corbet, tglx, mingo, bp, hpa, pbonzini, sean.j.christopherson
  Cc: x86, vkuznets, wanpengli, jmattson, joro, dave.hansen, luto,
	peterz, mchehab+samsung, babu.moger, changbin.du, namit, bigeasy,
	yang.shi, asteinhauser, anshuman.khandual, jan.kiszka, akpm,
	steven.price, rppt, peterx, dan.j.williams, arjunroy, logang,
	thellstrom, aarcange, justin.he, robin.murphy, ira.weiny,
	keescook, jgross, andrew.cooper3, pawan.kumar.gupta, fenghua.yu,
	vineela.tummalapalli, yamada.masahiro, sam, acme, linux-doc,
	linux-kernel, kvm

AMD's next generation of EPYC processors support the MPK (Memory
Protection Keys) feature.

AMD documentation for the MPK feature is available in "AMD64 Architecture
Programmer’s Manual Volume 2: System Programming, Pub. 24593 Rev. 3.34,
Section 5.6.6 Memory Protection Keys (MPK) Bit".

The documentation can be obtained at the link below:
https://bugzilla.kernel.org/show_bug.cgi?id=206537

This series enables the feature on AMD and updates the config options
to reflect MPK support on generic x86 platforms.
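
For anyone who wants to poke at the feature, here is a minimal userspace
sketch of the pkeys API that this series makes usable on AMD parts as
well. It assumes glibc >= 2.27 for the pkey_* wrappers and a CPU/kernel
with OSPKE enabled; illustrative only:

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				 MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
		/* Key whose PKRU bits deny writes; no page table change */
		int pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);

		if (buf == MAP_FAILED || pkey < 0)
			return 1;
		/* Tag the mapping; pages keep PROT_READ | PROT_WRITE */
		if (pkey_mprotect(buf, 4096, PROT_READ | PROT_WRITE, pkey))
			return 1;
		printf("read ok: %d\n", buf[0]); /* reads still allowed */
		/* buf[0] = 1; would now fault with SEGV_PKUERR */
		pkey_free(pkey);
		return 0;
	}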
---
v2:
  - Introduced intermediate config option X86_MEMORY_PROTECTION_KEYS to
    avoid user prompts. Kept X86_INTEL_MEMORY_PROTECTION_KEYS as is.
    Eventually, we will move to X86_MEMORY_PROTECTION_KEYS after a
    couple of kernel revisions.
  - Moved pkru data structures to kvm_vcpu_arch. Moved save/restore pkru
    to kvm_load_host_xsave_state/kvm_load_guest_xsave_state.

v1:
  https://lore.kernel.org/lkml/158880240546.11615.2219410169137148044.stgit@naples-babu.amd.com/

Babu Moger (3):
      arch/x86: Rename config X86_INTEL_MEMORY_PROTECTION_KEYS to generic x86
      KVM: x86: Move pkru save/restore to x86.c
      KVM: SVM: Add support for MPK feature on AMD


 Documentation/core-api/protection-keys.rst     |    3 ++-
 arch/x86/Kconfig                               |   11 +++++++++--
 arch/x86/include/asm/disabled-features.h       |    4 ++--
 arch/x86/include/asm/kvm_host.h                |    1 +
 arch/x86/include/asm/mmu.h                     |    2 +-
 arch/x86/include/asm/mmu_context.h             |    4 ++--
 arch/x86/include/asm/pgtable.h                 |    4 ++--
 arch/x86/include/asm/pgtable_types.h           |    2 +-
 arch/x86/include/asm/special_insns.h           |    2 +-
 arch/x86/include/uapi/asm/mman.h               |    2 +-
 arch/x86/kernel/cpu/common.c                   |    2 +-
 arch/x86/kvm/svm/svm.c                         |    4 ++++
 arch/x86/kvm/vmx/vmx.c                         |   18 ------------------
 arch/x86/kvm/x86.c                             |   20 ++++++++++++++++++++
 arch/x86/mm/Makefile                           |    2 +-
 arch/x86/mm/pkeys.c                            |    2 +-
 scripts/headers_install.sh                     |    2 +-
 tools/arch/x86/include/asm/disabled-features.h |    4 ++--
 18 files changed, 52 insertions(+), 37 deletions(-)

--


* [PATCH v2 1/3] arch/x86: Rename config X86_INTEL_MEMORY_PROTECTION_KEYS to generic x86
  2020-05-08 21:09 [PATCH v2 0/3] arch/x86: Enable MPK feature on AMD Babu Moger
@ 2020-05-08 21:09 ` Babu Moger
  2020-05-08 21:09 ` [PATCH v2 2/3] KVM: x86: Move pkru save/restore to x86.c Babu Moger
  2020-05-08 21:10 ` [PATCH v2 3/3] KVM: SVM: Add support for MPK feature on AMD Babu Moger
  2 siblings, 0 replies; 10+ messages in thread
From: Babu Moger @ 2020-05-08 21:09 UTC (permalink / raw)
  To: corbet, tglx, mingo, bp, hpa, pbonzini, sean.j.christopherson
  Cc: x86, vkuznets, wanpengli, jmattson, joro, dave.hansen, luto,
	peterz, mchehab+samsung, babu.moger, changbin.du, namit, bigeasy,
	yang.shi, asteinhauser, anshuman.khandual, jan.kiszka, akpm,
	steven.price, rppt, peterx, dan.j.williams, arjunroy, logang,
	thellstrom, aarcange, justin.he, robin.murphy, ira.weiny,
	keescook, jgross, andrew.cooper3, pawan.kumar.gupta, fenghua.yu,
	vineela.tummalapalli, yamada.masahiro, sam, acme, linux-doc,
	linux-kernel, kvm

AMD's next generation of EPYC processors support the MPK (Memory
Protection Keys) feature.

So, rename X86_INTEL_MEMORY_PROTECTION_KEYS to X86_MEMORY_PROTECTION_KEYS.

No functional changes.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 Documentation/core-api/protection-keys.rst     |    3 ++-
 arch/x86/Kconfig                               |   11 +++++++++--
 arch/x86/include/asm/disabled-features.h       |    4 ++--
 arch/x86/include/asm/mmu.h                     |    2 +-
 arch/x86/include/asm/mmu_context.h             |    4 ++--
 arch/x86/include/asm/pgtable.h                 |    4 ++--
 arch/x86/include/asm/pgtable_types.h           |    2 +-
 arch/x86/include/asm/special_insns.h           |    2 +-
 arch/x86/include/uapi/asm/mman.h               |    2 +-
 arch/x86/kernel/cpu/common.c                   |    2 +-
 arch/x86/mm/Makefile                           |    2 +-
 arch/x86/mm/pkeys.c                            |    2 +-
 scripts/headers_install.sh                     |    2 +-
 tools/arch/x86/include/asm/disabled-features.h |    4 ++--
 14 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/Documentation/core-api/protection-keys.rst b/Documentation/core-api/protection-keys.rst
index 49d9833af871..d25e89e53c59 100644
--- a/Documentation/core-api/protection-keys.rst
+++ b/Documentation/core-api/protection-keys.rst
@@ -6,7 +6,8 @@ Memory Protection Keys
 
 Memory Protection Keys for Userspace (PKU aka PKEYs) is a feature
 which is found on Intel's Skylake "Scalable Processor" Server CPUs.
-It will be avalable in future non-server parts.
+It will be available in future non-server parts. Also, AMD64
+Architecture Programmer’s Manual defines PKU feature in AMD processors.
 
 For anyone wishing to test or use this feature, it is available in
 Amazon's EC2 C5 instances and is known to work there using an Ubuntu
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 1197b5596d5a..b6f1686526eb 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1887,10 +1887,10 @@ config X86_UMIP
 	  results are dummy.
 
 config X86_INTEL_MEMORY_PROTECTION_KEYS
-	prompt "Intel Memory Protection Keys"
+	prompt "Memory Protection Keys"
 	def_bool y
 	# Note: only available in 64-bit mode
-	depends on CPU_SUP_INTEL && X86_64
+	depends on X86_64 && (CPU_SUP_INTEL || CPU_SUP_AMD)
 	select ARCH_USES_HIGH_VMA_FLAGS
 	select ARCH_HAS_PKEYS
 	---help---
@@ -1902,6 +1902,13 @@ config X86_INTEL_MEMORY_PROTECTION_KEYS
 
 	  If unsure, say y.
 
+config X86_MEMORY_PROTECTION_KEYS
+	# Note: This is an intermediate change to avoid config prompt to
+	# the users. Eventually, the option X86_INTEL_MEMORY_PROTECTION_KEYS
+	# should be changed to X86_MEMORY_PROTECTION_KEYS permanently after
+	# few kernel revisions.
+	def_bool X86_INTEL_MEMORY_PROTECTION_KEYS
+
 choice
 	prompt "TSX enable mode"
 	depends on CPU_SUP_INTEL
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index 4ea8584682f9..52dbdfed8043 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -36,13 +36,13 @@
 # define DISABLE_PCID		(1<<(X86_FEATURE_PCID & 31))
 #endif /* CONFIG_X86_64 */
 
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_X86_MEMORY_PROTECTION_KEYS
 # define DISABLE_PKU		0
 # define DISABLE_OSPKE		0
 #else
 # define DISABLE_PKU		(1<<(X86_FEATURE_PKU & 31))
 # define DISABLE_OSPKE		(1<<(X86_FEATURE_OSPKE & 31))
-#endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
+#endif /* CONFIG_X86_MEMORY_PROTECTION_KEYS */
 
 #ifdef CONFIG_X86_5LEVEL
 # define DISABLE_LA57	0
diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index bdeae9291e5c..351d22152709 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -42,7 +42,7 @@ typedef struct {
 	const struct vdso_image *vdso_image;	/* vdso image in use */
 
 	atomic_t perf_rdpmc_allowed;	/* nonzero if rdpmc is allowed */
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_X86_MEMORY_PROTECTION_KEYS
 	/*
 	 * One bit per protection key says whether userspace can
 	 * use it or not.  protected by mmap_sem.
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 4e55370e48e8..33f4a7ccac5e 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -118,7 +118,7 @@ static inline int init_new_context(struct task_struct *tsk,
 	mm->context.ctx_id = atomic64_inc_return(&last_mm_ctx_id);
 	atomic64_set(&mm->context.tlb_gen, 0);
 
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_X86_MEMORY_PROTECTION_KEYS
 	if (cpu_feature_enabled(X86_FEATURE_OSPKE)) {
 		/* pkey 0 is the default and allocated implicitly */
 		mm->context.pkey_allocation_map = 0x1;
@@ -163,7 +163,7 @@ do {						\
 static inline void arch_dup_pkeys(struct mm_struct *oldmm,
 				  struct mm_struct *mm)
 {
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_X86_MEMORY_PROTECTION_KEYS
 	if (!cpu_feature_enabled(X86_FEATURE_OSPKE))
 		return;
 
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 4d02e64af1b3..4265720d62c2 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1451,7 +1451,7 @@ static inline pmd_t pmd_swp_clear_uffd_wp(pmd_t pmd)
 #define PKRU_WD_BIT 0x2
 #define PKRU_BITS_PER_PKEY 2
 
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_X86_MEMORY_PROTECTION_KEYS
 extern u32 init_pkru_value;
 #else
 #define init_pkru_value	0
@@ -1475,7 +1475,7 @@ static inline bool __pkru_allows_write(u32 pkru, u16 pkey)
 
 static inline u16 pte_flags_pkey(unsigned long pte_flags)
 {
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_X86_MEMORY_PROTECTION_KEYS
 	/* ifdef to avoid doing 59-bit shift on 32-bit values */
 	return (pte_flags & _PAGE_PKEY_MASK) >> _PAGE_BIT_PKEY_BIT0;
 #else
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index b6606fe6cfdf..c61a1ff71d53 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -56,7 +56,7 @@
 #define _PAGE_PAT_LARGE (_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE)
 #define _PAGE_SPECIAL	(_AT(pteval_t, 1) << _PAGE_BIT_SPECIAL)
 #define _PAGE_CPA_TEST	(_AT(pteval_t, 1) << _PAGE_BIT_CPA_TEST)
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_X86_MEMORY_PROTECTION_KEYS
 #define _PAGE_PKEY_BIT0	(_AT(pteval_t, 1) << _PAGE_BIT_PKEY_BIT0)
 #define _PAGE_PKEY_BIT1	(_AT(pteval_t, 1) << _PAGE_BIT_PKEY_BIT1)
 #define _PAGE_PKEY_BIT2	(_AT(pteval_t, 1) << _PAGE_BIT_PKEY_BIT2)
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 6d37b8fcfc77..70eaae7e8f04 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -73,7 +73,7 @@ static inline unsigned long native_read_cr4(void)
 
 void native_write_cr4(unsigned long val);
 
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_X86_MEMORY_PROTECTION_KEYS
 static inline u32 rdpkru(void)
 {
 	u32 ecx = 0;
diff --git a/arch/x86/include/uapi/asm/mman.h b/arch/x86/include/uapi/asm/mman.h
index d4a8d0424bfb..d4da414a9de2 100644
--- a/arch/x86/include/uapi/asm/mman.h
+++ b/arch/x86/include/uapi/asm/mman.h
@@ -4,7 +4,7 @@
 
 #define MAP_32BIT	0x40		/* only give out 32bit addresses */
 
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_X86_MEMORY_PROTECTION_KEYS
 /*
  * Take the 4 protection key bits out of the vma->vm_flags
  * value and turn them in to the bits that we can put in
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index bed0cb83fe24..e5fb9955214c 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -448,7 +448,7 @@ static __always_inline void setup_pku(struct cpuinfo_x86 *c)
 	set_cpu_cap(c, X86_FEATURE_OSPKE);
 }
 
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_X86_MEMORY_PROTECTION_KEYS
 static __init int setup_disable_pku(char *arg)
 {
 	/*
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 98f7c6fa2eaa..17ebf12ba8ff 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -45,7 +45,7 @@ obj-$(CONFIG_AMD_NUMA)		+= amdtopology.o
 obj-$(CONFIG_ACPI_NUMA)		+= srat.o
 obj-$(CONFIG_NUMA_EMU)		+= numa_emulation.o
 
-obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)	+= pkeys.o
+obj-$(CONFIG_X86_MEMORY_PROTECTION_KEYS)	+= pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY)			+= kaslr.o
 obj-$(CONFIG_PAGE_TABLE_ISOLATION)		+= pti.o
 
diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
index 8873ed1438a9..a77497e8d58c 100644
--- a/arch/x86/mm/pkeys.c
+++ b/arch/x86/mm/pkeys.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Intel Memory Protection Keys management
+ * Memory Protection Keys management
  * Copyright (c) 2015, Intel Corporation.
  */
 #include <linux/debugfs.h>		/* debugfs_create_u32()		*/
diff --git a/scripts/headers_install.sh b/scripts/headers_install.sh
index a07668a5c36b..6e60e5362d3e 100755
--- a/scripts/headers_install.sh
+++ b/scripts/headers_install.sh
@@ -86,7 +86,7 @@ arch/sh/include/uapi/asm/sigcontext.h:CONFIG_CPU_SH5
 arch/sh/include/uapi/asm/stat.h:CONFIG_CPU_SH5
 arch/x86/include/uapi/asm/auxvec.h:CONFIG_IA32_EMULATION
 arch/x86/include/uapi/asm/auxvec.h:CONFIG_X86_64
-arch/x86/include/uapi/asm/mman.h:CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+arch/x86/include/uapi/asm/mman.h:CONFIG_X86_MEMORY_PROTECTION_KEYS
 include/uapi/asm-generic/fcntl.h:CONFIG_64BIT
 include/uapi/linux/atmdev.h:CONFIG_COMPAT
 include/uapi/linux/elfcore.h:CONFIG_BINFMT_ELF_FDPIC
diff --git a/tools/arch/x86/include/asm/disabled-features.h b/tools/arch/x86/include/asm/disabled-features.h
index 4ea8584682f9..52dbdfed8043 100644
--- a/tools/arch/x86/include/asm/disabled-features.h
+++ b/tools/arch/x86/include/asm/disabled-features.h
@@ -36,13 +36,13 @@
 # define DISABLE_PCID		(1<<(X86_FEATURE_PCID & 31))
 #endif /* CONFIG_X86_64 */
 
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_X86_MEMORY_PROTECTION_KEYS
 # define DISABLE_PKU		0
 # define DISABLE_OSPKE		0
 #else
 # define DISABLE_PKU		(1<<(X86_FEATURE_PKU & 31))
 # define DISABLE_OSPKE		(1<<(X86_FEATURE_OSPKE & 31))
-#endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
+#endif /* CONFIG_X86_MEMORY_PROTECTION_KEYS */
 
 #ifdef CONFIG_X86_5LEVEL
 # define DISABLE_LA57	0



* [PATCH v2 2/3] KVM: x86: Move pkru save/restore to x86.c
  2020-05-08 21:09 [PATCH v2 0/3] arch/x86: Enable MPK feature on AMD Babu Moger
  2020-05-08 21:09 ` [PATCH v2 1/3] arch/x86: Rename config X86_INTEL_MEMORY_PROTECTION_KEYS to generic x86 Babu Moger
@ 2020-05-08 21:09 ` Babu Moger
  2020-05-08 22:09   ` Jim Mattson
  2020-05-08 21:10 ` [PATCH v2 3/3] KVM: SVM: Add support for MPK feature on AMD Babu Moger
  2 siblings, 1 reply; 10+ messages in thread
From: Babu Moger @ 2020-05-08 21:09 UTC (permalink / raw)
  To: corbet, tglx, mingo, bp, hpa, pbonzini, sean.j.christopherson
  Cc: x86, vkuznets, wanpengli, jmattson, joro, dave.hansen, luto,
	peterz, mchehab+samsung, babu.moger, changbin.du, namit, bigeasy,
	yang.shi, asteinhauser, anshuman.khandual, jan.kiszka, akpm,
	steven.price, rppt, peterx, dan.j.williams, arjunroy, logang,
	thellstrom, aarcange, justin.he, robin.murphy, ira.weiny,
	keescook, jgross, andrew.cooper3, pawan.kumar.gupta, fenghua.yu,
	vineela.tummalapalli, yamada.masahiro, sam, acme, linux-doc,
	linux-kernel, kvm

The PKU feature is supported by both VMX and SVM, so we can safely
move the pkru state save/restore to common code. Also move the pkru
data structures to kvm_vcpu_arch.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 arch/x86/include/asm/kvm_host.h |    1 +
 arch/x86/kvm/vmx/vmx.c          |   18 ------------------
 arch/x86/kvm/x86.c              |   20 ++++++++++++++++++++
 3 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 42a2d0d3984a..afd8f3780ae0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -578,6 +578,7 @@ struct kvm_vcpu_arch {
 	unsigned long cr4;
 	unsigned long cr4_guest_owned_bits;
 	unsigned long cr8;
+	u32 host_pkru;
 	u32 pkru;
 	u32 hflags;
 	u64 efer;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c2c6335a998c..46898a476ba7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1372,7 +1372,6 @@ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	vmx_vcpu_pi_load(vcpu, cpu);
 
-	vmx->host_pkru = read_pkru();
 	vmx->host_debugctlmsr = get_debugctlmsr();
 }
 
@@ -6577,11 +6576,6 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	kvm_load_guest_xsave_state(vcpu);
 
-	if (static_cpu_has(X86_FEATURE_PKU) &&
-	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
-	    vcpu->arch.pkru != vmx->host_pkru)
-		__write_pkru(vcpu->arch.pkru);
-
 	pt_guest_enter(vmx);
 
 	if (vcpu_to_pmu(vcpu)->version)
@@ -6671,18 +6665,6 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	pt_guest_exit(vmx);
 
-	/*
-	 * eager fpu is enabled if PKEY is supported and CR4 is switched
-	 * back on host, so it is safe to read guest PKRU from current
-	 * XSAVE.
-	 */
-	if (static_cpu_has(X86_FEATURE_PKU) &&
-	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {
-		vcpu->arch.pkru = rdpkru();
-		if (vcpu->arch.pkru != vmx->host_pkru)
-			__write_pkru(vmx->host_pkru);
-	}
-
 	kvm_load_host_xsave_state(vcpu);
 
 	vmx->nested.nested_run_pending = 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c5835f9cb9ad..1b27e78fb3c1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -836,11 +836,28 @@ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
 		    vcpu->arch.ia32_xss != host_xss)
 			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
 	}
+
+	if (static_cpu_has(X86_FEATURE_PKU) &&
+	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
+	    vcpu->arch.pkru != vcpu->arch.host_pkru)
+		__write_pkru(vcpu->arch.pkru);
 }
 EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_state);
 
 void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
 {
+	/*
+	 * eager fpu is enabled if PKEY is supported and CR4 is switched
+	 * back on host, so it is safe to read guest PKRU from current
+	 * XSAVE.
+	 */
+	if (static_cpu_has(X86_FEATURE_PKU) &&
+	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {
+		vcpu->arch.pkru = rdpkru();
+		if (vcpu->arch.pkru != vcpu->arch.host_pkru)
+			__write_pkru(vcpu->arch.host_pkru);
+	}
+
 	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE)) {
 
 		if (vcpu->arch.xcr0 != host_xcr0)
@@ -3570,6 +3587,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	kvm_x86_ops.vcpu_load(vcpu, cpu);
 
+	/* Save host pkru register if supported */
+	vcpu->arch.host_pkru = read_pkru();
+
 	/* Apply any externally detected TSC adjustments (due to suspend) */
 	if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
 		adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment);



* [PATCH v2 3/3] KVM: SVM: Add support for MPK feature on AMD
  2020-05-08 21:09 [PATCH v2 0/3] arch/x86: Enable MPK feature on AMD Babu Moger
  2020-05-08 21:09 ` [PATCH v2 1/3] arch/x86: Rename config X86_INTEL_MEMORY_PROTECTION_KEYS to generic x86 Babu Moger
  2020-05-08 21:09 ` [PATCH v2 2/3] KVM: x86: Move pkru save/restore to x86.c Babu Moger
@ 2020-05-08 21:10 ` Babu Moger
  2020-05-08 21:55   ` Sean Christopherson
  2 siblings, 1 reply; 10+ messages in thread
From: Babu Moger @ 2020-05-08 21:10 UTC (permalink / raw)
  To: corbet, tglx, mingo, bp, hpa, pbonzini, sean.j.christopherson
  Cc: x86, vkuznets, wanpengli, jmattson, joro, dave.hansen, luto,
	peterz, mchehab+samsung, babu.moger, changbin.du, namit, bigeasy,
	yang.shi, asteinhauser, anshuman.khandual, jan.kiszka, akpm,
	steven.price, rppt, peterx, dan.j.williams, arjunroy, logang,
	thellstrom, aarcange, justin.he, robin.murphy, ira.weiny,
	keescook, jgross, andrew.cooper3, pawan.kumar.gupta, fenghua.yu,
	vineela.tummalapalli, yamada.masahiro, sam, acme, linux-doc,
	linux-kernel, kvm

The Memory Protection Key (MPK) feature provides a way for applications
to impose page-based data access protections (read/write, read-only or
no access), without requiring modification of page tables and subsequent
TLB invalidations when the application changes protection domains.
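
As a quick illustration of the mechanism, PKRU holds two bits per key
(constants as in arch/x86/include/asm/pgtable.h; the helper below is
made up for illustration and is not part of this patch):

	#define PKRU_AD_BIT		0x1	/* access-disable */
	#define PKRU_WD_BIT		0x2	/* write-disable */
	#define PKRU_BITS_PER_PKEY	2

	/* PKRU value that makes @pkey read-only; other keys untouched */
	static inline unsigned int pkru_deny_write(unsigned int pkru, int pkey)
	{
		return pkru | (PKRU_WD_BIT << (pkey * PKRU_BITS_PER_PKEY));
	}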

This feature is already available on Intel platforms. Now enable the
feature on AMD platforms.

AMD documentation for the MPK feature is available in "AMD64 Architecture
Programmer’s Manual Volume 2: System Programming, Pub. 24593 Rev. 3.34,
Section 5.6.6 Memory Protection Keys (MPK) Bit". The documentation can
be obtained at the link below.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 arch/x86/kvm/svm/svm.c |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 2f379bacbb26..37fb41ad9149 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -818,6 +818,10 @@ static __init void svm_set_cpu_caps(void)
 	if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
 	    boot_cpu_has(X86_FEATURE_AMD_SSBD))
 		kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
+
+	/* PKU is not yet implemented for shadow paging. */
+	if (npt_enabled && boot_cpu_has(X86_FEATURE_OSPKE))
+		kvm_cpu_cap_check_and_set(X86_FEATURE_PKU);
 }
 
 static __init int svm_hardware_setup(void)



* Re: [PATCH v2 3/3] KVM: SVM: Add support for MPK feature on AMD
  2020-05-08 21:10 ` [PATCH v2 3/3] KVM: SVM: Add support for MPK feature on AMD Babu Moger
@ 2020-05-08 21:55   ` Sean Christopherson
  2020-05-08 22:02     ` Babu Moger
  0 siblings, 1 reply; 10+ messages in thread
From: Sean Christopherson @ 2020-05-08 21:55 UTC (permalink / raw)
  To: Babu Moger
  Cc: corbet, tglx, mingo, bp, hpa, pbonzini, x86, vkuznets, wanpengli,
	jmattson, joro, dave.hansen, luto, peterz, mchehab+samsung,
	changbin.du, namit, bigeasy, yang.shi, asteinhauser,
	anshuman.khandual, jan.kiszka, akpm, steven.price, rppt, peterx,
	dan.j.williams, arjunroy, logang, thellstrom, aarcange,
	justin.he, robin.murphy, ira.weiny, keescook, jgross,
	andrew.cooper3, pawan.kumar.gupta, fenghua.yu,
	vineela.tummalapalli, yamada.masahiro, sam, acme, linux-doc,
	linux-kernel, kvm

On Fri, May 08, 2020 at 04:10:03PM -0500, Babu Moger wrote:
> The Memory Protection Key (MPK) feature provides a way for applications
> to impose page-based data access protections (read/write, read-only or
> no access), without requiring modification of page tables and subsequent
> TLB invalidations when the application changes protection domains.
> 
> This feature is already available on Intel platforms. Now enable the
> feature on AMD platforms.
> 
> AMD documentation for the MPK feature is available in "AMD64 Architecture
> Programmer’s Manual Volume 2: System Programming, Pub. 24593 Rev. 3.34,
> Section 5.6.6 Memory Protection Keys (MPK) Bit". The documentation can
> be obtained at the link below.
> 
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
>  arch/x86/kvm/svm/svm.c |    4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 2f379bacbb26..37fb41ad9149 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -818,6 +818,10 @@ static __init void svm_set_cpu_caps(void)
>  	if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
>  	    boot_cpu_has(X86_FEATURE_AMD_SSBD))
>  		kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
> +
> +	/* PKU is not yet implemented for shadow paging. */
> +	if (npt_enabled && boot_cpu_has(X86_FEATURE_OSPKE))
> +		kvm_cpu_cap_check_and_set(X86_FEATURE_PKU);

This can actually be done in common code as well since both VMX and SVM
call kvm_set_cpu_caps() after kvm_configure_mmu(), i.e. key off of
tdp_enabled.
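
I.e. something like this in kvm_set_cpu_caps(), since common code
already sets PKU in the CPUID.7.0.ECX mask (completely untested
sketch):

	/* PKU is not yet implemented for shadow paging */
	if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
		kvm_cpu_cap_clear(X86_FEATURE_PKU);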

>  }
>  
>  static __init int svm_hardware_setup(void)
> 


* Re: [PATCH v2 3/3] KVM: SVM: Add support for MPK feature on AMD
  2020-05-08 21:55   ` Sean Christopherson
@ 2020-05-08 22:02     ` Babu Moger
  0 siblings, 0 replies; 10+ messages in thread
From: Babu Moger @ 2020-05-08 22:02 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: corbet, tglx, mingo, bp, hpa, pbonzini, x86, vkuznets, wanpengli,
	jmattson, joro, dave.hansen, luto, peterz, mchehab+samsung,
	changbin.du, namit, bigeasy, yang.shi, asteinhauser,
	anshuman.khandual, jan.kiszka, akpm, steven.price, rppt, peterx,
	dan.j.williams, arjunroy, logang, thellstrom, aarcange,
	justin.he, robin.murphy, ira.weiny, keescook, jgross,
	andrew.cooper3, pawan.kumar.gupta, fenghua.yu,
	vineela.tummalapalli, yamada.masahiro, sam, acme, linux-doc,
	linux-kernel, kvm



On 5/8/20 4:55 PM, Sean Christopherson wrote:
> On Fri, May 08, 2020 at 04:10:03PM -0500, Babu Moger wrote:
>> The Memory Protection Key (MPK) feature provides a way for applications
>> to impose page-based data access protections (read/write, read-only or
>> no access), without requiring modification of page tables and subsequent
>> TLB invalidations when the application changes protection domains.
>>
>> This feature is already available on Intel platforms. Now enable the
>> feature on AMD platforms.
>>
>> AMD documentation for the MPK feature is available in "AMD64 Architecture
>> Programmer’s Manual Volume 2: System Programming, Pub. 24593 Rev. 3.34,
>> Section 5.6.6 Memory Protection Keys (MPK) Bit". The documentation can
>> be obtained at the link below.
>>
>> Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
>>  arch/x86/kvm/svm/svm.c |    4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
>> index 2f379bacbb26..37fb41ad9149 100644
>> --- a/arch/x86/kvm/svm/svm.c
>> +++ b/arch/x86/kvm/svm/svm.c
>> @@ -818,6 +818,10 @@ static __init void svm_set_cpu_caps(void)
>>  	if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
>>  	    boot_cpu_has(X86_FEATURE_AMD_SSBD))
>>  		kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
>> +
>> +	/* PKU is not yet implemented for shadow paging. */
>> +	if (npt_enabled && boot_cpu_has(X86_FEATURE_OSPKE))
>> +		kvm_cpu_cap_check_and_set(X86_FEATURE_PKU);
> 
> This can actually be done in common code as well since both VMX and SVM
> call kvm_set_cpu_caps() after kvm_configure_mmu(), i.e. key off of
> tdp_enabled.

Ok, sure. Will change it in the next revision. Thanks.
> 
>>  }
>>  
>>  static __init int svm_hardware_setup(void)
>>


* Re: [PATCH v2 2/3] KVM: x86: Move pkru save/restore to x86.c
  2020-05-08 21:09 ` [PATCH v2 2/3] KVM: x86: Move pkru save/restore to x86.c Babu Moger
@ 2020-05-08 22:09   ` Jim Mattson
  2020-05-09 12:59     ` Paolo Bonzini
  0 siblings, 1 reply; 10+ messages in thread
From: Jim Mattson @ 2020-05-08 22:09 UTC (permalink / raw)
  To: Babu Moger
  Cc: Jonathan Corbet, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H . Peter Anvin, Paolo Bonzini, Sean Christopherson,
	the arch/x86 maintainers, Vitaly Kuznetsov, Wanpeng Li,
	Joerg Roedel, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	mchehab+samsung, changbin.du, Nadav Amit,
	Sebastian Andrzej Siewior, yang.shi, asteinhauser,
	anshuman.khandual, Jan Kiszka, Andrew Morton, steven.price, rppt,
	peterx, Dan Williams, arjunroy, logang, Thomas Hellstrom,
	Andrea Arcangeli, justin.he, robin.murphy, ira.weiny, Kees Cook,
	Juergen Gross, Andrew Cooper, pawan.kumar.gupta, Yu, Fenghua,
	vineela.tummalapalli, yamada.masahiro, sam, acme, linux-doc,
	LKML, kvm list

On Fri, May 8, 2020 at 2:10 PM Babu Moger <babu.moger@amd.com> wrote:
>
> The PKU feature is supported by both VMX and SVM, so we can safely
> move the pkru state save/restore to common code. Also move the pkru
> data structures to kvm_vcpu_arch.
>
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
>  arch/x86/include/asm/kvm_host.h |    1 +
>  arch/x86/kvm/vmx/vmx.c          |   18 ------------------
>  arch/x86/kvm/x86.c              |   20 ++++++++++++++++++++
>  3 files changed, 21 insertions(+), 18 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 42a2d0d3984a..afd8f3780ae0 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -578,6 +578,7 @@ struct kvm_vcpu_arch {
>         unsigned long cr4;
>         unsigned long cr4_guest_owned_bits;
>         unsigned long cr8;
> +       u32 host_pkru;
>         u32 pkru;
>         u32 hflags;
>         u64 efer;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index c2c6335a998c..46898a476ba7 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1372,7 +1372,6 @@ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>
>         vmx_vcpu_pi_load(vcpu, cpu);
>
> -       vmx->host_pkru = read_pkru();
>         vmx->host_debugctlmsr = get_debugctlmsr();
>  }
>
> @@ -6577,11 +6576,6 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
>
>         kvm_load_guest_xsave_state(vcpu);
>
> -       if (static_cpu_has(X86_FEATURE_PKU) &&
> -           kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
> -           vcpu->arch.pkru != vmx->host_pkru)
> -               __write_pkru(vcpu->arch.pkru);
> -
>         pt_guest_enter(vmx);
>
>         if (vcpu_to_pmu(vcpu)->version)
> @@ -6671,18 +6665,6 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
>
>         pt_guest_exit(vmx);
>
> -       /*
> -        * eager fpu is enabled if PKEY is supported and CR4 is switched
> -        * back on host, so it is safe to read guest PKRU from current
> -        * XSAVE.
> -        */
> -       if (static_cpu_has(X86_FEATURE_PKU) &&
> -           kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {
> -               vcpu->arch.pkru = rdpkru();
> -               if (vcpu->arch.pkru != vmx->host_pkru)
> -                       __write_pkru(vmx->host_pkru);
> -       }
> -
>         kvm_load_host_xsave_state(vcpu);
>
>         vmx->nested.nested_run_pending = 0;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index c5835f9cb9ad..1b27e78fb3c1 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -836,11 +836,28 @@ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
>                     vcpu->arch.ia32_xss != host_xss)
>                         wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
>         }
> +
> +       if (static_cpu_has(X86_FEATURE_PKU) &&
> +           kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
> +           vcpu->arch.pkru != vcpu->arch.host_pkru)
> +               __write_pkru(vcpu->arch.pkru);

This doesn't seem quite right to me. Though rdpkru and wrpkru are
contingent upon CR4.PKE, the PKRU resource isn't. It can be read with
XSAVE and written with XRSTOR. So, if we don't set the guest PKRU
value here, the guest can read the host value, which seems dodgy at
best.

Perhaps the second conjunct should be: (kvm_read_cr4_bits(vcpu,
X86_CR4_PKE) || (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU)).
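
That is, the load side would become something like (untested sketch of
the suggestion above):

	if (static_cpu_has(X86_FEATURE_PKU) &&
	    (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
	     (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU)) &&
	    vcpu->arch.pkru != vcpu->arch.host_pkru)
		__write_pkru(vcpu->arch.pkru);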

>  }
>  EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_state);
>
>  void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
>  {
> +       /*
> +        * eager fpu is enabled if PKEY is supported and CR4 is switched
> +        * back on host, so it is safe to read guest PKRU from current
> +        * XSAVE.
> +        */

I don't understand the relevance of this comment to the code below.

> +       if (static_cpu_has(X86_FEATURE_PKU) &&
> +           kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {
> +               vcpu->arch.pkru = rdpkru();
> +               if (vcpu->arch.pkru != vcpu->arch.host_pkru)
> +                       __write_pkru(vcpu->arch.host_pkru);
> +       }
> +

Same concern as above, but perhaps worse in this instance, since a
guest with CR4.PKE clear could potentially use XRSTOR to change the
host PKRU value.
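
The same guard would apply on the restore side (untested sketch):

	if (static_cpu_has(X86_FEATURE_PKU) &&
	    (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
	     (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU))) {
		vcpu->arch.pkru = rdpkru();
		if (vcpu->arch.pkru != vcpu->arch.host_pkru)
			__write_pkru(vcpu->arch.host_pkru);
	}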

>         if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE)) {
>
>                 if (vcpu->arch.xcr0 != host_xcr0)
> @@ -3570,6 +3587,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>
>         kvm_x86_ops.vcpu_load(vcpu, cpu);
>
> +       /* Save host pkru register if supported */
> +       vcpu->arch.host_pkru = read_pkru();
> +
>         /* Apply any externally detected TSC adjustments (due to suspend) */
>         if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
>                 adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment);
>


* Re: [PATCH v2 2/3] KVM: x86: Move pkru save/restore to x86.c
  2020-05-08 22:09   ` Jim Mattson
@ 2020-05-09 12:59     ` Paolo Bonzini
  2020-05-11 13:49       ` Babu Moger
  0 siblings, 1 reply; 10+ messages in thread
From: Paolo Bonzini @ 2020-05-09 12:59 UTC (permalink / raw)
  To: Jim Mattson, Babu Moger
  Cc: Jonathan Corbet, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H . Peter Anvin, Sean Christopherson, the arch/x86 maintainers,
	Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, mchehab+samsung, changbin.du,
	Nadav Amit, Sebastian Andrzej Siewior, yang.shi, asteinhauser,
	anshuman.khandual, Jan Kiszka, Andrew Morton, steven.price, rppt,
	peterx, Dan Williams, arjunroy, logang, Thomas Hellstrom,
	Andrea Arcangeli, justin.he, robin.murphy, ira.weiny, Kees Cook,
	Juergen Gross, Andrew Cooper, pawan.kumar.gupta, Yu, Fenghua,
	vineela.tummalapalli, yamada.masahiro, sam, acme, linux-doc,
	LKML, kvm list

On 09/05/20 00:09, Jim Mattson wrote:
>> +       if (static_cpu_has(X86_FEATURE_PKU) &&
>> +           kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
>> +           vcpu->arch.pkru != vcpu->arch.host_pkru)
>> +               __write_pkru(vcpu->arch.pkru);
> This doesn't seem quite right to me. Though rdpkru and wrpkru are
> contingent upon CR4.PKE, the PKRU resource isn't. It can be read with
> XSAVE and written with XRSTOR. So, if we don't set the guest PKRU
> value here, the guest can read the host value, which seems dodgy at
> best.
> 
> Perhaps the second conjunct should be: (kvm_read_cr4_bits(vcpu,
> X86_CR4_PKE) || (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU)).

You're right.  The bug was preexistent, but we should fix it in 5.7 and
stable as well.

>>  }
>>  EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_state);
>>
>>  void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
>>  {
>> +       /*
>> +        * eager fpu is enabled if PKEY is supported and CR4 is switched
>> +        * back on host, so it is safe to read guest PKRU from current
>> +        * XSAVE.
>> +        */
> I don't understand the relevance of this comment to the code below.
> 

It's probably stale.

Paolo



* Re: [PATCH v2 2/3] KVM: x86: Move pkru save/restore to x86.c
  2020-05-09 12:59     ` Paolo Bonzini
@ 2020-05-11 13:49       ` Babu Moger
  2020-05-11 13:57         ` Paolo Bonzini
  0 siblings, 1 reply; 10+ messages in thread
From: Babu Moger @ 2020-05-11 13:49 UTC (permalink / raw)
  To: Paolo Bonzini, Jim Mattson
  Cc: Jonathan Corbet, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H . Peter Anvin, Sean Christopherson, the arch/x86 maintainers,
	Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, mchehab+samsung, changbin.du,
	Nadav Amit, Sebastian Andrzej Siewior, yang.shi, asteinhauser,
	anshuman.khandual, Jan Kiszka, Andrew Morton, steven.price, rppt,
	peterx, Dan Williams, arjunroy, logang, Thomas Hellstrom,
	Andrea Arcangeli, justin.he, robin.murphy, ira.weiny, Kees Cook,
	Juergen Gross, Andrew Cooper, pawan.kumar.gupta, Yu, Fenghua,
	vineela.tummalapalli, yamada.masahiro, sam, acme, linux-doc,
	LKML, kvm list



On 5/9/20 7:59 AM, Paolo Bonzini wrote:
> On 09/05/20 00:09, Jim Mattson wrote:
>>> +       if (static_cpu_has(X86_FEATURE_PKU) &&
>>> +           kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
>>> +           vcpu->arch.pkru != vcpu->arch.host_pkru)
>>> +               __write_pkru(vcpu->arch.pkru);
>> This doesn't seem quite right to me. Though rdpkru and wrpkru are
>> contingent upon CR4.PKE, the PKRU resource isn't. It can be read with
>> XSAVE and written with XRSTOR. So, if we don't set the guest PKRU
>> value here, the guest can read the host value, which seems dodgy at
>> best.
>>
>> Perhaps the second conjunct should be: (kvm_read_cr4_bits(vcpu,
>> X86_CR4_PKE) || (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU)).

Thanks Jim.
> 
> You're right.  The bug was preexistent, but we should fix it in 5.7 and
> stable as well.
Paolo, do you want me to send this fix separately? Or shall I send a v3
with just this fix added? Thanks

> 
>>>  }
>>>  EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_state);
>>>
>>>  void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
>>>  {
>>> +       /*
>>> +        * eager fpu is enabled if PKEY is supported and CR4 is switched
>>> +        * back on host, so it is safe to read guest PKRU from current
>>> +        * XSAVE.
>>> +        */
>> I don't understand the relevance of this comment to the code below.
>>
> 
> It's probably stale.

Will remove it.
> 
> Paolo
> 


* Re: [PATCH v2 2/3] KVM: x86: Move pkru save/restore to x86.c
  2020-05-11 13:49       ` Babu Moger
@ 2020-05-11 13:57         ` Paolo Bonzini
  0 siblings, 0 replies; 10+ messages in thread
From: Paolo Bonzini @ 2020-05-11 13:57 UTC (permalink / raw)
  To: Babu Moger, Jim Mattson
  Cc: Jonathan Corbet, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H . Peter Anvin, Sean Christopherson, the arch/x86 maintainers,
	Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, mchehab+samsung, changbin.du,
	Nadav Amit, Sebastian Andrzej Siewior, yang.shi, asteinhauser,
	anshuman.khandual, Jan Kiszka, Andrew Morton, steven.price, rppt,
	peterx, Dan Williams, arjunroy, logang, Thomas Hellstrom,
	Andrea Arcangeli, justin.he, robin.murphy, ira.weiny, Kees Cook,
	Juergen Gross, Andrew Cooper, pawan.kumar.gupta, Yu, Fenghua,
	vineela.tummalapalli, yamada.masahiro, sam, acme, linux-doc,
	LKML, kvm list

On 11/05/20 15:49, Babu Moger wrote:
>> You're right.  The bug was preexistent, but we should fix it in 5.7 and
>> stable as well.
> Paolo, do you want me to send this fix separately? Or shall I send a v3
> with just this fix added? Thanks
> 

Yes, please do.

Paolo


