* [PATCH V2 0/5] KVM: X86: permission_fault() for SMAP
@ 2022-03-11  7:03 Lai Jiangshan
  2022-03-11  7:03 ` [PATCH V2 1/5] KVM: X86: Change the type of access u32 to u64 Lai Jiangshan
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Lai Jiangshan @ 2022-03-11  7:03 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini, Sean Christopherson; +Cc: Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

Some changes in permission_fault() for SMAP.  It also reduces calls
to the two callbacks that get the CPL and RFLAGS in some cases, but
it has no measurable performance change in tests (kernel build in
guest).

Changed from V1:
	Fold the implicit access into @access as Sean suggested.

	Use my official email address (Ant Group).  The work is backed
	by my company, and I had incorrectly understood that
	XXX@linux.alibaba.com was the only portal for opensource work
	in the corporate group.

[V1]: https://lore.kernel.org/kvm/20211207095039.53166-1-jiangshanlai@gmail.com/

Lai Jiangshan (6):
  KVM: X86: Change the type of access u32 to u64
  KVM: X86: Fix comments in update_permission_bitmask
  KVM: X86: Rename variable smap to not_smap in permission_fault()
  KVM: X86: Handle implicit supervisor access with SMAP
  KVM: X86: Only get rflags when needed in permission_fault()
  KVM: X86: Propagate the nested page fault info to the guest

 arch/x86/include/asm/kvm_host.h |  6 +++-
 arch/x86/kvm/kvm_emulate.h      |  3 +-
 arch/x86/kvm/mmu.h              | 54 ++++++++++++++++++++++-----------
 arch/x86/kvm/mmu/mmu.c          | 10 +++---
 arch/x86/kvm/mmu/paging_tmpl.h  | 16 ++++++----
 arch/x86/kvm/svm/nested.c       | 10 ++----
 arch/x86/kvm/vmx/nested.c       | 11 +++++++
 arch/x86/kvm/x86.c              | 32 ++++++++++---------
 8 files changed, 89 insertions(+), 53 deletions(-)

-- 
2.19.1.6.gb485710b


* [PATCH V2 1/5] KVM: X86: Change the type of access u32 to u64
  2022-03-11  7:03 [PATCH V2 0/5] KVM: X86: permission_fault() for SMAP Lai Jiangshan
@ 2022-03-11  7:03 ` Lai Jiangshan
  2022-03-11  7:03 ` [PATCH V2 2/5] KVM: X86: Fix comments in update_permission_bitmask Lai Jiangshan
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Lai Jiangshan @ 2022-03-11  7:03 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini, Sean Christopherson
  Cc: Lai Jiangshan, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

Change the type of @access from u32 to u64 for FNAME(walk_addr) and
->gva_to_gpa().

The kinds of accesses are usually combinations of UWX, and VMX/SVM's
nested paging adds a new factor of access: is it an access for a guest
page table or for a final guest physical address?

And SMAP adds another factor for supervisor accesses: is the access
explicit or implicit?

So @access in FNAME(walk_addr) and ->gva_to_gpa() should include all
of this information to do the walk.

Although @access (u32) has enough bits to encode all the kinds, this
patch extends it to u64:
	o The extra bits are placed in the higher 32 bits, so that the
	  traditional access mode (UWX) can easily be obtained by
	  converting it to u32 (see the sketch below).
	o Reuse the values for the access kinds defined by SVM's nested
	  paging (PFERR_GUEST_FINAL_MASK and PFERR_GUEST_PAGE_MASK) as
	  @error_code in kvm_handle_page_fault().
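
For illustration, a minimal user-space sketch of the encoding (not
part of the patch; the PFERR_* values are assumed to mirror
arch/x86/include/asm/kvm_host.h):

	#include <stdint.h>
	#include <stdio.h>

	#define PFERR_WRITE_MASK	(1ULL << 1)
	#define PFERR_USER_MASK		(1ULL << 2)
	#define PFERR_GUEST_FINAL_MASK	(1ULL << 32)

	int main(void)
	{
		/* A user-mode write during the final-GPA part of a nested walk. */
		uint64_t access = PFERR_USER_MASK | PFERR_WRITE_MASK |
				  PFERR_GUEST_FINAL_MASK;

		/* Converting to u32 strips the nested paging bits. */
		uint32_t pfec = access;

		/* Prints "access=0x100000006 pfec=0x6". */
		printf("access=%#llx pfec=%#x\n",
		       (unsigned long long)access, pfec);
		return 0;
	}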

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu.h              |  8 +++++---
 arch/x86/kvm/mmu/mmu.c          |  2 +-
 arch/x86/kvm/mmu/paging_tmpl.h  |  8 ++++----
 arch/x86/kvm/x86.c              | 24 ++++++++++++------------
 5 files changed, 23 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c45ab8b5c37f..edffcf7f9c2d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -429,7 +429,7 @@ struct kvm_mmu {
 	void (*inject_page_fault)(struct kvm_vcpu *vcpu,
 				  struct x86_exception *fault);
 	gpa_t (*gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-			    gpa_t gva_or_gpa, u32 access,
+			    gpa_t gva_or_gpa, u64 access,
 			    struct x86_exception *exception);
 	int (*sync_page)(struct kvm_vcpu *vcpu,
 			 struct kvm_mmu_page *sp);
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index bf8dbc4bb12a..74efeaefa8f8 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -214,8 +214,10 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
  */
 static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 				  unsigned pte_access, unsigned pte_pkey,
-				  unsigned pfec)
+				  u64 access)
 {
+	/* strip nested paging fault error codes */
+	unsigned int pfec = access;
 	int cpl = static_call(kvm_x86_get_cpl)(vcpu);
 	unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
 
@@ -317,12 +319,12 @@ static inline void kvm_update_page_stats(struct kvm *kvm, int level, int count)
 	atomic64_add(count, &kvm->stat.pages[level - 1]);
 }
 
-gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
+gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u64 access,
 			   struct x86_exception *exception);
 
 static inline gpa_t kvm_translate_gpa(struct kvm_vcpu *vcpu,
 				      struct kvm_mmu *mmu,
-				      gpa_t gpa, u32 access,
+				      gpa_t gpa, u64 access,
 				      struct x86_exception *exception)
 {
 	if (mmu != &vcpu->arch.nested_mmu)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index bd3625a875ef..c12133c3cf00 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3703,7 +3703,7 @@ void kvm_mmu_sync_prev_roots(struct kvm_vcpu *vcpu)
 }
 
 static gpa_t nonpaging_gva_to_gpa(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-				  gpa_t vaddr, u32 access,
+				  gpa_t vaddr, u64 access,
 				  struct x86_exception *exception)
 {
 	if (exception)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 252c77805eb9..8621188b46df 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -339,7 +339,7 @@ static inline bool FNAME(is_last_gpte)(struct kvm_mmu *mmu,
  */
 static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 				    struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-				    gpa_t addr, u32 access)
+				    gpa_t addr, u64 access)
 {
 	int ret;
 	pt_element_t pte;
@@ -347,7 +347,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	gfn_t table_gfn;
 	u64 pt_access, pte_access;
 	unsigned index, accessed_dirty, pte_pkey;
-	unsigned nested_access;
+	u64 nested_access;
 	gpa_t pte_gpa;
 	bool have_ad;
 	int offset;
@@ -540,7 +540,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 }
 
 static int FNAME(walk_addr)(struct guest_walker *walker,
-			    struct kvm_vcpu *vcpu, gpa_t addr, u32 access)
+			    struct kvm_vcpu *vcpu, gpa_t addr, u64 access)
 {
 	return FNAME(walk_addr_generic)(walker, vcpu, vcpu->arch.mmu, addr,
 					access);
@@ -988,7 +988,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 
 /* Note, @addr is a GPA when gva_to_gpa() translates an L2 GPA to an L1 GPA. */
 static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-			       gpa_t addr, u32 access,
+			       gpa_t addr, u64 access,
 			       struct x86_exception *exception)
 {
 	struct guest_walker walker;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cf17af4d6904..c85e48dc8310 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6705,7 +6705,7 @@ void kvm_get_segment(struct kvm_vcpu *vcpu,
 	static_call(kvm_x86_get_segment)(vcpu, var, seg);
 }
 
-gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
+gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u64 access,
 			   struct x86_exception *exception)
 {
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
@@ -6725,7 +6725,7 @@ gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva,
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 
-	u32 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_read);
@@ -6735,7 +6735,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_read);
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 
-	u32 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	access |= PFERR_FETCH_MASK;
 	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
@@ -6745,7 +6745,7 @@ gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva,
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 
-	u32 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	access |= PFERR_WRITE_MASK;
 	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
@@ -6761,7 +6761,7 @@ gpa_t kvm_mmu_gva_to_gpa_system(struct kvm_vcpu *vcpu, gva_t gva,
 }
 
 static int kvm_read_guest_virt_helper(gva_t addr, void *val, unsigned int bytes,
-				      struct kvm_vcpu *vcpu, u32 access,
+				      struct kvm_vcpu *vcpu, u64 access,
 				      struct x86_exception *exception)
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
@@ -6798,7 +6798,7 @@ static int kvm_fetch_guest_virt(struct x86_emulate_ctxt *ctxt,
 {
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
-	u32 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	unsigned offset;
 	int ret;
 
@@ -6823,7 +6823,7 @@ int kvm_read_guest_virt(struct kvm_vcpu *vcpu,
 			       gva_t addr, void *val, unsigned int bytes,
 			       struct x86_exception *exception)
 {
-	u32 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
 
 	/*
 	 * FIXME: this should call handle_emulation_failure if X86EMUL_IO_NEEDED
@@ -6842,7 +6842,7 @@ static int emulator_read_std(struct x86_emulate_ctxt *ctxt,
 			     struct x86_exception *exception, bool system)
 {
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
-	u32 access = 0;
+	u64 access = 0;
 
 	if (!system && static_call(kvm_x86_get_cpl)(vcpu) == 3)
 		access |= PFERR_USER_MASK;
@@ -6860,7 +6860,7 @@ static int kvm_read_guest_phys_system(struct x86_emulate_ctxt *ctxt,
 }
 
 static int kvm_write_guest_virt_helper(gva_t addr, void *val, unsigned int bytes,
-				      struct kvm_vcpu *vcpu, u32 access,
+				      struct kvm_vcpu *vcpu, u64 access,
 				      struct x86_exception *exception)
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
@@ -6894,7 +6894,7 @@ static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *v
 			      bool system)
 {
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
-	u32 access = PFERR_WRITE_MASK;
+	u64 access = PFERR_WRITE_MASK;
 
 	if (!system && static_call(kvm_x86_get_cpl)(vcpu) == 3)
 		access |= PFERR_USER_MASK;
@@ -6963,7 +6963,7 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 				bool write)
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
-	u32 access = ((static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0)
+	u64 access = ((static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0)
 		| (write ? PFERR_WRITE_MASK : 0);
 
 	/*
@@ -12558,7 +12558,7 @@ void kvm_fixup_and_inject_pf_error(struct kvm_vcpu *vcpu, gva_t gva, u16 error_c
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 	struct x86_exception fault;
-	u32 access = error_code &
+	u64 access = error_code &
 		(PFERR_WRITE_MASK | PFERR_FETCH_MASK | PFERR_USER_MASK);
 
 	if (!(error_code & PFERR_PRESENT_MASK) ||
-- 
2.19.1.6.gb485710b


* [PATCH V2 2/5] KVM: X86: Fix comments in update_permission_bitmask
  2022-03-11  7:03 [PATCH V2 0/5] KVM: X86: permission_fault() for SMAP Lai Jiangshan
  2022-03-11  7:03 ` [PATCH V2 1/5] KVM: X86: Change the type of access u32 to u64 Lai Jiangshan
@ 2022-03-11  7:03 ` Lai Jiangshan
  2022-03-11  7:03 ` [PATCH V2 3/5] KVM: X86: Rename variable smap to not_smap in permission_fault() Lai Jiangshan
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Lai Jiangshan @ 2022-03-11  7:03 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini, Sean Christopherson
  Cc: Lai Jiangshan, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

Commit 09f037aa48f3 ("KVM: MMU: speedup update_permission_bitmask")
refactored the code of update_permission_bitmask() and changed the
comments.  It added a condition to the list to match the new code, so
the numbering and order of the conditions in the comments should be
updated too.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c12133c3cf00..781f90480d00 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4583,8 +4583,8 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
 			 *   - Page fault in kernel mode
 			 *   - if CPL = 3 or X86_EFLAGS_AC is clear
 			 *
-			 * Here, we cover the first three conditions.
-			 * The fourth is computed dynamically in permission_fault();
+			 * Here, we cover the first four conditions.
+			 * The fifth is computed dynamically in permission_fault();
 			 * PFERR_RSVD_MASK bit will be set in PFEC if the access is
 			 * *not* subject to SMAP restrictions.
 			 */
-- 
2.19.1.6.gb485710b


* [PATCH V2 3/5] KVM: X86: Rename variable smap to not_smap in permission_fault()
  2022-03-11  7:03 [PATCH V2 0/5] KVM: X86: permission_fault() for SMAP Lai Jiangshan
  2022-03-11  7:03 ` [PATCH V2 1/5] KVM: X86: Change the type of access u32 to u64 Lai Jiangshan
  2022-03-11  7:03 ` [PATCH V2 2/5] KVM: X86: Fix comments in update_permission_bitmask Lai Jiangshan
@ 2022-03-11  7:03 ` Lai Jiangshan
  2022-03-11  7:03 ` [PATCH V2 4/5] KVM: X86: Handle implicit supervisor access with SMAP Lai Jiangshan
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Lai Jiangshan @ 2022-03-11  7:03 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini, Sean Christopherson
  Cc: Lai Jiangshan, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

The comment above the variable says the bit is set when SMAP is
overridden, with the same meaning as in update_permission_bitmask():
the access is not subject to SMAP restrictions.

Rename it to reflect the negated meaning and improve readability.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 74efeaefa8f8..24d94f6d378d 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -234,9 +234,9 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	 * but it will be one in index if SMAP checks are being overridden.
 	 * It is important to keep this branchless.
 	 */
-	unsigned long smap = (cpl - 3) & (rflags & X86_EFLAGS_AC);
+	unsigned long not_smap = (cpl - 3) & (rflags & X86_EFLAGS_AC);
 	int index = (pfec >> 1) +
-		    (smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
+		    (not_smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
 	bool fault = (mmu->permissions[index] >> pte_access) & 1;
 	u32 errcode = PFERR_PRESENT_MASK;
 
-- 
2.19.1.6.gb485710b


* [PATCH V2 4/5] KVM: X86: Handle implicit supervisor access with SMAP
  2022-03-11  7:03 [PATCH V2 0/5] KVM: X86: permission_fault() for SMAP Lai Jiangshan
                   ` (2 preceding siblings ...)
  2022-03-11  7:03 ` [PATCH V2 3/5] KVM: X86: Rename variable smap to not_smap in permission_fault() Lai Jiangshan
@ 2022-03-11  7:03 ` Lai Jiangshan
  2022-03-15 21:04   ` Paolo Bonzini
  2022-03-11  7:03 ` [PATCH V2 5/5] KVM: X86: Only get rflags when needed in permission_fault() Lai Jiangshan
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 10+ messages in thread
From: Lai Jiangshan @ 2022-03-11  7:03 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini, Sean Christopherson
  Cc: Lai Jiangshan, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

There are two kinds of implicit supervisor access (e.g. accesses to
the GDT, IDT, LDT or TSS, which are supervisor accesses regardless of
CPL):
	implicit supervisor access when CPL = 3
	implicit supervisor access when CPL < 3

The current permission_fault() handles only the first kind for SMAP.

But if the access is implicit when SMAP is on, data may not be read
from nor written to any user-mode address regardless of the current
CPL.

So the second kind should also be supported.

The first kind can be detected via the CPL and the access mode: if it
is a supervisor access and CPL = 3, it must be an implicit supervisor
access.

But it is not possible to detect the second kind without extra
information, so this patch adds an artificial PFERR_IMPLICIT_ACCESS
bit into @access.  This extra information also works for the first
kind, so the logic is changed to use it for both cases.

The value of PFERR_IMPLICIT_ACCESS is deliberately chosen to be bit
48, which is in the most significant 16 bits of a u64 and thus less
likely to be forced to change should future hardware use more
error-code bits.

This patch removes the call to ->get_cpl() since the access mode is
determined by @access.  Not only does it remove a function call, it
also removes confusion when permissions are checked for nested TDP.
Nested TDP shouldn't be subject to SMAP checks, nor should the L2's
CPL have any bearing on it.  The original code works only because the
walk is always a user walk for NPT, and the SMAP fault bit is never
set for EPT in update_permission_bitmask().
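
For illustration, a minimal user-space sketch of the new branchless
index computation (not part of the patch; the constants are assumed
to mirror kvm_host.h and processor-flags.h):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define PFERR_RSVD_BIT		3
	#define PFERR_IMPLICIT_ACCESS	(1ULL << 48)
	#define X86_EFLAGS_AC		(1UL << 18)

	static int perm_index(uint64_t access, unsigned long rflags)
	{
		unsigned int pfec = access;	/* strip nested paging bits */
		bool explicit_access = !(access & PFERR_IMPLICIT_ACCESS);
		bool not_smap = (rflags & X86_EFLAGS_AC) && explicit_access;

		return (pfec + (!!not_smap << PFERR_RSVD_BIT)) >> 1;
	}

	int main(void)
	{
		/* Explicit supervisor access, EFLAGS.AC set: SMAP overridden. */
		printf("%d\n", perm_index(0, X86_EFLAGS_AC));	/* 4 */
		/* Implicit supervisor access: EFLAGS.AC cannot override SMAP. */
		printf("%d\n",
		       perm_index(PFERR_IMPLICIT_ACCESS, X86_EFLAGS_AC)); /* 0 */
		return 0;
	}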

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/mmu.h              | 24 +++++++++++-------------
 arch/x86/kvm/mmu/mmu.c          |  4 ++--
 arch/x86/kvm/x86.c              |  8 ++++++--
 4 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index edffcf7f9c2d..565d9eb42429 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -248,6 +248,7 @@ enum x86_intercept_stage;
 #define PFERR_SGX_BIT 15
 #define PFERR_GUEST_FINAL_BIT 32
 #define PFERR_GUEST_PAGE_BIT 33
+#define PFERR_IMPLICIT_ACCESS_BIT 48
 
 #define PFERR_PRESENT_MASK (1U << PFERR_PRESENT_BIT)
 #define PFERR_WRITE_MASK (1U << PFERR_WRITE_BIT)
@@ -258,6 +259,7 @@ enum x86_intercept_stage;
 #define PFERR_SGX_MASK (1U << PFERR_SGX_BIT)
 #define PFERR_GUEST_FINAL_MASK (1ULL << PFERR_GUEST_FINAL_BIT)
 #define PFERR_GUEST_PAGE_MASK (1ULL << PFERR_GUEST_PAGE_BIT)
+#define PFERR_IMPLICIT_ACCESS (1ULL << PFERR_IMPLICIT_ACCESS_BIT)
 
 #define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK |	\
 				 PFERR_WRITE_MASK |		\
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 24d94f6d378d..4cb7a39ecd51 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -218,25 +218,23 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 {
 	/* strip nested paging fault error codes */
 	unsigned int pfec = access;
-	int cpl = static_call(kvm_x86_get_cpl)(vcpu);
 	unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
 
 	/*
-	 * If CPL < 3, SMAP prevention are disabled if EFLAGS.AC = 1.
+	 * For explicit supervisor accesses, SMAP is disabled if EFLAGS.AC = 1.
+	 * For implicit supervisor accesses, SMAP cannot be overridden.
 	 *
-	 * If CPL = 3, SMAP applies to all supervisor-mode data accesses
-	 * (these are implicit supervisor accesses) regardless of the value
-	 * of EFLAGS.AC.
+	 * SMAP works on supervisor accesses only, and not_smap can
+	 * be set or not set for user accesses, with neither having any
+	 * bearing on the result.
 	 *
-	 * This computes (cpl < 3) && (rflags & X86_EFLAGS_AC), leaving
-	 * the result in X86_EFLAGS_AC. We then insert it in place of
-	 * the PFERR_RSVD_MASK bit; this bit will always be zero in pfec,
-	 * but it will be one in index if SMAP checks are being overridden.
-	 * It is important to keep this branchless.
+	 * We put the SMAP checking bit in place of the PFERR_RSVD_MASK bit;
+	 * this bit will always be zero in pfec, but it will be one in index
+	 * if SMAP checks are being disabled.
 	 */
-	unsigned long not_smap = (cpl - 3) & (rflags & X86_EFLAGS_AC);
-	int index = (pfec >> 1) +
-		    (not_smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
+	bool explicit_access = !(access & PFERR_IMPLICIT_ACCESS);
+	bool not_smap = (rflags & X86_EFLAGS_AC) && explicit_access;
+	int index = (pfec + (!!not_smap << PFERR_RSVD_BIT)) >> 1;
 	bool fault = (mmu->permissions[index] >> pte_access) & 1;
 	u32 errcode = PFERR_PRESENT_MASK;
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 781f90480d00..9b593e67717a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4580,8 +4580,8 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
 			 *   - X86_CR4_SMAP is set in CR4
 			 *   - A user page is accessed
 			 *   - The access is not a fetch
-			 *   - Page fault in kernel mode
-			 *   - if CPL = 3 or X86_EFLAGS_AC is clear
+			 *   - The access is supervisor mode
+			 *   - If implicit supervisor access or X86_EFLAGS_AC is clear
 			 *
 			 * Here, we cover the first four conditions.
 			 * The fifth is computed dynamically in permission_fault();
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c85e48dc8310..df8b05740080 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6844,7 +6844,9 @@ static int emulator_read_std(struct x86_emulate_ctxt *ctxt,
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
 	u64 access = 0;
 
-	if (!system && static_call(kvm_x86_get_cpl)(vcpu) == 3)
+	if (system)
+		access |= PFERR_IMPLICIT_ACCESS;
+	else if (static_call(kvm_x86_get_cpl)(vcpu) == 3)
 		access |= PFERR_USER_MASK;
 
 	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access, exception);
@@ -6896,7 +6898,9 @@ static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *v
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
 	u64 access = PFERR_WRITE_MASK;
 
-	if (!system && static_call(kvm_x86_get_cpl)(vcpu) == 3)
+	if (system)
+		access |= PFERR_IMPLICIT_ACCESS;
+	else if (static_call(kvm_x86_get_cpl)(vcpu) == 3)
 		access |= PFERR_USER_MASK;
 
 	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
-- 
2.19.1.6.gb485710b


* [PATCH V2 5/5] KVM: X86: Only get rflags when needed in permission_fault()
  2022-03-11  7:03 [PATCH V2 0/5] KVM: X86: permission_fault() for SMAP Lai Jiangshan
                   ` (3 preceding siblings ...)
  2022-03-11  7:03 ` [PATCH V2 4/5] KVM: X86: Handle implicit supervisor access with SMAP Lai Jiangshan
@ 2022-03-11  7:03 ` Lai Jiangshan
  2022-03-11  7:03 ` [RFC PATCH V2 6/5] KVM: X86: Propagate the nested page fault info to the guest Lai Jiangshan
  2022-03-15 21:06 ` [PATCH V2 0/5] KVM: X86: permission_fault() for SMAP Paolo Bonzini
  6 siblings, 0 replies; 10+ messages in thread
From: Lai Jiangshan @ 2022-03-11  7:03 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini, Sean Christopherson
  Cc: Lai Jiangshan, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

The SMAP check and rflags are only needed in permission_fault() when
the access is a supervisor access and SMAP is enabled.  This
information is already encoded in the combination of mmu->permissions[]
and the index.

So we can use the encoded information to decide whether the SMAP check
is needed, instead of getting the rflags unconditionally.
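
For illustration, a minimal user-space sketch of the idea (not part of
the patch; the toy table stands in for mmu->permissions[] and the
pte_access shift is omitted):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define PFERR_RSVD_MASK	(1U << 3)

	/* Toy table: index 0 enforces SMAP, index 4 has SMAP disabled. */
	static const uint16_t permissions[8] = {
		[0] = 1,	/* fault when SMAP is enforced */
		[4] = 0,	/* no fault when SMAP is overridden */
	};

	static bool faults(unsigned int pfec, bool implicit, bool eflags_ac)
	{
		bool fault = permissions[pfec >> 1] & 1;
		bool fault_not_smap =
			permissions[(pfec + PFERR_RSVD_MASK) >> 1] & 1;

		/* Only a SMAP-only fault needs to look at rflags. */
		if (fault && !fault_not_smap && !implicit)
			fault = !eflags_ac;
		return fault;
	}

	int main(void)
	{
		printf("%d\n", faults(0, false, true));	/* 0: AC overrides SMAP */
		printf("%d\n", faults(0, true, true));	/* 1: implicit access */
		return 0;
	}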

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu.h | 28 +++++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 4cb7a39ecd51..ceac1e9e21e9 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -218,13 +218,12 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 {
 	/* strip nested paging fault error codes */
 	unsigned int pfec = access;
-	unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
 
 	/*
 	 * For explicit supervisor accesses, SMAP is disabled if EFLAGS.AC = 1.
 	 * For implicit supervisor accesses, SMAP cannot be overridden.
 	 *
-	 * SMAP works on supervisor accesses only, and not_smap can
+	 * SMAP works on supervisor accesses only, and the SMAP checking bit can
 	 * be set or not set for user accesses, with neither having any
 	 * bearing on the result.
 	 *
@@ -233,11 +232,30 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	 * if SMAP checks are being disabled.
 	 */
 	bool explicit_access = !(access & PFERR_IMPLICIT_ACCESS);
-	bool not_smap = (rflags & X86_EFLAGS_AC) && explicit_access;
-	int index = (pfec + (!!not_smap << PFERR_RSVD_BIT)) >> 1;
-	bool fault = (mmu->permissions[index] >> pte_access) & 1;
+	bool fault = (mmu->permissions[pfec >> 1] >> pte_access) & 1;
+	int index = (pfec + PFERR_RSVD_MASK) >> 1;
+	bool fault_not_smap = (mmu->permissions[index] >> pte_access) & 1;
 	u32 errcode = PFERR_PRESENT_MASK;
 
+	/*
+	 * The value of fault has included SMAP checking if it is supervisor
+	 * access and SMAP is enabled and encoded in mmu->permissions.
+	 *
+	 * fault	fault_not_smap
+	 * 0		0		not fault due to UWX nor SMAP
+	 * 0		1		impossible combination
+	 * 1		1		fault due to UWX
+	 * 1		0		fault due to SMAP, need to check if
+	 * 				SMAP is prevented
+	 *
+	 * SMAP is prevented only when X86_EFLAGS_AC is set on explicit
+	 * supervisor access.
+	 */
+	if (unlikely(fault && !fault_not_smap && explicit_access)) {
+		unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
+		fault = !(rflags & X86_EFLAGS_AC);
+	}
+
 	WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK));
 	if (unlikely(mmu->pkru_mask)) {
 		u32 pkru_bits, offset;
-- 
2.19.1.6.gb485710b


* [RFC PATCH V2 6/5] KVM: X86: Propagate the nested page fault info to the guest
  2022-03-11  7:03 [PATCH V2 0/5] KVM: X86: permission_fault() for SMAP Lai Jiangshan
                   ` (4 preceding siblings ...)
  2022-03-11  7:03 ` [PATCH V2 5/5] KVM: X86: Only get rflags when needed in permission_fault() Lai Jiangshan
@ 2022-03-11  7:03 ` Lai Jiangshan
  2022-03-15 21:06 ` [PATCH V2 0/5] KVM: X86: permission_fault() for SMAP Paolo Bonzini
  6 siblings, 0 replies; 10+ messages in thread
From: Lai Jiangshan @ 2022-03-11  7:03 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini, Sean Christopherson
  Cc: Lai Jiangshan, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

Feed the nested page fault info into ->gva_to_gpa() in
walk_addr_generic(), so that the nested walk_addr_generic() can
propagate the nested page fault info into x86_exception.

Propagate the nested page fault info into EXIT_INFO_1 for SVM.

Morph the nested page fault info and other page fault error codes into
EXIT_QUALIFICATION for VMX.
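
For reference, a sketch of that VMX morphing as a stand-alone helper
(not part of the patch; the EPT_VIOLATION_* and PFERR_* values are
assumed to mirror arch/x86/include/asm/vmx.h and kvm_host.h):

	#include <stdint.h>
	#include <stdio.h>

	#define PFERR_WRITE_MASK		(1ULL << 1)
	#define PFERR_USER_MASK			(1ULL << 2)
	#define PFERR_FETCH_MASK		(1ULL << 4)
	#define PFERR_GUEST_FINAL_MASK		(1ULL << 32)

	#define EPT_VIOLATION_ACC_READ		(1UL << 0)
	#define EPT_VIOLATION_ACC_WRITE		(1UL << 1)
	#define EPT_VIOLATION_ACC_INSTR		(1UL << 2)
	#define EPT_VIOLATION_GVA_TRANSLATED	(1UL << 8)

	static unsigned long nested_pfec_to_exit_qual(uint64_t nested_pfec)
	{
		unsigned long q = 0;

		/* The hunk below maps PFERR_USER_MASK to a data read. */
		if (nested_pfec & PFERR_USER_MASK)
			q |= EPT_VIOLATION_ACC_READ;
		if (nested_pfec & PFERR_WRITE_MASK)
			q |= EPT_VIOLATION_ACC_WRITE;
		if (nested_pfec & PFERR_FETCH_MASK)
			q |= EPT_VIOLATION_ACC_INSTR;
		if (nested_pfec & PFERR_GUEST_FINAL_MASK)
			q |= EPT_VIOLATION_GVA_TRANSLATED;
		return q;
	}

	int main(void)
	{
		/* A write during the final translation: prints 0x102. */
		printf("%#lx\n", nested_pfec_to_exit_qual(PFERR_WRITE_MASK |
							  PFERR_GUEST_FINAL_MASK));
		return 0;
	}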

This patch makes use of patch 1.

It is untested and serves only as a request for somebody to fix a
known problem; it will not be included in the next version of this
patchset if the patchset needs to be updated.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/kvm_emulate.h      |  3 ++-
 arch/x86/kvm/mmu/paging_tmpl.h  |  8 ++++++--
 arch/x86/kvm/svm/nested.c       | 10 ++--------
 arch/x86/kvm/vmx/nested.c       | 11 +++++++++++
 5 files changed, 23 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 565d9eb42429..68efa9d1ef0e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -265,6 +265,8 @@ enum x86_intercept_stage;
 				 PFERR_WRITE_MASK |		\
 				 PFERR_PRESENT_MASK)
 
+#define PFERR_GUEST_MASK (PFERR_GUEST_FINAL_MASK | PFERR_GUEST_PAGE_MASK)
+
 /* apic attention bits */
 #define KVM_APIC_CHECK_VAPIC	0
 /*
diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
index 39eded2426ff..cdc2977ce086 100644
--- a/arch/x86/kvm/kvm_emulate.h
+++ b/arch/x86/kvm/kvm_emulate.h
@@ -24,8 +24,9 @@ struct x86_exception {
 	bool error_code_valid;
 	u16 error_code;
 	bool nested_page_fault;
-	u64 address; /* cr2 or nested page fault gpa */
 	u8 async_page_fault;
+	u64 nested_pfec; /* nested page fault error code */
+	u64 address; /* cr2 or nested page fault gpa */
 };
 
 /*
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 8621188b46df..95367f5ca998 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -383,7 +383,8 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	 * by the MOV to CR instruction are treated as reads and do not cause the
 	 * processor to set the dirty flag in any EPT paging-structure entry.
 	 */
-	nested_access = (have_ad ? PFERR_WRITE_MASK : 0) | PFERR_USER_MASK;
+	nested_access = (have_ad ? PFERR_WRITE_MASK : 0) | PFERR_USER_MASK |
+			PFERR_GUEST_PAGE_MASK;
 
 	pte_access = ~0;
 	++walker->level;
@@ -466,7 +467,8 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	if (PTTYPE == 32 && walker->level > PG_LEVEL_4K && is_cpuid_PSE36())
 		gfn += pse36_gfn_delta(pte);
 
-	real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(gfn), access, &walker->fault);
+	real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(gfn),
+			access | PFERR_GUEST_FINAL_MASK, &walker->fault);
 	if (real_gpa == UNMAPPED_GVA)
 		return 0;
 
@@ -534,6 +536,8 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	walker->fault.address = addr;
 	walker->fault.nested_page_fault = mmu != vcpu->arch.walk_mmu;
 	walker->fault.async_page_fault = false;
+	if (walker->fault.nested_page_fault)
+		walker->fault.nested_pfec = errcode | (access & PFERR_GUEST_MASK);
 
 	trace_kvm_mmu_walker_error(walker->fault.error_code);
 	return 0;
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 96bab464967f..0abcbd3de892 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -38,18 +38,12 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu,
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	if (svm->vmcb->control.exit_code != SVM_EXIT_NPF) {
-		/*
-		 * TODO: track the cause of the nested page fault, and
-		 * correctly fill in the high bits of exit_info_1.
-		 */
 		svm->vmcb->control.exit_code = SVM_EXIT_NPF;
 		svm->vmcb->control.exit_code_hi = 0;
-		svm->vmcb->control.exit_info_1 = (1ULL << 32);
-		svm->vmcb->control.exit_info_2 = fault->address;
 	}
 
-	svm->vmcb->control.exit_info_1 &= ~0xffffffffULL;
-	svm->vmcb->control.exit_info_1 |= fault->error_code;
+	svm->vmcb->control.exit_info_1 = fault->nested_pfec;
+	svm->vmcb->control.exit_info_2 = fault->address;
 
 	nested_svm_vmexit(svm);
 }
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 1dfe23963a9e..fd5dd5acf63b 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -372,6 +372,17 @@ static void nested_ept_inject_page_fault(struct kvm_vcpu *vcpu,
 	u32 vm_exit_reason;
 	unsigned long exit_qualification = vcpu->arch.exit_qualification;
 
+	exit_qualification &= ~(EPT_VIOLATION_ACC_READ | EPT_VIOLATION_ACC_WRITE |
+				EPT_VIOLATION_ACC_INSTR | EPT_VIOLATION_GVA_TRANSLATED);
+	exit_qualification |= fault->nested_pfec & PFERR_USER_MASK ?
+				EPT_VIOLATION_ACC_READ : 0;
+	exit_qualification |= fault->nested_pfec & PFERR_WRITE_MASK ?
+				EPT_VIOLATION_ACC_WRITE : 0;
+	exit_qualification |= fault->nested_pfec & PFERR_FETCH_MASK ?
+				EPT_VIOLATION_ACC_INSTR : 0;
+	exit_qualification |= fault->nested_pfec & PFERR_GUEST_FINAL_MASK ?
+				EPT_VIOLATION_GVA_TRANSLATED : 0;
+
 	if (vmx->nested.pml_full) {
 		vm_exit_reason = EXIT_REASON_PML_FULL;
 		vmx->nested.pml_full = false;
-- 
2.19.1.6.gb485710b


* Re: [PATCH V2 4/5] KVM: X86: Handle implicit supervisor access with SMAP
  2022-03-11  7:03 ` [PATCH V2 4/5] KVM: X86: Handle implicit supervisor access with SMAP Lai Jiangshan
@ 2022-03-15 21:04   ` Paolo Bonzini
  0 siblings, 0 replies; 10+ messages in thread
From: Paolo Bonzini @ 2022-03-15 21:04 UTC (permalink / raw)
  To: Lai Jiangshan, linux-kernel, kvm, Sean Christopherson
  Cc: Lai Jiangshan, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin

On 3/11/22 08:03, Lai Jiangshan wrote:
> -	unsigned long not_smap = (cpl - 3) & (rflags & X86_EFLAGS_AC);
> -	int index = (pfec >> 1) +
> -		    (not_smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
> +	bool explicit_access = !(access & PFERR_IMPLICIT_ACCESS);
> +	bool not_smap = (rflags & X86_EFLAGS_AC) && explicit_access;
> +	int index = (pfec + (!!not_smap << PFERR_RSVD_BIT)) >> 1;

Also possible:

         u64 implicit_access = access & PFERR_IMPLICIT_ACCESS;
         bool not_smap = ((rflags & X86_EFLAGS_AC) | implicit_access) == X86_EFLAGS_AC;
         int index = (pfec + (not_smap << PFERR_RSVD_BIT)) >> 1;
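
This works because X86_EFLAGS_AC (bit 18) and PFERR_IMPLICIT_ACCESS
(bit 48) occupy different bits, so the OR can only equal X86_EFLAGS_AC
when AC is set and the implicit-access bit is clear.  A quick
user-space check of the trick (constants assumed to mirror the kernel
headers):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define PFERR_IMPLICIT_ACCESS	(1ULL << 48)
	#define X86_EFLAGS_AC		(1UL << 18)

	static bool not_smap(uint64_t access, unsigned long rflags)
	{
		uint64_t implicit_access = access & PFERR_IMPLICIT_ACCESS;

		return ((rflags & X86_EFLAGS_AC) | implicit_access) ==
		       X86_EFLAGS_AC;
	}

	int main(void)
	{
		printf("%d\n", not_smap(0, 0));			/* 0 */
		printf("%d\n", not_smap(0, X86_EFLAGS_AC));	/* 1 */
		printf("%d\n", not_smap(PFERR_IMPLICIT_ACCESS,
					X86_EFLAGS_AC));	/* 0 */
		return 0;
	}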

Paolo


* Re: [PATCH V2 0/5] KVM: X86: permission_fault() for SMAP
  2022-03-11  7:03 [PATCH V2 0/5] KVM: X86: permission_fault() for SMAP Lai Jiangshan
                   ` (5 preceding siblings ...)
  2022-03-11  7:03 ` [RFC PATCH V2 6/5] KVM: X86: Propagate the nested page fault info to the guest Lai Jiangshan
@ 2022-03-15 21:06 ` Paolo Bonzini
  2022-03-16  2:38   ` Lai Jiangshan
  6 siblings, 1 reply; 10+ messages in thread
From: Paolo Bonzini @ 2022-03-15 21:06 UTC (permalink / raw)
  To: Lai Jiangshan, linux-kernel, kvm, Sean Christopherson; +Cc: Lai Jiangshan

On 3/11/22 08:03, Lai Jiangshan wrote:
> From: Lai Jiangshan<jiangshan.ljs@antgroup.com>
> 
> Some changes in permission_fault() for SMAP.  It also reduces calls
> to the two callbacks that get the CPL and RFLAGS in some cases, but
> it has no measurable performance change in tests (kernel build in
> guest).

I am going to queue patches 1-4.  The last one shouldn't really have any 
performance impact with static calls.

Paolo


* Re: [PATCH V2 0/5] KVM: X86: permission_fault() for SMAP
  2022-03-15 21:06 ` [PATCH V2 0/5] KVM: X86: permission_fault() for SMAP Paolo Bonzini
@ 2022-03-16  2:38   ` Lai Jiangshan
  0 siblings, 0 replies; 10+ messages in thread
From: Lai Jiangshan @ 2022-03-16  2:38 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: LKML, kvm, Sean Christopherson, Lai Jiangshan

On Wed, Mar 16, 2022 at 5:06 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 3/11/22 08:03, Lai Jiangshan wrote:
> > From: Lai Jiangshan<jiangshan.ljs@antgroup.com>
> >
> > Some changes in permission_fault() for SMAP.  It also reduces calls
> > to the two callbacks that get the CPL and RFLAGS in some cases, but
> > it has no measurable performance change in tests (kernel build in
> > guest).
>
> I am going to queue patches 1-4.  The last one shouldn't really have any
> performance impact with static calls.
>

It is not about performance, it is about "less surprise".

The patchset was made because it surprised me that L0 was using L2's
rflags when building the shadow EPT/NPT for L1.

After some investigation, I learned that the L2's rflags is "ignored"
in a very hidden and complicated way which relies on code in several
other places.

I think some additional comment is necessary if that patch is not
applied.
