From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul Mackerras <paulus@ozlabs.org>,
	Sean Christopherson <sean.j.christopherson@intel.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Marc Zyngier <maz@kernel.org>, James Morse <james.morse@arm.com>,
	Julien Thierry <julien.thierry.kdev@gmail.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	kvm-ppc@vger.kernel.org, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu,
	syzbot+c9d1fb51ac9d0d10c39d@syzkaller.appspotmail.com,
	Andrea Arcangeli <aarcange@redhat.com>,
	Barret Rhoden <brho@google.com>,
	David Hildenbrand <david@redhat.com>,
	Jason Zeng <jason.zeng@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	linux-nvdimm <linux-nvdimm@lists.01.org>
Subject: [PATCH 08/14] KVM: x86/mmu: Drop level optimization from fast_page_fault()
Date: Wed,  8 Jan 2020 12:24:42 -0800	[thread overview]
Message-ID: <20200108202448.9669-9-sean.j.christopherson@intel.com> (raw)
In-Reply-To: <20200108202448.9669-1-sean.j.christopherson@intel.com>

Remove fast_page_fault()'s optimization to stop the shadow walk if the
iterator level drops below the intended map level.  The intended map
level is only accurate for HugeTLB mappings (THP mappings are detected
after fast_page_fault()), i.e. it's not required for correctness, and
a future patch will also move HugeTLB mapping detection to after
fast_page_fault().
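
For illustration only, the new termination condition can be modelled with a
toy, self-contained walker (plain C; all names here are hypothetical and it
only mirrors the simplified loop in the diff below, not the real
fast_page_fault()):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PRESENT_BIT (1ull << 0)
#define NR_LEVELS   4            /* toy 4-level table, levels 4 .. 1 */

static bool toy_pte_present(uint64_t pte)
{
	return pte & PRESENT_BIT;
}

int main(void)
{
	/* ptes[0] is the level-4 entry, ptes[3] the level-1 (4K) entry. */
	uint64_t ptes[NR_LEVELS] = { PRESENT_BIT, PRESENT_BIT, 0, 0 };
	int level, last = NR_LEVELS;

	/*
	 * Walk from the root toward the leaf and stop at the first
	 * non-present entry -- no "intended map level" is consulted,
	 * which mirrors the simplified fast_page_fault() loop.
	 */
	for (level = NR_LEVELS; level >= 1; level--) {
		last = level;
		if (!toy_pte_present(ptes[NR_LEVELS - level]))
			break;
	}

	printf("walk stopped at level %d\n", last);
	return 0;
}

With the level parameter gone, the walk depth is bounded purely by what is
actually present in the shadow page tables.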

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu/mmu.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4bd7f745b56d..7d78d1d996ed 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3593,7 +3593,7 @@ static bool is_access_allowed(u32 fault_err_code, u64 spte)
  * - true: let the vcpu to access on the same address again.
  * - false: let the real page fault path to fix it.
  */
-static bool fast_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, int level,
+static bool fast_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 			    u32 error_code)
 {
 	struct kvm_shadow_walk_iterator iterator;
@@ -3611,8 +3611,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, int level,
 		u64 new_spte;
 
 		for_each_shadow_entry_lockless(vcpu, cr2_or_gpa, iterator, spte)
-			if (!is_shadow_present_pte(spte) ||
-			    iterator.level < level)
+			if (!is_shadow_present_pte(spte))
 				break;
 
 		sp = page_header(__pa(iterator.sptep));
@@ -4223,7 +4222,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	if (level > PT_PAGE_TABLE_LEVEL)
 		gfn &= ~(KVM_PAGES_PER_HPAGE(level) - 1);
 
-	if (fast_page_fault(vcpu, gpa, level, error_code))
+	if (fast_page_fault(vcpu, gpa, error_code))
 		return RET_PF_RETRY;
 
 	mmu_seq = vcpu->kvm->mmu_notifier_seq;
-- 
2.24.1
Thread overview: 110+ messages (cross-posted copies collapsed)

2020-01-08 20:24 [PATCH 00/14] KVM: x86/mmu: Huge page fixes, cleanup, and DAX Sean Christopherson
2020-01-08 20:24 ` [PATCH 01/14] KVM: x86/mmu: Enforce max_level on HugeTLB mappings Sean Christopherson
2020-01-08 20:24 ` [PATCH 02/14] mm: thp: KVM: Explicitly check for THP when populating secondary MMU Sean Christopherson
2020-01-08 20:24 ` [PATCH 03/14] KVM: Use vcpu-specific gva->hva translation when querying host page size Sean Christopherson
2020-01-08 20:24 ` [PATCH 04/14] KVM: Play nice with read-only memslots when querying host page size Sean Christopherson
2020-01-21 14:24   ` Paolo Bonzini
2020-01-08 20:24 ` [PATCH 05/14] x86/mm: Introduce lookup_address_in_mm() Sean Christopherson
2020-01-09 21:04   ` Thomas Gleixner
2020-01-21 14:26     ` Paolo Bonzini
2020-01-08 20:24 ` [PATCH 06/14] KVM: x86/mmu: Refactor THP adjust to prep for changing query Sean Christopherson
2020-01-08 20:24 ` [PATCH 07/14] KVM: x86/mmu: Walk host page tables to find THP mappings Sean Christopherson
2020-01-21 14:40   ` Paolo Bonzini
2020-01-08 20:24 ` [PATCH 08/14] KVM: x86/mmu: Drop level optimization from fast_page_fault() Sean Christopherson [this message]
2020-01-08 20:24 ` [PATCH 09/14] KVM: x86/mmu: Rely on host page tables to find HugeTLB mappings Sean Christopherson
2020-01-08 20:24 ` [PATCH 10/14] KVM: x86/mmu: Remove obsolete gfn restoration in FNAME(fetch) Sean Christopherson
2020-01-08 20:24 ` [PATCH 11/14] KVM: x86/mmu: Zap any compound page when collapsing sptes Sean Christopherson
2020-01-08 20:24 ` [PATCH 12/14] KVM: x86/mmu: Fold max_mapping_level() into kvm_mmu_hugepage_adjust() Sean Christopherson
2020-01-21 15:12   ` Paolo Bonzini
2020-01-08 20:24 ` [PATCH 13/14] KVM: x86/mmu: Remove lpage_is_disallowed() check from set_spte() Sean Christopherson
2020-01-08 20:24 ` [PATCH 14/14] KVM: x86/mmu: Use huge pages for DAX-backed files Sean Christopherson
2020-01-09 19:47 ` [PATCH 00/14] KVM: x86/mmu: Huge page fixes, cleanup, and DAX Barret Rhoden
2020-01-21 15:10   ` Paolo Bonzini
