* [PATCH 00/11] X86/KVM/Hyper-V: Add HV ept tlb range list flush support in KVM
@ 2019-01-04  8:53 ` lantianyu1986
  0 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:53 UTC (permalink / raw)
  Cc: Lan Tianyu, benh, bp, catalin.marinas, christoffer.dall, devel,
	haiyangz, hpa, jhogan, kvmarm, kvm-ppc, kvm, kys,
	linux-arm-kernel, linux, linux-kernel, linux-mips, linuxppc-dev,
	marc.zyngier, mingo, mpe, paul.burton, paulus, pbonzini, ralf,
	rkrcmar, sthemmin, tglx, will.deacon, x86, michael.h.kelley,
	vkuznets

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patchset introduces Hyper-V EPT TLB range-list flush support in
the KVM MMU. EPT TLB entries for several address ranges can now be
flushed with a single hypercall, and the new list-flush function is
used in kvm_mmu_commit_zap_page() and FNAME(sync_page). The patchset
also extends Hyper-V EPT TLB range flushing to more KVM MMU functions.
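
For orientation, the heart of the new interface is sketched below. This
is an illustrative fragment only (the real code is in the patches that
follow); "kvm" and "flush_list" stand for whatever the caller already
has at hand.

	/* Hand a list of shadow pages to the ranged-flush path; leaving
	 * flush_list NULL instead falls back to a plain start_gfn/pages
	 * range. */
	static void example_flush(struct kvm *kvm, struct list_head *flush_list)
	{
		struct kvm_tlb_range range = {
			.flush_list = flush_list,
		};

		kvm_flush_remote_tlbs_with_range(kvm, &range);
	}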

Lan Tianyu (11):
  X86/Hyper-V: Add parameter offset for
    hyperv_fill_flush_guest_mapping_list()
  KVM/VMX: Fill range list in kvm_fill_hv_flush_list_func()
  KVM: Add spte's point in the struct kvm_mmu_page
  KVM/MMU: Introduce tlb flush with range list
  KVM/MMU: Flush tlb directly in the kvm_mmu_slot_gfn_write_protect()
  KVM/MMU: Flush tlb with range list in sync_page()
  KVM: Remove redundant check in the kvm_get_dirty_log_protect()
  KVM: Make kvm_arch_mmu_enable_log_dirty_pt_masked() return value
  KVM/MMU: Flush tlb in the kvm_mmu_write_protect_pt_masked()
  KVM: Add flush parameter for kvm_age_hva()
  KVM/MMU: Flush tlb in the kvm_age_rmapp()

 arch/arm/include/asm/kvm_host.h     |  3 +-
 arch/arm64/include/asm/kvm_host.h   |  3 +-
 arch/mips/include/asm/kvm_host.h    |  3 +-
 arch/mips/kvm/mmu.c                 |  8 +++-
 arch/powerpc/include/asm/kvm_host.h |  3 +-
 arch/powerpc/kvm/book3s.c           |  3 +-
 arch/powerpc/kvm/e500_mmu_host.c    |  3 +-
 arch/x86/hyperv/nested.c            |  4 +-
 arch/x86/include/asm/kvm_host.h     | 11 +++++-
 arch/x86/include/asm/mshyperv.h     |  2 +-
 arch/x86/kvm/mmu.c                  | 73 ++++++++++++++++++++++++++++++++-----
 arch/x86/kvm/paging_tmpl.h          | 18 ++++++++-
 arch/x86/kvm/vmx/vmx.c              | 18 ++++++++-
 include/linux/kvm_host.h            |  2 +-
 virt/kvm/arm/mmu.c                  |  8 +++-
 virt/kvm/kvm_main.c                 | 18 ++++-----
 16 files changed, 141 insertions(+), 39 deletions(-)

-- 
2.14.4


^ permalink raw reply	[flat|nested] 106+ messages in thread

* [PATCH 1/11] X86/Hyper-V: Add parameter offset for hyperv_fill_flush_guest_mapping_list()
@ 2019-01-04  8:53   ` lantianyu1986
  0 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:53 UTC (permalink / raw)
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, kys, haiyangz, sthemmin, tglx, mingo, bp, hpa, x86,
	pbonzini, rkrcmar, linux-arm-kernel, kvmarm, linux-kernel,
	linux-mips, kvm-ppc, linuxppc-dev, devel, kvm, michael.h.kelley,
	vkuznets

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

Add an offset parameter that specifies the position at which to start
adding flush ranges to the guest address list of
struct hv_guest_mapping_flush_list.
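
As a hedged illustration of the calling convention (gfn_a/gfn_b and the
page counts below are made-up values, not taken from this series): the
value returned by one call becomes the offset for the next, so several
ranges can be packed into the same flush structure.

	int n = 0;

	/* first range starts filling at gpa_list[0] */
	n = hyperv_fill_flush_guest_mapping_list(flush, n, gfn_a, pages_a);
	/* second range is appended right after the entries just written */
	n = hyperv_fill_flush_guest_mapping_list(flush, n, gfn_b, pages_b);
	/* n now holds the total number of gpa_list entries in use */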

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/x86/hyperv/nested.c        | 4 ++--
 arch/x86/include/asm/mshyperv.h | 2 +-
 arch/x86/kvm/vmx/vmx.c          | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/hyperv/nested.c b/arch/x86/hyperv/nested.c
index dd0a843f766d..96f8bac7476d 100644
--- a/arch/x86/hyperv/nested.c
+++ b/arch/x86/hyperv/nested.c
@@ -58,11 +58,11 @@ EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping);
 
 int hyperv_fill_flush_guest_mapping_list(
 		struct hv_guest_mapping_flush_list *flush,
-		u64 start_gfn, u64 pages)
+		int offset, u64 start_gfn, u64 pages)
 {
 	u64 cur = start_gfn;
 	u64 additional_pages;
-	int gpa_n = 0;
+	int gpa_n = offset;
 
 	do {
 		/*
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index cc60e617931c..d6be685ab6b0 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -357,7 +357,7 @@ int hyperv_flush_guest_mapping_range(u64 as,
 		hyperv_fill_flush_list_func fill_func, void *data);
 int hyperv_fill_flush_guest_mapping_list(
 		struct hv_guest_mapping_flush_list *flush,
-		u64 start_gfn, u64 end_gfn);
+		int offset, u64 start_gfn, u64 end_gfn);
 
 #ifdef CONFIG_X86_64
 void hv_apic_init(void);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 87224e4c2fd9..2c159efedc40 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -428,7 +428,7 @@ int kvm_fill_hv_flush_list_func(struct hv_guest_mapping_flush_list *flush,
 {
 	struct kvm_tlb_range *range = data;
 
-	return hyperv_fill_flush_guest_mapping_list(flush, range->start_gfn,
+	return hyperv_fill_flush_guest_mapping_list(flush, 0, range->start_gfn,
 			range->pages);
 }
 
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 2/11] KVM/VMX: Fill range list in kvm_fill_hv_flush_list_func()
@ 2019-01-04  8:53   ` lantianyu1986
  0 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:53 UTC (permalink / raw)
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, pbonzini, rkrcmar, tglx, mingo, bp, hpa, x86,
	linux-arm-kernel, kvmarm, linux-kernel, linux-mips, kvm-ppc,
	linuxppc-dev, kvm, michael.h.kelley, kys, vkuznets

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

Populate the ranges on the flush list into
struct hv_guest_mapping_flush_list when a flush list is available in
struct kvm_tlb_range.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/x86/kvm/vmx/vmx.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 2c159efedc40..384f4782afba 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -427,9 +427,23 @@ int kvm_fill_hv_flush_list_func(struct hv_guest_mapping_flush_list *flush,
 		void *data)
 {
 	struct kvm_tlb_range *range = data;
+	struct kvm_mmu_page *sp;
 
-	return hyperv_fill_flush_guest_mapping_list(flush, 0, range->start_gfn,
-			range->pages);
+	if (!range->flush_list) {
+		return hyperv_fill_flush_guest_mapping_list(flush,
+			0, range->start_gfn, range->pages);
+	} else {
+		int offset = 0;
+
+		list_for_each_entry(sp, range->flush_list, flush_link) {
+			int pages = KVM_PAGES_PER_HPAGE(sp->role.level);
+
+			offset = hyperv_fill_flush_guest_mapping_list(flush,
+					offset, sp->gfn, pages);
+		}
+
+		return offset;
+	}
 }
 
 static inline int __hv_remote_flush_tlb_with_range(struct kvm *kvm,
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 3/11] KVM: Add spte's point in the struct kvm_mmu_page
@ 2019-01-04  8:53   ` lantianyu1986
  0 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:53 UTC (permalink / raw)
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, pbonzini, rkrcmar, tglx, mingo, bp, hpa, x86,
	linux-arm-kernel, kvmarm, linux-kernel, linux-mips, kvm-ppc,
	linuxppc-dev, kvm, michael.h.kelley, kys, vkuznets

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

When adding an MMU page to the flush list, it's necessary to check
whether the page is a last-level or large page. The spte is needed for
that check, so add an spte pointer to struct kvm_mmu_page.
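
For context, the check this pointer enables appears in a later patch of
the series; it is reproduced here as a sketch (flush_list is a local
list head owned by the caller):

	/* the new sptep field makes this "last level or large page?"
	 * check possible when building a flush list */
	if (sp->sptep && is_last_spte(*sp->sptep, sp->role.level))
		list_add(&sp->flush_link, &flush_list);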

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/mmu.c              | 5 +++++
 arch/x86/kvm/paging_tmpl.h      | 2 ++
 3 files changed, 8 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4660ce90de7f..78d2a6714c3b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -332,6 +332,7 @@ struct kvm_mmu_page {
 	int root_count;          /* Currently serving as active root */
 	unsigned int unsync_children;
 	struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
+	u64 *sptep;
 
 	/* The page is obsolete if mmu_valid_gen != kvm->arch.mmu_valid_gen.  */
 	unsigned long mmu_valid_gen;
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ce770b446238..068694fa2371 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3160,6 +3160,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, int map_writable,
 			pseudo_gfn = base_addr >> PAGE_SHIFT;
 			sp = kvm_mmu_get_page(vcpu, pseudo_gfn, iterator.addr,
 					      iterator.level - 1, 1, ACC_ALL);
+			sp->sptep = iterator.sptep;
 
 			link_shadow_page(vcpu, iterator.sptep, sp);
 		}
@@ -3588,6 +3589,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 		sp = kvm_mmu_get_page(vcpu, 0, 0,
 				vcpu->arch.mmu->shadow_root_level, 1, ACC_ALL);
 		++sp->root_count;
+		sp->sptep = NULL;
 		spin_unlock(&vcpu->kvm->mmu_lock);
 		vcpu->arch.mmu->root_hpa = __pa(sp->spt);
 	} else if (vcpu->arch.mmu->shadow_root_level == PT32E_ROOT_LEVEL) {
@@ -3604,6 +3606,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 					i << 30, PT32_ROOT_LEVEL, 1, ACC_ALL);
 			root = __pa(sp->spt);
 			++sp->root_count;
+			sp->sptep = NULL;
 			spin_unlock(&vcpu->kvm->mmu_lock);
 			vcpu->arch.mmu->pae_root[i] = root | PT_PRESENT_MASK;
 		}
@@ -3644,6 +3647,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 				vcpu->arch.mmu->shadow_root_level, 0, ACC_ALL);
 		root = __pa(sp->spt);
 		++sp->root_count;
+		sp->sptep = NULL;
 		spin_unlock(&vcpu->kvm->mmu_lock);
 		vcpu->arch.mmu->root_hpa = root;
 		return 0;
@@ -3681,6 +3685,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 				      0, ACC_ALL);
 		root = __pa(sp->spt);
 		++sp->root_count;
+		sp->sptep = NULL;
 		spin_unlock(&vcpu->kvm->mmu_lock);
 
 		vcpu->arch.mmu->pae_root[i] = root | pm_mask;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 6bdca39829bc..833e8855bbc9 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -633,6 +633,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 			table_gfn = gw->table_gfn[it.level - 2];
 			sp = kvm_mmu_get_page(vcpu, table_gfn, addr, it.level-1,
 					      false, access);
+			sp->sptep = it.sptep;
 		}
 
 		/*
@@ -663,6 +664,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 
 		sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
 				      true, direct_access);
+		sp->sptep = it.sptep;
 		link_shadow_page(vcpu, it.sptep, sp);
 	}
 
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 4/11] KVM/MMU: Introduce tlb flush with range list
@ 2019-01-04  8:53   ` lantianyu1986
  0 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:53 UTC (permalink / raw)
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, pbonzini, rkrcmar, tglx, mingo, bp, hpa, x86,
	linux-arm-kernel, kvmarm, linux-kernel, linux-mips, kvm-ppc,
	linuxppc-dev, kvm, michael.h.kelley, kys, vkuznets

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

Introduce a TLB flush interface that takes a range list, using
struct kvm_mmu_page as the list entry, and use the list-flush function
in kvm_mmu_commit_zap_page().

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/x86/include/asm/kvm_host.h |  7 +++++++
 arch/x86/kvm/mmu.c              | 24 +++++++++++++++++++++++-
 2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 78d2a6714c3b..22dbaa8fba32 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -316,6 +316,12 @@ struct kvm_rmap_head {
 
 struct kvm_mmu_page {
 	struct list_head link;
+
+	/*
+	 * Tlb flush with range list uses struct kvm_mmu_page as list entry
+	 * and all list operations should be under protection of mmu_lock.
+	 */
+	struct list_head flush_link;
 	struct hlist_node hash_link;
 	bool unsync;
 
@@ -443,6 +449,7 @@ struct kvm_mmu {
 struct kvm_tlb_range {
 	u64 start_gfn;
 	u64 pages;
+	struct list_head *flush_list;
 };
 
 enum pmc_type {
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 068694fa2371..d3272c5066ea 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -289,6 +289,17 @@ static void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
 
 	range.start_gfn = start_gfn;
 	range.pages = pages;
+	range.flush_list = NULL;
+
+	kvm_flush_remote_tlbs_with_range(kvm, &range);
+}
+
+static void kvm_flush_remote_tlbs_with_list(struct kvm *kvm,
+		struct list_head *flush_list)
+{
+	struct kvm_tlb_range range;
+
+	range.flush_list = flush_list;
 
 	kvm_flush_remote_tlbs_with_range(kvm, &range);
 }
@@ -2708,6 +2719,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 				    struct list_head *invalid_list)
 {
 	struct kvm_mmu_page *sp, *nsp;
+	LIST_HEAD(flush_list);
 
 	if (list_empty(invalid_list))
 		return;
@@ -2721,7 +2733,17 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 	 * In addition, kvm_flush_remote_tlbs waits for all vcpus to exit
 	 * guest mode and/or lockless shadow page table walks.
 	 */
-	kvm_flush_remote_tlbs(kvm);
+	if (kvm_available_flush_tlb_with_range()) {
+		list_for_each_entry(sp, invalid_list, link)
+			if (sp->sptep && is_last_spte(*sp->sptep,
+			    sp->role.level))
+				list_add(&sp->flush_link, &flush_list);
+
+		if (!list_empty(&flush_list))
+			kvm_flush_remote_tlbs_with_list(kvm, &flush_list);
+	} else {
+		kvm_flush_remote_tlbs(kvm);
+	}
 
 	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
 		WARN_ON(!sp->role.invalid || sp->root_count);
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 5/11] KVM/MMU: Flush tlb directly in the kvm_mmu_slot_gfn_write_protect()
@ 2019-01-04  8:53   ` lantianyu1986
  -1 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:53 UTC (permalink / raw)
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, pbonzini, rkrcmar, tglx, mingo, bp, hpa, x86,
	linux-arm-kernel, kvmarm, linux-kernel, linux-mips, kvm-ppc,
	linuxppc-dev, kvm, michael.h.kelley, kys, vkuznets

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

Flush the TLB directly in kvm_mmu_slot_gfn_write_protect() when the
range-based flush is available. The function then returns false so
that the caller does not issue another, full flush.
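
A sketch of the resulting caller contract (illustrative only, not an
actual KVM call site): when the function has already flushed the range
covering the gfn it returns false, so the caller skips the full flush.

/* Illustrative caller, not a real call site in this series. */
static void write_protect_gfn_and_flush(struct kvm *kvm,
					struct kvm_memory_slot *slot,
					u64 gfn)
{
	if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn))
		kvm_flush_remote_tlbs(kvm);  /* fallback: full remote flush */
	/* else: the single-gfn range was already flushed above */
}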

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/x86/kvm/mmu.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d3272c5066ea..6d4f7dfeaa57 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1715,6 +1715,11 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 		write_protected |= __rmap_write_protect(kvm, rmap_head, true);
 	}
 
+	if (write_protected && kvm_available_flush_tlb_with_range()) {
+		kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
+		write_protected = false;
+	}
+
 	return write_protected;
 }
 
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 6/11] KVM/MMU: Flush tlb with range list in sync_page()
@ 2019-01-04  8:54   ` lantianyu1986
  -1 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:54 UTC (permalink / raw)
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, pbonzini, rkrcmar, tglx, mingo, bp, hpa, x86,
	linux-arm-kernel, kvmarm, linux-kernel, linux-mips, kvm-ppc,
	linuxppc-dev, kvm, michael.h.kelley, kys, vkuznets

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

Flush the TLB via the flush-list function in FNAME(sync_page): entries
whose SPTE update requires a remote TLB flush are collected on a local
flush list and flushed together.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/x86/kvm/paging_tmpl.h | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 833e8855bbc9..866ccdea762e 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -973,6 +973,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 	bool host_writable;
 	gpa_t first_pte_gpa;
 	int set_spte_ret = 0;
+	LIST_HEAD(flush_list);
 
 	/* direct kvm_mmu_page can not be unsync. */
 	BUG_ON(sp->role.direct);
@@ -980,6 +981,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
 
 	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
+		int tmp_spte_ret = 0;
 		unsigned pte_access;
 		pt_element_t gpte;
 		gpa_t pte_gpa;
@@ -1029,14 +1031,24 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 
 		host_writable = sp->spt[i] & SPTE_HOST_WRITEABLE;
 
-		set_spte_ret |= set_spte(vcpu, &sp->spt[i],
+		tmp_spte_ret = set_spte(vcpu, &sp->spt[i],
 					 pte_access, PT_PAGE_TABLE_LEVEL,
 					 gfn, spte_to_pfn(sp->spt[i]),
 					 true, false, host_writable);
+
+		if (kvm_available_flush_tlb_with_range()
+		    && (tmp_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)) {
+			struct kvm_mmu_page *leaf_sp = page_header(sp->spt[i]
+					& PT64_BASE_ADDR_MASK);
+			list_add(&leaf_sp->flush_link, &flush_list);
+		}
+
+		set_spte_ret |= tmp_spte_ret;
+
 	}
 
 	if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)
-		kvm_flush_remote_tlbs(vcpu->kvm);
+		kvm_flush_remote_tlbs_with_list(vcpu->kvm, &flush_list);
 
 	return nr_present;
 }
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 7/11] KVM: Remove redundant check in the kvm_get_dirty_log_protect()
@ 2019-01-04  8:54   ` lantianyu1986
  -1 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:54 UTC (permalink / raw)
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, pbonzini, rkrcmar, tglx, mingo, bp, hpa, x86,
	linux-arm-kernel, kvmarm, linux-kernel, linux-mips, kvm-ppc,
	linuxppc-dev, kvm, michael.h.kelley, kys, vkuznets

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

The dirty bits have already been checked by the preceding test of
"dirty_bitmap[i]", and nothing clears bits in that word before the
xchg(), so mask must be non-zero at this point. Drop the redundant
check.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 virt/kvm/kvm_main.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index cf7cc0554094..e75dbb15fd09 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1206,11 +1206,9 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 			mask = xchg(&dirty_bitmap[i], 0);
 			dirty_bitmap_buffer[i] = mask;
 
-			if (mask) {
-				offset = i * BITS_PER_LONG;
-				kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
-									offset, mask);
-			}
+			offset = i * BITS_PER_LONG;
+			kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
+								offset, mask);
 		}
 		spin_unlock(&kvm->mmu_lock);
 	}
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 8/11] KVM: Make kvm_arch_mmu_enable_log_dirty_pt_masked() return value
@ 2019-01-04  8:54   ` lantianyu1986
  -1 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:54 UTC (permalink / raw)
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, pbonzini, rkrcmar, tglx, mingo, bp, hpa, x86,
	linux-arm-kernel, kvmarm, linux-kernel, linux-mips, kvm-ppc,
	linuxppc-dev, kvm, michael.h.kelley, kys, vkuznets

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

Make kvm_arch_mmu_enable_log_dirty_pt_masked() return a value so that
the caller can use it to decide whether a TLB flush is necessary.
kvm_get_dirty_log_protect() and kvm_clear_dirty_log_protect() use this
return value to populate their flush parameter.
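
To illustrate why the return type is bool, here is a purely hypothetical
arch implementation (none of the arch_* helpers below exist in this
series): an architecture that can flush the affected range itself returns
false, so the generic code leaves its flush flag clear; returning true
keeps the old behaviour.

/* Hypothetical example only; the arch_* helpers are invented for illustration. */
bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
					     struct kvm_memory_slot *slot,
					     gfn_t gfn_offset,
					     unsigned long mask)
{
	arch_write_protect_masked(kvm, slot, gfn_offset, mask);

	if (arch_has_range_flush()) {
		arch_flush_gfn_range(kvm, slot->base_gfn + gfn_offset,
				     __fls(mask) + 1);
		return false;	/* already flushed, caller need not */
	}

	return true;		/* caller must flush */
}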

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/mips/kvm/mmu.c      |  5 ++++-
 arch/x86/kvm/mmu.c       |  6 +++++-
 include/linux/kvm_host.h |  2 +-
 virt/kvm/arm/mmu.c       |  5 ++++-
 virt/kvm/kvm_main.c      | 10 ++++------
 5 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 97e538a8c1be..f36ccb2d43ec 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -437,8 +437,10 @@ int kvm_mips_mkclean_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn)
  *
  * Walks bits set in mask write protects the associated pte's. Caller must
  * acquire @kvm->mmu_lock.
+ *
+ * Returns: Whether caller needs to flush tlb.
  */
-void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 		struct kvm_memory_slot *slot,
 		gfn_t gfn_offset, unsigned long mask)
 {
@@ -447,6 +449,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	gfn_t end = base_gfn + __fls(mask);
 
 	kvm_mips_mkclean_gpa_pt(kvm, start, end);
+	return true;
 }
 
 /*
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 6d4f7dfeaa57..9d8ee6ea02db 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1676,8 +1676,10 @@ EXPORT_SYMBOL_GPL(kvm_mmu_clear_dirty_pt_masked);
  *
  * Used when we do not need to care about huge page mappings: e.g. during dirty
  * logging we do not have any such mappings.
+ *
+ * Return value means whether caller needs to flush tlb.
  */
-void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 				struct kvm_memory_slot *slot,
 				gfn_t gfn_offset, unsigned long mask)
 {
@@ -1686,6 +1688,8 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 				mask);
 	else
 		kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+
+	return true;
 }
 
 /**
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c38cc5eb7e73..e86b8c38342b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -759,7 +759,7 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 int kvm_clear_dirty_log_protect(struct kvm *kvm,
 				struct kvm_clear_dirty_log *log, bool *flush);
 
-void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 					struct kvm_memory_slot *slot,
 					gfn_t gfn_offset,
 					unsigned long mask);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 3053bf2584f8..232007ff3208 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1564,12 +1564,15 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
  *
  * It calls kvm_mmu_write_protect_pt_masked to write protect selected pages to
  * enable dirty logging for them.
+ *
+ * Return value means whether caller needs to flush tlb.
  */
-void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 		struct kvm_memory_slot *slot,
 		gfn_t gfn_offset, unsigned long mask)
 {
 	kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+	return true;
 }
 
 static void clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e75dbb15fd09..bcbe059d98be 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1202,13 +1202,12 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 			if (!dirty_bitmap[i])
 				continue;
 
-			*flush = true;
 			mask = xchg(&dirty_bitmap[i], 0);
 			dirty_bitmap_buffer[i] = mask;
 
 			offset = i * BITS_PER_LONG;
-			kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
-								offset, mask);
+			*flush = kvm_arch_mmu_enable_log_dirty_pt_masked(kvm,
+							memslot, offset, mask);
 		}
 		spin_unlock(&kvm->mmu_lock);
 	}
@@ -1275,9 +1274,8 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
 		 * a problem if userspace sets them in log->dirty_bitmap.
 		*/
 		if (mask) {
-			*flush = true;
-			kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
-								offset, mask);
+			*flush = kvm_arch_mmu_enable_log_dirty_pt_masked(kvm,
+					memslot, offset, mask);
 		}
 	}
 	spin_unlock(&kvm->mmu_lock);
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 8/11] KVM: Make kvm_arch_mmu_enable_log_dirty_pt_masked() return value
@ 2019-01-04  8:54   ` lantianyu1986
  0 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:54 UTC (permalink / raw)
  Cc: kvm, catalin.marinas, will.deacon, paulus, hpa, kys, kvmarm, mpe,
	x86, linux, michael.h.kelley, mingo, benh, jhogan, linux-mips,
	Lan Tianyu, marc.zyngier, kvm-ppc, bp, tglx, linux-arm-kernel,
	linux-kernel, ralf, paul.burton, pbonzini, vkuznets,
	linuxppc-dev

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch is to make kvm_arch_mmu_enable_log_dirty_pt_masked() return value
and caller can use it to determine whether tlb flush is necessary.
kvm_get_dirty_log_protect() and kvm_clear_dirty_log_protect() use the return
value of kvm_arch_mmu_enable_log_dirty_pt_masked() to populate flush
parameter.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/mips/kvm/mmu.c      |  5 ++++-
 arch/x86/kvm/mmu.c       |  6 +++++-
 include/linux/kvm_host.h |  2 +-
 virt/kvm/arm/mmu.c       |  5 ++++-
 virt/kvm/kvm_main.c      | 10 ++++------
 5 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 97e538a8c1be..f36ccb2d43ec 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -437,8 +437,10 @@ int kvm_mips_mkclean_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn)
  *
  * Walks bits set in mask write protects the associated pte's. Caller must
  * acquire @kvm->mmu_lock.
+ *
+ * Returns: Whether caller needs to flush tlb.
  */
-void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 		struct kvm_memory_slot *slot,
 		gfn_t gfn_offset, unsigned long mask)
 {
@@ -447,6 +449,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	gfn_t end = base_gfn + __fls(mask);
 
 	kvm_mips_mkclean_gpa_pt(kvm, start, end);
+	return true;
 }
 
 /*
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 6d4f7dfeaa57..9d8ee6ea02db 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1676,8 +1676,10 @@ EXPORT_SYMBOL_GPL(kvm_mmu_clear_dirty_pt_masked);
  *
  * Used when we do not need to care about huge page mappings: e.g. during dirty
  * logging we do not have any such mappings.
+ *
+ * Return value means whether caller needs to flush tlb.
  */
-void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 				struct kvm_memory_slot *slot,
 				gfn_t gfn_offset, unsigned long mask)
 {
@@ -1686,6 +1688,8 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 				mask);
 	else
 		kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+
+	return true;
 }
 
 /**
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c38cc5eb7e73..e86b8c38342b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -759,7 +759,7 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 int kvm_clear_dirty_log_protect(struct kvm *kvm,
 				struct kvm_clear_dirty_log *log, bool *flush);
 
-void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 					struct kvm_memory_slot *slot,
 					gfn_t gfn_offset,
 					unsigned long mask);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 3053bf2584f8..232007ff3208 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1564,12 +1564,15 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
  *
  * It calls kvm_mmu_write_protect_pt_masked to write protect selected pages to
  * enable dirty logging for them.
+ *
+ * Return value means whether caller needs to flush tlb.
  */
-void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 		struct kvm_memory_slot *slot,
 		gfn_t gfn_offset, unsigned long mask)
 {
 	kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+	return true;
 }
 
 static void clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e75dbb15fd09..bcbe059d98be 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1202,13 +1202,12 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 			if (!dirty_bitmap[i])
 				continue;
 
-			*flush = true;
 			mask = xchg(&dirty_bitmap[i], 0);
 			dirty_bitmap_buffer[i] = mask;
 
 			offset = i * BITS_PER_LONG;
-			kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
-								offset, mask);
+			*flush = kvm_arch_mmu_enable_log_dirty_pt_masked(kvm,
+							memslot, offset, mask);
 		}
 		spin_unlock(&kvm->mmu_lock);
 	}
@@ -1275,9 +1274,8 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
 		 * a problem if userspace sets them in log->dirty_bitmap.
 		*/
 		if (mask) {
-			*flush = true;
-			kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
-								offset, mask);
+			*flush = kvm_arch_mmu_enable_log_dirty_pt_masked(kvm,
+					memslot, offset, mask);
 		}
 	}
 	spin_unlock(&kvm->mmu_lock);
-- 
2.14.4

^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 8/11] KVM: Make kvm_arch_mmu_enable_log_dirty_pt_masked() return value
@ 2019-01-04  8:54   ` lantianyu1986
  0 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:54 UTC (permalink / raw)
  Cc: kvm, rkrcmar, catalin.marinas, will.deacon, christoffer.dall,
	hpa, kys, kvmarm, x86, linux, michael.h.kelley, mingo, jhogan,
	linux-mips, Lan Tianyu, marc.zyngier, kvm-ppc, bp, tglx,
	linux-arm-kernel, linux-kernel, ralf, paul.burton, pbonzini,
	vkuznets, linuxppc-dev

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch is to make kvm_arch_mmu_enable_log_dirty_pt_masked() return value
and caller can use it to determine whether tlb flush is necessary.
kvm_get_dirty_log_protect() and kvm_clear_dirty_log_protect() use the return
value of kvm_arch_mmu_enable_log_dirty_pt_masked() to populate flush
parameter.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/mips/kvm/mmu.c      |  5 ++++-
 arch/x86/kvm/mmu.c       |  6 +++++-
 include/linux/kvm_host.h |  2 +-
 virt/kvm/arm/mmu.c       |  5 ++++-
 virt/kvm/kvm_main.c      | 10 ++++------
 5 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 97e538a8c1be..f36ccb2d43ec 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -437,8 +437,10 @@ int kvm_mips_mkclean_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn)
  *
  * Walks bits set in mask write protects the associated pte's. Caller must
  * acquire @kvm->mmu_lock.
+ *
+ * Returns: Whether caller needs to flush tlb.
  */
-void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 		struct kvm_memory_slot *slot,
 		gfn_t gfn_offset, unsigned long mask)
 {
@@ -447,6 +449,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	gfn_t end = base_gfn + __fls(mask);
 
 	kvm_mips_mkclean_gpa_pt(kvm, start, end);
+	return true;
 }
 
 /*
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 6d4f7dfeaa57..9d8ee6ea02db 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1676,8 +1676,10 @@ EXPORT_SYMBOL_GPL(kvm_mmu_clear_dirty_pt_masked);
  *
  * Used when we do not need to care about huge page mappings: e.g. during dirty
  * logging we do not have any such mappings.
+ *
+ * Return value means whether caller needs to flush tlb.
  */
-void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 				struct kvm_memory_slot *slot,
 				gfn_t gfn_offset, unsigned long mask)
 {
@@ -1686,6 +1688,8 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 				mask);
 	else
 		kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+
+	return true;
 }
 
 /**
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c38cc5eb7e73..e86b8c38342b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -759,7 +759,7 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 int kvm_clear_dirty_log_protect(struct kvm *kvm,
 				struct kvm_clear_dirty_log *log, bool *flush);
 
-void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 					struct kvm_memory_slot *slot,
 					gfn_t gfn_offset,
 					unsigned long mask);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 3053bf2584f8..232007ff3208 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1564,12 +1564,15 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
  *
  * It calls kvm_mmu_write_protect_pt_masked to write protect selected pages to
  * enable dirty logging for them.
+ *
+ * Return value means whether caller needs to flush tlb.
  */
-void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 		struct kvm_memory_slot *slot,
 		gfn_t gfn_offset, unsigned long mask)
 {
 	kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+	return true;
 }
 
 static void clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e75dbb15fd09..bcbe059d98be 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1202,13 +1202,12 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
 			if (!dirty_bitmap[i])
 				continue;
 
-			*flush = true;
 			mask = xchg(&dirty_bitmap[i], 0);
 			dirty_bitmap_buffer[i] = mask;
 
 			offset = i * BITS_PER_LONG;
-			kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
-								offset, mask);
+			*flush = kvm_arch_mmu_enable_log_dirty_pt_masked(kvm,
+							memslot, offset, mask);
 		}
 		spin_unlock(&kvm->mmu_lock);
 	}
@@ -1275,9 +1274,8 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
 		 * a problem if userspace sets them in log->dirty_bitmap.
 		*/
 		if (mask) {
-			*flush = true;
-			kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
-								offset, mask);
+			*flush = kvm_arch_mmu_enable_log_dirty_pt_masked(kvm,
+					memslot, offset, mask);
 		}
 	}
 	spin_unlock(&kvm->mmu_lock);
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 9/11] KVM/MMU: Flush tlb in the kvm_mmu_write_protect_pt_masked()
  2019-01-04  8:53 ` lantianyu1986
                     ` (2 preceding siblings ...)
  (?)
@ 2019-01-04  8:54   ` lantianyu1986
  -1 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:54 UTC (permalink / raw)
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, pbonzini, rkrcmar, tglx, mingo, bp, hpa, x86,
	linux-arm-kernel, kvmarm, linux-kernel, linux-mips, kvm-ppc,
	linuxppc-dev, kvm, michael.h.kelley, kys, vkuznets

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch flushes the tlb directly in kvm_mmu_write_protect_pt_masked() when
tlb range flush is available, and makes kvm_mmu_write_protect_pt_masked()
return whether the caller still needs to flush the tlb.
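
For reference, the snippet below is a minimal user-space sketch of the
mask-walk plus range-flush pattern that the diff adds to
kvm_mmu_write_protect_pt_masked(). Only kvm_available_flush_tlb_with_range()
and kvm_flush_remote_tlbs_with_address() correspond to helpers introduced
earlier in this series; the demo_* stubs, the printf output and main() are
illustrative assumptions, and the sketch keeps a working copy of the mask so
the final page count still reflects the bits that were walked.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the series' helpers: here "range flush" support is just a
 * hard-coded capability and the flush itself only prints what a real
 * implementation would hand to the hypercall. */
static bool demo_available_flush_tlb_with_range(void)
{
        return true;
}

static void demo_flush_remote_tlbs_with_address(unsigned long start_gfn,
                                                unsigned long pages)
{
        printf("range flush: gfn %#lx, %lu page(s)\n", start_gfn, pages);
}

/* Mirrors the control flow of kvm_mmu_write_protect_pt_masked() after the
 * patch: walk the set bits of the mask, "write protect" each gfn, and when
 * a range flush is available issue it here and return false so the caller
 * skips the full flush; otherwise return true and leave it to the caller. */
static bool demo_write_protect_pt_masked(unsigned long base_gfn,
                                         unsigned long gfn_offset,
                                         unsigned long mask)
{
        unsigned long work = mask;      /* working copy, see note above */
        bool flush = false;

        while (work) {
                unsigned long gfn = base_gfn + gfn_offset +
                                    __builtin_ctzl(work);   /* __ffs() */

                printf("write protect gfn %#lx\n", gfn);
                flush = true;           /* __rmap_write_protect() found sptes */
                work &= work - 1;       /* clear the first set bit */
        }

        if (flush && demo_available_flush_tlb_with_range()) {
                demo_flush_remote_tlbs_with_address(base_gfn + gfn_offset,
                                        __builtin_popcountl(mask));
                flush = false;
        }

        return flush;
}

int main(void)
{
        /* dirty bits 0, 3 and 5 of a window starting at gfn 0x1000 */
        bool need_flush = demo_write_protect_pt_masked(0x1000, 0, 0x29);

        printf("caller must flush: %s\n", need_flush ? "yes" : "no");
        return 0;
}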

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/x86/kvm/mmu.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9d8ee6ea02db..30ed7a79335b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1624,20 +1624,30 @@ static bool __rmap_set_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
  * Used when we do not need to care about huge page mappings: e.g. during dirty
  * logging we do not have any such mappings.
  */
-static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
+static bool kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 				     struct kvm_memory_slot *slot,
 				     gfn_t gfn_offset, unsigned long mask)
 {
 	struct kvm_rmap_head *rmap_head;
+	bool flush = false;
 
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PT_PAGE_TABLE_LEVEL, slot);
-		__rmap_write_protect(kvm, rmap_head, false);
+		flush |= __rmap_write_protect(kvm, rmap_head, false);
 
 		/* clear the first set bit */
 		mask &= mask - 1;
 	}
+
+	if (flush && kvm_available_flush_tlb_with_range()) {
+		kvm_flush_remote_tlbs_with_address(kvm,
+				slot->base_gfn + gfn_offset,
+				hweight_long(mask));
+		flush = false;
+	}
+
+	return flush;
 }
 
 /**
@@ -1683,13 +1693,14 @@ bool kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 				struct kvm_memory_slot *slot,
 				gfn_t gfn_offset, unsigned long mask)
 {
-	if (kvm_x86_ops->enable_log_dirty_pt_masked)
+	if (kvm_x86_ops->enable_log_dirty_pt_masked) {
 		kvm_x86_ops->enable_log_dirty_pt_masked(kvm, slot, gfn_offset,
 				mask);
-	else
-		kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
-
-	return true;
+		return true;
+	} else {
+		return kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset,
+				mask);
+	}
 }
 
 /**
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 10/11] KVM: Add flush parameter for kvm_age_hva()
  2019-01-04  8:53 ` lantianyu1986
                     ` (2 preceding siblings ...)
  (?)
@ 2019-01-04  8:54   ` lantianyu1986
  -1 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:54 UTC (permalink / raw)
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, pbonzini, rkrcmar, tglx, mingo, bp, hpa, x86,
	linux-arm-kernel, kvmarm, linux-kernel, linux-mips, kvm-ppc,
	linuxppc-dev, kvm, michael.h.kelley, kys, vkuznets

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch adds a flush parameter to kvm_age_hva() so that the inner code
can check whether a tlb flush is necessary when the associated sptes are
changed. The platform may then flush the tlbs of just the affected addresses
instead of the entire table.
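
To make the new calling convention concrete, here is a small user-space
sketch (not the kernel implementation): demo_age_hva(), its fake page walk
and its output are assumptions for illustration only, while the true/false
convention matches how kvm_mmu_notifier_clear_flush_young() and
kvm_mmu_notifier_clear_young() use the flag in the diff below.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for kvm_age_hva() after this patch: the new flush
 * argument tells the ageing code whether the caller also wants stale tlb
 * entries flushed while sptes are aged. */
static int demo_age_hva(unsigned long start, unsigned long end, bool flush)
{
        int young = 0;
        unsigned long hva;

        for (hva = start; hva < end; hva += 4096) {
                /* pretend every other page had its accessed bit set */
                if ((hva >> 12) & 1) {
                        young++;
                        if (flush)
                                printf("flush tlb for hva %#lx\n", hva);
                }
        }

        return young;
}

int main(void)
{
        /* clear_flush_young style: age and flush in one pass */
        printf("young pages: %d\n", demo_age_hva(0x10000, 0x14000, true));

        /* clear_young style: age only, no tlb flush requested */
        printf("young pages (no flush): %d\n",
               demo_age_hva(0x10000, 0x14000, false));
        return 0;
}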

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/arm/include/asm/kvm_host.h     | 3 ++-
 arch/arm64/include/asm/kvm_host.h   | 3 ++-
 arch/mips/include/asm/kvm_host.h    | 3 ++-
 arch/mips/kvm/mmu.c                 | 3 ++-
 arch/powerpc/include/asm/kvm_host.h | 3 ++-
 arch/powerpc/kvm/book3s.c           | 3 ++-
 arch/powerpc/kvm/e500_mmu_host.c    | 3 ++-
 arch/x86/include/asm/kvm_host.h     | 3 ++-
 arch/x86/kvm/mmu.c                  | 5 +++--
 virt/kvm/arm/mmu.c                  | 3 ++-
 virt/kvm/kvm_main.c                 | 4 ++--
 11 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 4f3400a74a17..7d7f9ff27500 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -229,7 +229,8 @@ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 063886be25ad..6f4539e13a26 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -361,7 +361,8 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 int kvm_unmap_hva_range(struct kvm *kvm,
 			unsigned long start, unsigned long end);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 71c3f21d80d5..ae1b079ad740 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -934,7 +934,8 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
 int kvm_unmap_hva_range(struct kvm *kvm,
 			unsigned long start, unsigned long end);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 /* Emulation */
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index f36ccb2d43ec..b69baf01dbac 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -582,7 +582,8 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
 	return pte_young(*gpa_pte);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	return handle_hva_to_gpa(kvm, start, end, kvm_age_hva_handler, NULL);
 }
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 0f98f00da2ea..d160e6b8ccfb 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -70,7 +70,8 @@
 
 extern int kvm_unmap_hva_range(struct kvm *kvm,
 			       unsigned long start, unsigned long end);
-extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		       bool flush);
 extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 extern int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index bd1a677dd9e4..430a8b81ef81 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -841,7 +841,8 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
 	return kvm->arch.kvm_ops->unmap_hva_range(kvm, start, end);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	return kvm->arch.kvm_ops->age_hva(kvm, start, end);
 }
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index c3f312b2bcb3..e2f6c23ec39a 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -745,7 +745,8 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
 	return 0;
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	/* XXX could be more clever ;) */
 	return 0;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 22dbaa8fba32..4f3ff9d5b631 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1518,7 +1518,8 @@ asmlinkage void kvm_spurious_fault(void);
 
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 30ed7a79335b..a5728f51bf7d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1995,9 +1995,10 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 			KVM_PAGES_PER_HPAGE(sp->role.level));
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
-	return kvm_handle_hva_range(kvm, start, end, 0, kvm_age_rmapp);
+	return kvm_handle_hva_range(kvm, start, end, flush, kvm_age_rmapp);
 }
 
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 232007ff3208..bbea7cfd6909 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -2110,7 +2110,8 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *
 		return pte_young(*pte);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	if (!kvm->arch.pgd)
 		return 0;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index bcbe059d98be..afec5787fc1d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -432,7 +432,7 @@ static int kvm_mmu_notifier_clear_flush_young(struct mmu_notifier *mn,
 	idx = srcu_read_lock(&kvm->srcu);
 	spin_lock(&kvm->mmu_lock);
 
-	young = kvm_age_hva(kvm, start, end);
+	young = kvm_age_hva(kvm, start, end, true);
 	if (young)
 		kvm_flush_remote_tlbs(kvm);
 
@@ -465,7 +465,7 @@ static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
 	 * cadence. If we find this inaccurate, we might come up with a
 	 * more sophisticated heuristic later.
 	 */
-	young = kvm_age_hva(kvm, start, end);
+	young = kvm_age_hva(kvm, start, end, false);
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, idx);
 
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 10/11] KVM: Add flush parameter for kvm_age_hva()
@ 2019-01-04  8:54   ` lantianyu1986
  0 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:54 UTC (permalink / raw)
  Cc: kvm, catalin.marinas, will.deacon, paulus, hpa, kys, kvmarm, mpe,
	x86, linux, michael.h.kelley, mingo, benh, jhogan, linux-mips,
	Lan Tianyu, marc.zyngier, kvm-ppc, bp, tglx, linux-arm-kernel,
	linux-kernel, ralf, paul.burton, pbonzini, vkuznets,
	linuxppc-dev

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch is to add flush parameter for kvm_aga_hva() and inside code
can check whether tlb flush is necessary when associated sptes are changed.
The platform may just flush affected address tlbs instead of entire
table's.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/arm/include/asm/kvm_host.h     | 3 ++-
 arch/arm64/include/asm/kvm_host.h   | 3 ++-
 arch/mips/include/asm/kvm_host.h    | 3 ++-
 arch/mips/kvm/mmu.c                 | 3 ++-
 arch/powerpc/include/asm/kvm_host.h | 3 ++-
 arch/powerpc/kvm/book3s.c           | 3 ++-
 arch/powerpc/kvm/e500_mmu_host.c    | 3 ++-
 arch/x86/include/asm/kvm_host.h     | 3 ++-
 arch/x86/kvm/mmu.c                  | 5 +++--
 virt/kvm/arm/mmu.c                  | 3 ++-
 virt/kvm/kvm_main.c                 | 4 ++--
 11 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 4f3400a74a17..7d7f9ff27500 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -229,7 +229,8 @@ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 063886be25ad..6f4539e13a26 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -361,7 +361,8 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 int kvm_unmap_hva_range(struct kvm *kvm,
 			unsigned long start, unsigned long end);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 71c3f21d80d5..ae1b079ad740 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -934,7 +934,8 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
 int kvm_unmap_hva_range(struct kvm *kvm,
 			unsigned long start, unsigned long end);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 /* Emulation */
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index f36ccb2d43ec..b69baf01dbac 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -582,7 +582,8 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
 	return pte_young(*gpa_pte);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	return handle_hva_to_gpa(kvm, start, end, kvm_age_hva_handler, NULL);
 }
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 0f98f00da2ea..d160e6b8ccfb 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -70,7 +70,8 @@
 
 extern int kvm_unmap_hva_range(struct kvm *kvm,
 			       unsigned long start, unsigned long end);
-extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		       bool flush);
 extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 extern int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index bd1a677dd9e4..430a8b81ef81 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -841,7 +841,8 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
 	return kvm->arch.kvm_ops->unmap_hva_range(kvm, start, end);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	return kvm->arch.kvm_ops->age_hva(kvm, start, end);
 }
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index c3f312b2bcb3..e2f6c23ec39a 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -745,7 +745,8 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
 	return 0;
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	/* XXX could be more clever ;) */
 	return 0;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 22dbaa8fba32..4f3ff9d5b631 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1518,7 +1518,8 @@ asmlinkage void kvm_spurious_fault(void);
 
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 30ed7a79335b..a5728f51bf7d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1995,9 +1995,10 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 			KVM_PAGES_PER_HPAGE(sp->role.level));
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
-	return kvm_handle_hva_range(kvm, start, end, 0, kvm_age_rmapp);
+	return kvm_handle_hva_range(kvm, start, end, flush, kvm_age_rmapp);
 }
 
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 232007ff3208..bbea7cfd6909 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -2110,7 +2110,8 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *
 		return pte_young(*pte);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	if (!kvm->arch.pgd)
 		return 0;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index bcbe059d98be..afec5787fc1d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -432,7 +432,7 @@ static int kvm_mmu_notifier_clear_flush_young(struct mmu_notifier *mn,
 	idx = srcu_read_lock(&kvm->srcu);
 	spin_lock(&kvm->mmu_lock);
 
-	young = kvm_age_hva(kvm, start, end);
+	young = kvm_age_hva(kvm, start, end, true);
 	if (young)
 		kvm_flush_remote_tlbs(kvm);
 
@@ -465,7 +465,7 @@ static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
 	 * cadence. If we find this inaccurate, we might come up with a
 	 * more sophisticated heuristic later.
 	 */
-	young = kvm_age_hva(kvm, start, end);
+	young = kvm_age_hva(kvm, start, end, false);
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, idx);
 
-- 
2.14.4

^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 10/11] KVM: Add flush parameter for kvm_age_hva()
@ 2019-01-04  8:54   ` lantianyu1986
  0 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:54 UTC (permalink / raw)
  Cc: kvm, rkrcmar, catalin.marinas, will.deacon, christoffer.dall,
	hpa, kys, kvmarm, x86, linux, michael.h.kelley, mingo, jhogan,
	linux-mips, Lan Tianyu, marc.zyngier, kvm-ppc, bp, tglx,
	linux-arm-kernel, linux-kernel, ralf, paul.burton, pbonzini,
	vkuznets, linuxppc-dev

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch is to add flush parameter for kvm_aga_hva() and inside code
can check whether tlb flush is necessary when associated sptes are changed.
The platform may just flush affected address tlbs instead of entire
table's.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/arm/include/asm/kvm_host.h     | 3 ++-
 arch/arm64/include/asm/kvm_host.h   | 3 ++-
 arch/mips/include/asm/kvm_host.h    | 3 ++-
 arch/mips/kvm/mmu.c                 | 3 ++-
 arch/powerpc/include/asm/kvm_host.h | 3 ++-
 arch/powerpc/kvm/book3s.c           | 3 ++-
 arch/powerpc/kvm/e500_mmu_host.c    | 3 ++-
 arch/x86/include/asm/kvm_host.h     | 3 ++-
 arch/x86/kvm/mmu.c                  | 5 +++--
 virt/kvm/arm/mmu.c                  | 3 ++-
 virt/kvm/kvm_main.c                 | 4 ++--
 11 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 4f3400a74a17..7d7f9ff27500 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -229,7 +229,8 @@ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 063886be25ad..6f4539e13a26 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -361,7 +361,8 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 int kvm_unmap_hva_range(struct kvm *kvm,
 			unsigned long start, unsigned long end);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 71c3f21d80d5..ae1b079ad740 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -934,7 +934,8 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
 int kvm_unmap_hva_range(struct kvm *kvm,
 			unsigned long start, unsigned long end);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 /* Emulation */
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index f36ccb2d43ec..b69baf01dbac 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -582,7 +582,8 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
 	return pte_young(*gpa_pte);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	return handle_hva_to_gpa(kvm, start, end, kvm_age_hva_handler, NULL);
 }
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 0f98f00da2ea..d160e6b8ccfb 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -70,7 +70,8 @@
 
 extern int kvm_unmap_hva_range(struct kvm *kvm,
 			       unsigned long start, unsigned long end);
-extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		       bool flush);
 extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 extern int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index bd1a677dd9e4..430a8b81ef81 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -841,7 +841,8 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
 	return kvm->arch.kvm_ops->unmap_hva_range(kvm, start, end);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	return kvm->arch.kvm_ops->age_hva(kvm, start, end);
 }
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index c3f312b2bcb3..e2f6c23ec39a 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -745,7 +745,8 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
 	return 0;
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	/* XXX could be more clever ;) */
 	return 0;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 22dbaa8fba32..4f3ff9d5b631 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1518,7 +1518,8 @@ asmlinkage void kvm_spurious_fault(void);
 
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 30ed7a79335b..a5728f51bf7d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1995,9 +1995,10 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 			KVM_PAGES_PER_HPAGE(sp->role.level));
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
-	return kvm_handle_hva_range(kvm, start, end, 0, kvm_age_rmapp);
+	return kvm_handle_hva_range(kvm, start, end, flush, kvm_age_rmapp);
 }
 
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 232007ff3208..bbea7cfd6909 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -2110,7 +2110,8 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *
 		return pte_young(*pte);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	if (!kvm->arch.pgd)
 		return 0;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index bcbe059d98be..afec5787fc1d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -432,7 +432,7 @@ static int kvm_mmu_notifier_clear_flush_young(struct mmu_notifier *mn,
 	idx = srcu_read_lock(&kvm->srcu);
 	spin_lock(&kvm->mmu_lock);
 
-	young = kvm_age_hva(kvm, start, end);
+	young = kvm_age_hva(kvm, start, end, true);
 	if (young)
 		kvm_flush_remote_tlbs(kvm);
 
@@ -465,7 +465,7 @@ static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
 	 * cadence. If we find this inaccurate, we might come up with a
 	 * more sophisticated heuristic later.
 	 */
-	young = kvm_age_hva(kvm, start, end);
+	young = kvm_age_hva(kvm, start, end, false);
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, idx);
 
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 10/11] KVM: Add flush parameter for kvm_age_hva()
@ 2019-01-04  8:54   ` lantianyu1986
  0 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:54 UTC (permalink / raw)
  Cc: kvm, rkrcmar, catalin.marinas, will.deacon, christoffer.dall,
	paulus, hpa, kys, kvmarm, mpe, x86, linux, michael.h.kelley,
	mingo, benh, jhogan, linux-mips, Lan Tianyu, marc.zyngier,
	kvm-ppc, bp, tglx, linux-arm-kernel, linux-kernel, ralf,
	paul.burton, pbonzini, vkuznets, linuxppc-dev

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch is to add flush parameter for kvm_aga_hva() and inside code
can check whether tlb flush is necessary when associated sptes are changed.
The platform may just flush affected address tlbs instead of entire
table's.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/arm/include/asm/kvm_host.h     | 3 ++-
 arch/arm64/include/asm/kvm_host.h   | 3 ++-
 arch/mips/include/asm/kvm_host.h    | 3 ++-
 arch/mips/kvm/mmu.c                 | 3 ++-
 arch/powerpc/include/asm/kvm_host.h | 3 ++-
 arch/powerpc/kvm/book3s.c           | 3 ++-
 arch/powerpc/kvm/e500_mmu_host.c    | 3 ++-
 arch/x86/include/asm/kvm_host.h     | 3 ++-
 arch/x86/kvm/mmu.c                  | 5 +++--
 virt/kvm/arm/mmu.c                  | 3 ++-
 virt/kvm/kvm_main.c                 | 4 ++--
 11 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 4f3400a74a17..7d7f9ff27500 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -229,7 +229,8 @@ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 063886be25ad..6f4539e13a26 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -361,7 +361,8 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 int kvm_unmap_hva_range(struct kvm *kvm,
 			unsigned long start, unsigned long end);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 71c3f21d80d5..ae1b079ad740 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -934,7 +934,8 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
 int kvm_unmap_hva_range(struct kvm *kvm,
 			unsigned long start, unsigned long end);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 /* Emulation */
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index f36ccb2d43ec..b69baf01dbac 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -582,7 +582,8 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
 	return pte_young(*gpa_pte);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	return handle_hva_to_gpa(kvm, start, end, kvm_age_hva_handler, NULL);
 }
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 0f98f00da2ea..d160e6b8ccfb 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -70,7 +70,8 @@
 
 extern int kvm_unmap_hva_range(struct kvm *kvm,
 			       unsigned long start, unsigned long end);
-extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		       bool flush);
 extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 extern int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index bd1a677dd9e4..430a8b81ef81 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -841,7 +841,8 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
 	return kvm->arch.kvm_ops->unmap_hva_range(kvm, start, end);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	return kvm->arch.kvm_ops->age_hva(kvm, start, end);
 }
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index c3f312b2bcb3..e2f6c23ec39a 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -745,7 +745,8 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
 	return 0;
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	/* XXX could be more clever ;) */
 	return 0;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 22dbaa8fba32..4f3ff9d5b631 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1518,7 +1518,8 @@ asmlinkage void kvm_spurious_fault(void);
 
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 30ed7a79335b..a5728f51bf7d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1995,9 +1995,10 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 			KVM_PAGES_PER_HPAGE(sp->role.level));
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
-	return kvm_handle_hva_range(kvm, start, end, 0, kvm_age_rmapp);
+	return kvm_handle_hva_range(kvm, start, end, flush, kvm_age_rmapp);
 }
 
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 232007ff3208..bbea7cfd6909 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -2110,7 +2110,8 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *
 		return pte_young(*pte);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	if (!kvm->arch.pgd)
 		return 0;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index bcbe059d98be..afec5787fc1d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -432,7 +432,7 @@ static int kvm_mmu_notifier_clear_flush_young(struct mmu_notifier *mn,
 	idx = srcu_read_lock(&kvm->srcu);
 	spin_lock(&kvm->mmu_lock);
 
-	young = kvm_age_hva(kvm, start, end);
+	young = kvm_age_hva(kvm, start, end, true);
 	if (young)
 		kvm_flush_remote_tlbs(kvm);
 
@@ -465,7 +465,7 @@ static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
 	 * cadence. If we find this inaccurate, we might come up with a
 	 * more sophisticated heuristic later.
 	 */
-	young = kvm_age_hva(kvm, start, end);
+	young = kvm_age_hva(kvm, start, end, false);
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, idx);
 
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 10/11] KVM: Add flush parameter for kvm_age_hva()
@ 2019-01-04  8:54   ` lantianyu1986
  0 siblings, 0 replies; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:54 UTC (permalink / raw)
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, pbonzini, rkrcmar, tglx, mingo, bp, hpa, x86,
	linux-arm-kernel, kvmarm, linux-kernel, linux-mips, kvm-ppc,
	linuxppc-dev, kvm, michael.h.kelley, kys, vkuznets

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch adds a flush parameter to kvm_age_hva() so that the inner code
can check whether a TLB flush is necessary when the associated sptes are
changed. The platform may then flush only the TLB entries for the affected
address range instead of the TLB entries for the entire table.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/arm/include/asm/kvm_host.h     | 3 ++-
 arch/arm64/include/asm/kvm_host.h   | 3 ++-
 arch/mips/include/asm/kvm_host.h    | 3 ++-
 arch/mips/kvm/mmu.c                 | 3 ++-
 arch/powerpc/include/asm/kvm_host.h | 3 ++-
 arch/powerpc/kvm/book3s.c           | 3 ++-
 arch/powerpc/kvm/e500_mmu_host.c    | 3 ++-
 arch/x86/include/asm/kvm_host.h     | 3 ++-
 arch/x86/kvm/mmu.c                  | 5 +++--
 virt/kvm/arm/mmu.c                  | 3 ++-
 virt/kvm/kvm_main.c                 | 4 ++--
 11 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 4f3400a74a17..7d7f9ff27500 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -229,7 +229,8 @@ int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 063886be25ad..6f4539e13a26 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -361,7 +361,8 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 int kvm_unmap_hva_range(struct kvm *kvm,
 			unsigned long start, unsigned long end);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 71c3f21d80d5..ae1b079ad740 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -934,7 +934,8 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
 int kvm_unmap_hva_range(struct kvm *kvm,
 			unsigned long start, unsigned long end);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 /* Emulation */
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index f36ccb2d43ec..b69baf01dbac 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -582,7 +582,8 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
 	return pte_young(*gpa_pte);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	return handle_hva_to_gpa(kvm, start, end, kvm_age_hva_handler, NULL);
 }
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 0f98f00da2ea..d160e6b8ccfb 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -70,7 +70,8 @@
 
 extern int kvm_unmap_hva_range(struct kvm *kvm,
 			       unsigned long start, unsigned long end);
-extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		       bool flush);
 extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 extern int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index bd1a677dd9e4..430a8b81ef81 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -841,7 +841,8 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
 	return kvm->arch.kvm_ops->unmap_hva_range(kvm, start, end);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	return kvm->arch.kvm_ops->age_hva(kvm, start, end);
 }
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index c3f312b2bcb3..e2f6c23ec39a 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -745,7 +745,8 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
 	return 0;
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	/* XXX could be more clever ;) */
 	return 0;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 22dbaa8fba32..4f3ff9d5b631 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1518,7 +1518,8 @@ asmlinkage void kvm_spurious_fault(void);
 
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end);
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 30ed7a79335b..a5728f51bf7d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1995,9 +1995,10 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 			KVM_PAGES_PER_HPAGE(sp->role.level));
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
-	return kvm_handle_hva_range(kvm, start, end, 0, kvm_age_rmapp);
+	return kvm_handle_hva_range(kvm, start, end, flush, kvm_age_rmapp);
 }
 
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 232007ff3208..bbea7cfd6909 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -2110,7 +2110,8 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *
 		return pte_young(*pte);
 }
 
-int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
+		bool flush)
 {
 	if (!kvm->arch.pgd)
 		return 0;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index bcbe059d98be..afec5787fc1d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -432,7 +432,7 @@ static int kvm_mmu_notifier_clear_flush_young(struct mmu_notifier *mn,
 	idx = srcu_read_lock(&kvm->srcu);
 	spin_lock(&kvm->mmu_lock);
 
-	young = kvm_age_hva(kvm, start, end);
+	young = kvm_age_hva(kvm, start, end, true);
 	if (young)
 		kvm_flush_remote_tlbs(kvm);
 
@@ -465,7 +465,7 @@ static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
 	 * cadence. If we find this inaccurate, we might come up with a
 	 * more sophisticated heuristic later.
 	 */
-	young = kvm_age_hva(kvm, start, end);
+	young = kvm_age_hva(kvm, start, end, false);
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, idx);
 
-- 
2.14.4

^ permalink raw reply related	[flat|nested] 106+ messages in thread

* [PATCH 11/11] KVM/MMU: Flush tlb in the kvm_age_rmapp()
  2019-01-04  8:53 ` lantianyu1986
                   ` (13 preceding siblings ...)
@ 2019-01-04  8:54 ` lantianyu1986
       [not found]   ` <20190104161235.GB11288@linux.intel.com>
  -1 siblings, 1 reply; 106+ messages in thread
From: lantianyu1986 @ 2019-01-04  8:54 UTC (permalink / raw)
  Cc: Lan Tianyu, pbonzini, rkrcmar, tglx, mingo, bp, hpa, x86, kvm,
	linux-kernel, michael.h.kelley, kys, vkuznets, linux

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch flushes the TLB in kvm_age_rmapp() when TLB range flush is
available and a flush is requested.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/x86/kvm/mmu.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a5728f51bf7d..bc402a72956a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1958,10 +1958,17 @@ static int kvm_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	u64 *sptep;
 	struct rmap_iterator uninitialized_var(iter);
 	int young = 0;
+	bool flush = (bool)data;
 
 	for_each_rmap_spte(rmap_head, &iter, sptep)
 		young |= mmu_spte_age(sptep);
 
+	if (young && flush) {
+		kvm_flush_remote_tlbs_with_address(kvm, gfn,
+				KVM_PAGES_PER_HPAGE(level));
+		young = 0;
+	}
+
 	trace_kvm_age_page(gfn, level, slot, young);
 	return young;
 }
-- 
2.14.4


^ permalink raw reply related	[flat|nested] 106+ messages in thread

* Re: [PATCH 7/11] KVM: Remove redundant check in the kvm_get_dirty_log_protect()
  2019-01-04  8:54   ` lantianyu1986
@ 2019-01-04 15:50     ` Sean Christopherson
  -1 siblings, 0 replies; 106+ messages in thread
From: Sean Christopherson @ 2019-01-04 15:50 UTC (permalink / raw)
  To: lantianyu1986
  Cc: kvm, rkrcmar, catalin.marinas, will.deacon, linux-kernel,
	paulus, hpa, kys, kvmarm, mpe, x86, linux, michael.h.kelley,
	mingo, benh, jhogan, linux-mips, Lan Tianyu, marc.zyngier,
	kvm-ppc, bp, tglx, linux-arm-kernel, christoffer.dall, ralf,
	paul.burton, pbonzini, vkuznets, linuxppc-dev

On Fri, Jan 04, 2019 at 04:54:01PM +0800, lantianyu1986@gmail.com wrote:
> From: Lan Tianyu <Tianyu.Lan@microsoft.com>
> 
> The dirty bits have already been checked in the previous check of
> "dirty_bitmap" and mask must be non-zero value at this point.
> 
> Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
> ---
>  virt/kvm/kvm_main.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index cf7cc0554094..e75dbb15fd09 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1206,11 +1206,9 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
>  			mask = xchg(&dirty_bitmap[i], 0);
>  			dirty_bitmap_buffer[i] = mask;
>  
> -			if (mask) {
> -				offset = i * BITS_PER_LONG;
> -				kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
> -									offset, mask);
> -			}
> +			offset = i * BITS_PER_LONG;
> +			kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
> +								offset, mask);

Hmm, the check against mask was explicitly added by commit 58d2930f4ee3
("KVM: Eliminate extra function calls in kvm_get_dirty_log_protect()").
AFAIK KVM only *sets* bits in dirty_bitmap without holding slots_lock
and/or mmu_lock, so I agree that checking mask is redundant, but it'd be
nice to elaborate a bit more in the changelog.

At the very least this needs a Fixes tag for the aforementioned commit.
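
For example (illustrative only; the commit id and subject are the ones cited
above):

    Fixes: 58d2930f4ee3 ("KVM: Eliminate extra function calls in kvm_get_dirty_log_protect()")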


Tangentially related, does mmu_lock actually need to be held while we
walk dirty_bitmap in kvm_{clear,get}_dirty_log_protect()?  The bitmap
itself is protected by slots_lock (a lockdep assertion would be nice
too), e.g. can we grab the lock iff dirty_bitmap[i] != 0?
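
Something along these lines is what I have in mind (rough, untested sketch;
the loop bound and the other locals are unchanged from the existing function,
only the locking placement is the point):

        /* The bitmap itself is protected by slots_lock. */
        lockdep_assert_held(&kvm->slots_lock);

        for (i = 0; i < n / sizeof(long); i++) {
                if (!dirty_bitmap[i])
                        continue;

                /* Take mmu_lock only when this long has dirty bits set. */
                spin_lock(&kvm->mmu_lock);

                mask = xchg(&dirty_bitmap[i], 0);
                dirty_bitmap_buffer[i] = mask;

                offset = i * BITS_PER_LONG;
                kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
                                                        offset, mask);

                spin_unlock(&kvm->mmu_lock);
        }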

>  		}
>  		spin_unlock(&kvm->mmu_lock);
>  	}
> -- 
> 2.14.4
> 

^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 6/11] KVM/MMU: Flush tlb with range list in sync_page()
  2019-01-04  8:54   ` lantianyu1986
@ 2019-01-04 16:30     ` Sean Christopherson
  -1 siblings, 0 replies; 106+ messages in thread
From: Sean Christopherson @ 2019-01-04 16:30 UTC (permalink / raw)
  To: lantianyu1986
  Cc: kvm, rkrcmar, catalin.marinas, will.deacon, linux-kernel,
	paulus, hpa, kys, kvmarm, mpe, x86, linux, michael.h.kelley,
	mingo, benh, jhogan, linux-mips, Lan Tianyu, marc.zyngier,
	kvm-ppc, bp, tglx, linux-arm-kernel, christoffer.dall, ralf,
	paul.burton, pbonzini, vkuznets, linuxppc-dev

On Fri, Jan 04, 2019 at 04:54:00PM +0800, lantianyu1986@gmail.com wrote:
> From: Lan Tianyu <Tianyu.Lan@microsoft.com>
> 
> This patch is to flush tlb via flush list function.

More explanation of why this is beneficial would be nice.  Without the
context of the overall series it's not immediately obvious what
kvm_flush_remote_tlbs_with_list() does without a bit of digging.

> 
> Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
> ---
>  arch/x86/kvm/paging_tmpl.h | 16 ++++++++++++++--
>  1 file changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index 833e8855bbc9..866ccdea762e 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -973,6 +973,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>  	bool host_writable;
>  	gpa_t first_pte_gpa;
>  	int set_spte_ret = 0;
> +	LIST_HEAD(flush_list);
>  
>  	/* direct kvm_mmu_page can not be unsync. */
>  	BUG_ON(sp->role.direct);
> @@ -980,6 +981,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>  	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
>  
>  	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
> +		int tmp_spte_ret = 0;
>  		unsigned pte_access;
>  		pt_element_t gpte;
>  		gpa_t pte_gpa;
> @@ -1029,14 +1031,24 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>  
>  		host_writable = sp->spt[i] & SPTE_HOST_WRITEABLE;
>  
> -		set_spte_ret |= set_spte(vcpu, &sp->spt[i],
> +		tmp_spte_ret = set_spte(vcpu, &sp->spt[i],
>  					 pte_access, PT_PAGE_TABLE_LEVEL,
>  					 gfn, spte_to_pfn(sp->spt[i]),
>  					 true, false, host_writable);
> +
> +		if (kvm_available_flush_tlb_with_range()
> +		    && (tmp_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)) {
> +			struct kvm_mmu_page *leaf_sp = page_header(sp->spt[i]
> +					& PT64_BASE_ADDR_MASK);
> +			list_add(&leaf_sp->flush_link, &flush_list);
> +		}
> +
> +		set_spte_ret |= tmp_spte_ret;
> +
>  	}
>  
>  	if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)
> -		kvm_flush_remote_tlbs(vcpu->kvm);
> +		kvm_flush_remote_tlbs_with_list(vcpu->kvm, &flush_list);

This is a bit confusing and potentially fragile.  It's not obvious that
kvm_flush_remote_tlbs_with_list() is guaranteed to call
kvm_flush_remote_tlbs() when kvm_available_flush_tlb_with_range() is
false, and you're relying on the kvm_flush_remote_tlbs_with_list() call
chain to never optimize away the empty list case.  Rechecking
kvm_available_flush_tlb_with_range() isn't expensive.
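
E.g. keep the fallback explicit at the call site (sketch only):

        if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH) {
                if (kvm_available_flush_tlb_with_range())
                        kvm_flush_remote_tlbs_with_list(vcpu->kvm,
                                                        &flush_list);
                else
                        kvm_flush_remote_tlbs(vcpu->kvm);
        }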

>  
>  	return nr_present;
>  }
> -- 
> 2.14.4
> 

^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 7/11] KVM: Remove redundant check in the kvm_get_dirty_log_protect()
  2019-01-04 15:50     ` Sean Christopherson
@ 2019-01-04 21:27       ` Sean Christopherson
  -1 siblings, 0 replies; 106+ messages in thread
From: Sean Christopherson @ 2019-01-04 21:27 UTC (permalink / raw)
  To: lantianyu1986
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, pbonzini, rkrcmar, tglx, mingo, bp, hpa, x86,
	linux-arm-kernel, kvmarm, linux-kernel, linux-mips, kvm-ppc,
	linuxppc-dev, kvm, michael.h.kelley, kys, vkuznets

On Fri, Jan 04, 2019 at 07:50:36AM -0800, Sean Christopherson wrote:
> On Fri, Jan 04, 2019 at 04:54:01PM +0800, lantianyu1986@gmail.com wrote:
> > From: Lan Tianyu <Tianyu.Lan@microsoft.com>
> > 
> > The dirty bits have already been checked in the previous check of
> > "dirty_bitmap" and mask must be non-zero value at this point.
> > 
> > Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
> > ---
> >  virt/kvm/kvm_main.c | 8 +++-----
> >  1 file changed, 3 insertions(+), 5 deletions(-)
> > 
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index cf7cc0554094..e75dbb15fd09 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -1206,11 +1206,9 @@ int kvm_get_dirty_log_protect(struct kvm *kvm,
> >  			mask = xchg(&dirty_bitmap[i], 0);
> >  			dirty_bitmap_buffer[i] = mask;
> >  
> > -			if (mask) {
> > -				offset = i * BITS_PER_LONG;
> > -				kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
> > -									offset, mask);
> > -			}
> > +			offset = i * BITS_PER_LONG;
> > +			kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
> > +								offset, mask);
> 
> Hmm, the check against mask was explicitly added by commit 58d2930f4ee3
> ("KVM: Eliminate extra function calls in kvm_get_dirty_log_protect()").
> AFAIK KVM only *sets* bits in dirty_bitmap without holding slots_lock
> and/or mmu_lock, so I agree that checking mask is redundant, but it'd be
> nice to elaborate a bit more in the changelog.
> 
> At the very least this needs a Fixes tag for the aforementioned commit.

Actually, this can be a straight revert of 58d2930f4ee3.

^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 11/11] KVM/MMU: Flush tlb in the kvm_age_rmapp()
       [not found]   ` <20190104161235.GB11288@linux.intel.com>
@ 2019-01-07  3:42     ` Tianyu Lan
  2019-01-07 16:31       ` Paolo Bonzini
  0 siblings, 1 reply; 106+ messages in thread
From: Tianyu Lan @ 2019-01-07  3:42 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Lan Tianyu, Paolo Bonzini, Radim Krcmar, Thomas Gleixner,
	Ingo Molnar, bp, H. Peter Anvin, the arch/x86 maintainers, kvm,
	linux-kernel@vger kernel org, michael.h.kelley, kys, vkuznets,
	linux

Hi Sean:
             Thanks for your review.

On Sat, Jan 5, 2019 at 12:12 AM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> On Fri, Jan 04, 2019 at 04:54:05PM +0800, lantianyu1986@gmail.com wrote:
> > From: Lan Tianyu <Tianyu.Lan@microsoft.com>
> >
> > This patch is to flush tlb in the kvm_age_rmapp() when tlb range flush
> > is available and flush request is true.
> >
> > Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
> > ---
> >  arch/x86/kvm/mmu.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index a5728f51bf7d..bc402a72956a 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -1958,10 +1958,17 @@ static int kvm_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
> >       u64 *sptep;
> >       struct rmap_iterator uninitialized_var(iter);
> >       int young = 0;
> > +     bool flush = (bool)data;
> >
> >       for_each_rmap_spte(rmap_head, &iter, sptep)
> >               young |= mmu_spte_age(sptep);
> >
> > +     if (young && flush) {
> > +             kvm_flush_remote_tlbs_with_address(kvm, gfn,
> > +                             KVM_PAGES_PER_HPAGE(level));
> > +             young = 0;
> > +     }
> > +
>
> young shouldn't be cleared, the tracing will be wrong and the caller
> might actually care about the return value.

Yes, this is wrong; I will update it.
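
Roughly like this (just a sketch of the fix I have in mind: keep the range
flush but leave "young" untouched):

        if (young && flush)
                kvm_flush_remote_tlbs_with_address(kvm, gfn,
                                KVM_PAGES_PER_HPAGE(level));

        /* "young" is preserved for tracing and for the caller. */
        trace_kvm_age_page(gfn, level, slot, young);
        return young;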

> I'm assuming you're
> clearing young to avoid the flush in kvm_mmu_notifier_clear_flush_young(),
> but keeping that flush is silly since it will never be invoked.  Just
> squash this patch with patch 10/11 so that you can remove the unnecessary
> flush in kvm_mmu_notifier_clear_flush_young() and preserve young.
>

The platform may provide a TLB flush with an address range as its
granularity, and my changes use that range flush when it is available.
kvm_mmu_notifier_clear_flush_young() is a common function for all platforms,
and most platforms still need the flush there. I think it is better to
separate the flush request from "young" in the return value of
kvm_age_hva(). The flush parameter I added in patch 10 can be changed to a
pointer so that kvm_age_hva() can use it to return the flush request.
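
Something like the following is what I mean (rough sketch only; the exact
form may differ in the next version):

        int kvm_age_hva(struct kvm *kvm, unsigned long start,
                        unsigned long end, bool *flush);

        /* e.g. in kvm_mmu_notifier_clear_flush_young() */
        bool flush = true;

        young = kvm_age_hva(kvm, start, end, &flush);

        /* arch code clears *flush once it has already done a range flush */
        if (young && flush)
                kvm_flush_remote_tlbs(kvm);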

-- 
Best regards
Tianyu Lan

^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 6/11] KVM/MMU: Flush tlb with range list in sync_page()
@ 2019-01-07  5:13       ` Tianyu Lan
  0 siblings, 0 replies; 106+ messages in thread
From: Tianyu Lan @ 2019-01-07  5:13 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Radim Krcmar, catalin.marinas, will.deacon,
	christoffer.dall, paulus, H. Peter Anvin, kys, kvmarm, mpe,
	the arch/x86 maintainers, linux, michael.h.kelley, Ingo Molnar,
	benh, jhogan, linux-mips, Lan Tianyu, marc.zyngier, kvm-ppc, bp,
	Thomas Gleixner, linux-arm-kernel, linux-kernel@vger kernel org,
	ralf, paul.burton, Paolo Bonzini, vkuznets, linuxppc-dev

On Sat, Jan 5, 2019 at 12:30 AM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> On Fri, Jan 04, 2019 at 04:54:00PM +0800, lantianyu1986@gmail.com wrote:
> > From: Lan Tianyu <Tianyu.Lan@microsoft.com>
> >
> > This patch is to flush tlb via flush list function.
>
> More explanation of why this is beneficial would be nice.  Without the
> context of the overall series it's not immediately obvious what
> kvm_flush_remote_tlbs_with_list() does without a bit of digging.
>
> >
> > Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
> > ---
> >  arch/x86/kvm/paging_tmpl.h | 16 ++++++++++++++--
> >  1 file changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> > index 833e8855bbc9..866ccdea762e 100644
> > --- a/arch/x86/kvm/paging_tmpl.h
> > +++ b/arch/x86/kvm/paging_tmpl.h
> > @@ -973,6 +973,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> >       bool host_writable;
> >       gpa_t first_pte_gpa;
> >       int set_spte_ret = 0;
> > +     LIST_HEAD(flush_list);
> >
> >       /* direct kvm_mmu_page can not be unsync. */
> >       BUG_ON(sp->role.direct);
> > @@ -980,6 +981,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> >       first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
> >
> >       for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
> > +             int tmp_spte_ret = 0;
> >               unsigned pte_access;
> >               pt_element_t gpte;
> >               gpa_t pte_gpa;
> > @@ -1029,14 +1031,24 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> >
> >               host_writable = sp->spt[i] & SPTE_HOST_WRITEABLE;
> >
> > -             set_spte_ret |= set_spte(vcpu, &sp->spt[i],
> > +             tmp_spte_ret = set_spte(vcpu, &sp->spt[i],
> >                                        pte_access, PT_PAGE_TABLE_LEVEL,
> >                                        gfn, spte_to_pfn(sp->spt[i]),
> >                                        true, false, host_writable);
> > +
> > +             if (kvm_available_flush_tlb_with_range()
> > +                 && (tmp_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)) {
> > +                     struct kvm_mmu_page *leaf_sp = page_header(sp->spt[i]
> > +                                     & PT64_BASE_ADDR_MASK);
> > +                     list_add(&leaf_sp->flush_link, &flush_list);
> > +             }
> > +
> > +             set_spte_ret |= tmp_spte_ret;
> > +
> >       }
> >
> >       if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)
> > -             kvm_flush_remote_tlbs(vcpu->kvm);
> > +             kvm_flush_remote_tlbs_with_list(vcpu->kvm, &flush_list);
>
> This is a bit confusing and potentially fragile.  It's not obvious that
> kvm_flush_remote_tlbs_with_list() is guaranteed to call
> kvm_flush_remote_tlbs() when kvm_available_flush_tlb_with_range() is
> false, and you're relying on the kvm_flush_remote_tlbs_with_list() call
> chain to never optimize away the empty list case.  Rechecking
> kvm_available_flush_tlb_with_range() isn't expensive.

That makes sense. Will update. Thanks.

>
> >
> >       return nr_present;
> >  }
> > --
> > 2.14.4
> >



-- 
Best regards
Tianyu Lan

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 6/11] KVM/MMU: Flush tlb with range list in sync_page()
@ 2019-01-07  5:13       ` Tianyu Lan
  0 siblings, 0 replies; 106+ messages in thread
From: Tianyu Lan @ 2019-01-07  5:13 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, Paolo Bonzini, Radim Krcmar, Thomas Gleixner,
	Ingo Molnar, bp, H. Peter Anvin, the arch/x86 maintainers,
	linux-arm-kernel, kvmarm, linux-kernel@vger kernel org,
	linux-mips, kvm-ppc, linuxppc-dev, kvm, michael.h.kelley, kys,
	vkuznets

On Sat, Jan 5, 2019 at 12:30 AM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> On Fri, Jan 04, 2019 at 04:54:00PM +0800, lantianyu1986@gmail.com wrote:
> > From: Lan Tianyu <Tianyu.Lan@microsoft.com>
> >
> > This patch is to flush tlb via flush list function.
>
> More explanation of why this is beneficial would be nice.  Without the
> context of the overall series it's not immediately obvious what
> kvm_flush_remote_tlbs_with_list() does without a bit of digging.
>
> >
> > Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
> > ---
> >  arch/x86/kvm/paging_tmpl.h | 16 ++++++++++++++--
> >  1 file changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> > index 833e8855bbc9..866ccdea762e 100644
> > --- a/arch/x86/kvm/paging_tmpl.h
> > +++ b/arch/x86/kvm/paging_tmpl.h
> > @@ -973,6 +973,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> >       bool host_writable;
> >       gpa_t first_pte_gpa;
> >       int set_spte_ret = 0;
> > +     LIST_HEAD(flush_list);
> >
> >       /* direct kvm_mmu_page can not be unsync. */
> >       BUG_ON(sp->role.direct);
> > @@ -980,6 +981,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> >       first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
> >
> >       for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
> > +             int tmp_spte_ret = 0;
> >               unsigned pte_access;
> >               pt_element_t gpte;
> >               gpa_t pte_gpa;
> > @@ -1029,14 +1031,24 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> >
> >               host_writable = sp->spt[i] & SPTE_HOST_WRITEABLE;
> >
> > -             set_spte_ret |= set_spte(vcpu, &sp->spt[i],
> > +             tmp_spte_ret = set_spte(vcpu, &sp->spt[i],
> >                                        pte_access, PT_PAGE_TABLE_LEVEL,
> >                                        gfn, spte_to_pfn(sp->spt[i]),
> >                                        true, false, host_writable);
> > +
> > +             if (kvm_available_flush_tlb_with_range()
> > +                 && (tmp_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)) {
> > +                     struct kvm_mmu_page *leaf_sp = page_header(sp->spt[i]
> > +                                     & PT64_BASE_ADDR_MASK);
> > +                     list_add(&leaf_sp->flush_link, &flush_list);
> > +             }
> > +
> > +             set_spte_ret |= tmp_spte_ret;
> > +
> >       }
> >
> >       if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)
> > -             kvm_flush_remote_tlbs(vcpu->kvm);
> > +             kvm_flush_remote_tlbs_with_list(vcpu->kvm, &flush_list);
>
> This is a bit confusing and potentially fragile.  It's not obvious that
> kvm_flush_remote_tlbs_with_list() is guaranteed to call
> kvm_flush_remote_tlbs() when kvm_available_flush_tlb_with_range() is
> false, and you're relying on the kvm_flush_remote_tlbs_with_list() call
> chain to never optimize away the empty list case.  Rechecking
> kvm_available_flush_tlb_with_range() isn't expensive.

That makes sense. Will update. Thanks.

>
> >
> >       return nr_present;
> >  }
> > --
> > 2.14.4
> >



-- 
Best regards
Tianyu Lan

^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 6/11] KVM/MMU: Flush tlb with range list in sync_page()
  2019-01-04 16:30     ` Sean Christopherson
                         ` (2 preceding siblings ...)
  (?)
@ 2019-01-07 16:07       ` Paolo Bonzini
  -1 siblings, 0 replies; 106+ messages in thread
From: Paolo Bonzini @ 2019-01-07 16:07 UTC (permalink / raw)
  To: Sean Christopherson, lantianyu1986
  Cc: kvm, rkrcmar, catalin.marinas, will.deacon, linux-kernel, paulus,
	hpa, kys, kvmarm, mpe, x86, linux, michael.h.kelley, mingo, benh,
	jhogan, linux-mips, Lan Tianyu, marc.zyngier, kvm-ppc, bp, tglx,
	linux-arm-kernel, christoffer.dall, ralf, paul.burton, vkuznets,
	linuxppc-dev

On 04/01/19 17:30, Sean Christopherson wrote:
>> +
>> +		if (kvm_available_flush_tlb_with_range()
>> +		    && (tmp_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)) {
>> +			struct kvm_mmu_page *leaf_sp = page_header(sp->spt[i]
>> +					& PT64_BASE_ADDR_MASK);
>> +			list_add(&leaf_sp->flush_link, &flush_list);
>> +		}
>> +
>> +		set_spte_ret |= tmp_spte_ret;
>> +
>>  	}
>>  
>>  	if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)
>> -		kvm_flush_remote_tlbs(vcpu->kvm);
>> +		kvm_flush_remote_tlbs_with_list(vcpu->kvm, &flush_list);
> This is a bit confusing and potentially fragile.  It's not obvious that
> kvm_flush_remote_tlbs_with_list() is guaranteed to call
> kvm_flush_remote_tlbs() when kvm_available_flush_tlb_with_range() is
> false, and you're relying on the kvm_flush_remote_tlbs_with_list() call
> chain to never optimize away the empty list case.  Rechecking
> kvm_available_flush_tlb_with_range() isn't expensive.
> 

Alternatively, do not check it during the loop: always build the
flush_list, and always call kvm_flush_remote_tlbs_with_list.  The
function can then check whether the list is empty, and the OR of
tmp_spte_ret on every iteration goes away.
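
Roughly, in code, the suggestion looks like this (a sketch of the alternative,
not code from the series; it assumes kvm_flush_remote_tlbs_with_list() skips
an empty list and falls back to kvm_flush_remote_tlbs() when range flush is
unavailable):

	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
		int ret;

		/* ... gpte checks, pte_access and gfn as in the current loop ... */

		ret = set_spte(vcpu, &sp->spt[i], pte_access,
			       PT_PAGE_TABLE_LEVEL, gfn,
			       spte_to_pfn(sp->spt[i]), true, false,
			       host_writable);

		/* collect unconditionally, no availability check in the loop */
		if (ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH) {
			struct kvm_mmu_page *leaf_sp =
				page_header(sp->spt[i] & PT64_BASE_ADDR_MASK);

			list_add(&leaf_sp->flush_link, &flush_list);
		}
	}

	/*
	 * Always called: a no-op for an empty list, and (assumed) a full
	 * kvm_flush_remote_tlbs() when range flush is unavailable.
	 */
	kvm_flush_remote_tlbs_with_list(vcpu->kvm, &flush_list);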

Paolo

^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 7/11] KVM: Remove redundant check in the kvm_get_dirty_log_protect()
  2019-01-04 15:50     ` Sean Christopherson
                         ` (2 preceding siblings ...)
  (?)
@ 2019-01-07 16:20       ` Paolo Bonzini
  -1 siblings, 0 replies; 106+ messages in thread
From: Paolo Bonzini @ 2019-01-07 16:20 UTC (permalink / raw)
  To: Sean Christopherson, lantianyu1986
  Cc: kvm, rkrcmar, catalin.marinas, will.deacon, linux-kernel, paulus,
	hpa, kys, kvmarm, mpe, x86, linux, michael.h.kelley, mingo, benh,
	jhogan, linux-mips, Lan Tianyu, marc.zyngier, kvm-ppc, bp, tglx,
	linux-arm-kernel, christoffer.dall, ralf, paul.burton, vkuznets,
	linuxppc-dev

On 04/01/19 16:50, Sean Christopherson wrote:
> Tangentially related, does mmu_lock actually need to be held while we
> walk dirty_bitmap in kvm_{clear,get}_dirty_log_protect()?  The bitmap
> itself is protected by slots_lock (a lockdep assertion would be nice
> too), e.g. can we grab the lock iff dirty_bitmap[i] != 0?

Yes, we could avoid grabbing it as long as the bitmap is zero.  However,
without kvm->manual_dirty_log_protect, the granularity of
kvm_get_dirty_log_protect() is too coarse so it won't happen in
practice.  Instead, with the new manual clear,
kvm_get_dirty_log_protect() does not take the lock and a well-written
userspace is not going to call the clear ioctl unless some bits are set.
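
For illustration, the "grab the lock iff dirty_bitmap[i] != 0" idea would look
roughly like this in the clear path (a sketch only; the real
kvm_clear_dirty_log_protect() loop and its bookkeeping differ in detail):

	for (offset = 0; offset < n / sizeof(long); offset++) {
		unsigned long mask = dirty_bitmap[offset];

		if (!mask)		/* clean word: skip mmu_lock entirely */
			continue;

		spin_lock(&kvm->mmu_lock);
		kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
							offset * BITS_PER_LONG,
							mask);
		spin_unlock(&kvm->mmu_lock);
	}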

Paolo

^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 9/11] KVM/MMU: Flush tlb in the kvm_mmu_write_protect_pt_masked()
  2019-01-04  8:54   ` lantianyu1986
  (?)
  (?)
@ 2019-01-07 16:26     ` Paolo Bonzini
  -1 siblings, 0 replies; 106+ messages in thread
From: Paolo Bonzini @ 2019-01-07 16:26 UTC (permalink / raw)
  To: lantianyu1986
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, rkrcmar, tglx, mingo, bp, hpa, x86, linux-arm-kernel,
	kvmarm, linux-kernel, linux-mips, kvm-ppc, linuxppc-dev, kvm,
	michael.h.kelley, kys, vkuznets

On 04/01/19 09:54, lantianyu1986@gmail.com wrote:
>  		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
>  					  PT_PAGE_TABLE_LEVEL, slot);
> -		__rmap_write_protect(kvm, rmap_head, false);
> +		flush |= __rmap_write_protect(kvm, rmap_head, false);
>  
>  		/* clear the first set bit */
>  		mask &= mask - 1;
>  	}
> +
> +	if (flush && kvm_available_flush_tlb_with_range()) {
> +		kvm_flush_remote_tlbs_with_address(kvm,
> +				slot->base_gfn + gfn_offset,
> +				hweight_long(mask));

Mask is zero here, so this probably won't work.
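
(For illustration, the gfn and page count would have to be captured before the
while loop consumes mask, e.g. something along these lines, reusing the names
from the patch:)

	gfn_t start = slot->base_gfn + gfn_offset + __ffs(mask);
	gfn_t end = slot->base_gfn + gfn_offset + __fls(mask);

	while (mask) {
		/* existing per-bit __rmap_write_protect() loop that clears mask */
	}

	if (flush && kvm_available_flush_tlb_with_range())
		kvm_flush_remote_tlbs_with_address(kvm, start, end - start + 1);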

In addition, I suspect calling the hypercall once for every 64 pages is
not very efficient.  Passing a flush list into
kvm_mmu_write_protect_pt_masked, and flushing in
kvm_arch_mmu_enable_log_dirty_pt_masked, isn't efficient either because
kvm_arch_mmu_enable_log_dirty_pt_masked is also called once per word.

I don't have any good ideas, except for moving the whole
kvm_clear_dirty_log_protect loop into architecture-specific code (which
is not the direction we want---architectures should share more code, not
less).

Paolo

> +		flush = false;
> +	}
> +


^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 11/11] KVM/MMU: Flush tlb in the kvm_age_rmapp()
  2019-01-07  3:42     ` Tianyu Lan
@ 2019-01-07 16:31       ` Paolo Bonzini
  2019-01-08  3:42           ` Tianyu Lan
  0 siblings, 1 reply; 106+ messages in thread
From: Paolo Bonzini @ 2019-01-07 16:31 UTC (permalink / raw)
  To: Tianyu Lan, Sean Christopherson
  Cc: Lan Tianyu, Radim Krcmar, Thomas Gleixner, Ingo Molnar, bp,
	H. Peter Anvin, the arch/x86 maintainers, kvm,
	linux-kernel@vger.kernel.org, michael.h.kelley, kys, vkuznets,
	linux

On 07/01/19 04:42, Tianyu Lan wrote:
>> I'm assuming you're
>> clearing young to avoid the flush in kvm_mmu_notifier_clear_flush_young(),
>> but keeping that flush is silly since it will never be invoked.  Just
>> squash this patch with patch 10/11 so that you can remove the unnecessary
>> flush in kvm_mmu_notifier_clear_flush_young() and preserve young.
>>
> The platform may provide tlb flush with address range as granularity. My changes
> are to use range flush when it's available. kvm_mmu_notifier_clear_flush_young()
> is common function for all platforms and most platforms still need the
> flush in the
> kvm_mmu_notifier_clear_flush_young(). I think it's better to separate
> flush request and
> "young" from return value of kvm_age_hva(). New flush parameter I
> added in the patch 10
> can be changed to a pointer and kvm_age_hva() can use it to return
> flush request.

There are two possibilities:

- pass a "bool *flush".  If NULL, kvm_age_hva should not flush.  If not
NULL, kvm_age_hva should receive a true *flush, and should change it to
false if kvm_age_hva takes care of the flush

- pass a "bool flush".  In patch 10, change all kvm_age_hva
implementation to do the flush if they return 1.

I think I prefer the latter, in this case the small code duplication is
offset by a simpler API.
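
A sketch of the second option, x86 flavour, with the "flush" parameter from
patch 10 (other architectures would do the same in their own kvm_age_hva()):

	int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end,
			bool flush)
	{
		int young = kvm_handle_hva_range(kvm, start, end, 0, kvm_age_rmapp);

		/* each implementation flushes for itself when asked to */
		if (young && flush)
			kvm_flush_remote_tlbs(kvm);

		return young;
	}

kvm_mmu_notifier_clear_flush_young() then passes flush=true and drops its own
kvm_flush_remote_tlbs() call.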

Paolo

^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 3/11] KVM: Add spte's point in the struct kvm_mmu_page
  2019-01-04  8:53   ` lantianyu1986
                       ` (2 preceding siblings ...)
  (?)
@ 2019-01-07 16:34     ` Paolo Bonzini
  -1 siblings, 0 replies; 106+ messages in thread
From: Paolo Bonzini @ 2019-01-07 16:34 UTC (permalink / raw)
  To: lantianyu1986
  Cc: kvm, catalin.marinas, will.deacon, paulus, hpa, kys, kvmarm, mpe,
	x86, linux, michael.h.kelley, mingo, benh, jhogan, linux-mips,
	Lan Tianyu, marc.zyngier, kvm-ppc, bp, tglx, linux-arm-kernel,
	linux-kernel, ralf, paul.burton, vkuznets, linuxppc-dev

On 04/01/19 09:53, lantianyu1986@gmail.com wrote:
> @@ -332,6 +332,7 @@ struct kvm_mmu_page {
>  	int root_count;          /* Currently serving as active root */
>  	unsigned int unsync_children;
>  	struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
> +	u64 *sptep;

Is this really needed?  Can we put the "last" flag in the struct instead
as a bool?  In fact, if you do

	u16 unsync_children;
	bool unsync;
	bool last_level;

the struct does not grow at all. :)

(I'm not sure where "large" is tested using the sptep field, even though
it is in the commit message).
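
In context, the suggestion amounts to roughly this layout (sketch):

	struct kvm_mmu_page {
		...
		u16 unsync_children;		/* was unsigned int */
		bool unsync;
		bool last_level;		/* instead of caching u64 *sptep */
		struct kvm_rmap_head parent_ptes;
		...
	};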

Paolo

>  	/* The page is obsolete if mmu_valid_gen != kvm->arch.mmu_valid_gen.  */
>  	unsigned long mmu_valid_gen;


^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 4/11] KVM/MMU: Introduce tlb flush with range list
  2019-01-04  8:53   ` lantianyu1986
  (?)
  (?)
@ 2019-01-07 16:39     ` Paolo Bonzini
  -1 siblings, 0 replies; 106+ messages in thread
From: Paolo Bonzini @ 2019-01-07 16:39 UTC (permalink / raw)
  To: lantianyu1986
  Cc: kvm, catalin.marinas, will.deacon, paulus, hpa, kys, kvmarm, mpe,
	x86, linux, michael.h.kelley, mingo, benh, jhogan, linux-mips,
	Lan Tianyu, marc.zyngier, kvm-ppc, bp, tglx, linux-arm-kernel,
	linux-kernel, ralf, paul.burton, vkuznets, linuxppc-dev

On 04/01/19 09:53, lantianyu1986@gmail.com wrote:
>  struct kvm_mmu_page {
>  	struct list_head link;
> +
> +	/*
> +	 * Tlb flush with range list uses struct kvm_mmu_page as list entry
> +	 * and all list operations should be under protection of mmu_lock.
> +	 */
> +	struct list_head flush_link;
>  	struct hlist_node hash_link;
>  	bool unsync;
>  
> @@ -443,6 +449,7 @@ struct kvm_mmu {

Again, it would be nice not to grow the struct too much, though I
understand that it's already relatively big (168 bytes).

Can you at least make this an hlist, so that it only takes a single word?
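
That is, roughly (sketch; the walkers would switch to hlist_for_each_entry()):

	struct kvm_mmu_page {
		struct list_head link;
		/* flush-list linkage, only manipulated under mmu_lock */
		struct hlist_node flush_link;
		struct hlist_node hash_link;
		bool unsync;
		...
	};

	/* and on the caller side */
	HLIST_HEAD(flush_list);

	hlist_add_head(&leaf_sp->flush_link, &flush_list);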

Paolo

^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 11/11] KVM/MMU: Flush tlb in the kvm_age_rmapp()
  2019-01-07 16:31       ` Paolo Bonzini
@ 2019-01-08  3:42           ` Tianyu Lan
  0 siblings, 0 replies; 106+ messages in thread
From: Tianyu Lan @ 2019-01-08  3:42 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Lan Tianyu, Radim Krcmar, Thomas Gleixner,
	Ingo Molnar, bp, H. Peter Anvin, the arch/x86 maintainers, kvm,
	linux-kernel@vger.kernel.org, michael.h.kelley, kys, vkuznets,
	linux

Hi Paolo:
               Thanks for your review.

On Tue, Jan 8, 2019 at 12:31 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 07/01/19 04:42, Tianyu Lan wrote:
> >> I'm assuming you're
> >> clearing young to avoid the flush in kvm_mmu_notifier_clear_flush_young(),
> >> but keeping that flush is silly since it will never be invoked.  Just
> >> squash this patch with patch 10/11 so that you can remove the unnecessary
> >> flush in kvm_mmu_notifier_clear_flush_young() and preserve young.
> >>
> > The platform may provide tlb flush with address range as granularity. My changes
> > are to use range flush when it's available. kvm_mmu_notifier_clear_flush_young()
> > is common function for all platforms and most platforms still need the
> > flush in the
> > kvm_mmu_notifier_clear_flush_young(). I think it's better to separate
> > flush request and
> > "young" from return value of kvm_age_hva(). New flush parameter I
> > added in the patch 10
> > can be changed to a pointer and kvm_age_hva() can use it to return
> > flush request.
>
> There are two possibilities:
>
> - pass a "bool *flush".  If NULL, kvm_age_hva should not flush.  If not
> NULL, kvm_age_hva should receive a true *flush, and should change it to
> false if kvm_age_hva takes care of the flush
>
> - pass a "bool flush".  In patch 10, change all kvm_age_hva
> implementation to do the flush if they return 1.
>
> I think I prefer the latter, in this case the small code duplication is
> offset by a simpler API.
>

From my understanding, this means moving the flush from
kvm_mmu_notifier_clear_flush_young() into kvm_age_hva(), and doing the flush
in kvm_age_hva() when young is > 0 and the "flush" parameter is true, right?
-- 
Best regards
Tianyu Lan

^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 11/11] KVM/MMU: Flush tlb in the kvm_age_rmapp()
  2019-01-08  3:42           ` Tianyu Lan
  (?)
@ 2019-01-08 11:52           ` Paolo Bonzini
  -1 siblings, 0 replies; 106+ messages in thread
From: Paolo Bonzini @ 2019-01-08 11:52 UTC (permalink / raw)
  To: Tianyu Lan
  Cc: Sean Christopherson, Lan Tianyu, Radim Krcmar, Thomas Gleixner,
	Ingo Molnar, bp, H. Peter Anvin, the arch/x86 maintainers, kvm,
	linux-kernel@vger.kernel.org, michael.h.kelley, kys, vkuznets,
	linux

On 08/01/19 04:42, Tianyu Lan wrote:
> From my understanding, this means to move the flush in the
> kvm_mmu_notifier_clear_flush_young()
> to kvm_age_hva() and do flush in kvm_age_hva() when young is >0 and "flush"
> parameter is true, right?

Yes.

Paolo

^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 9/11] KVM/MMU: Flush tlb in the kvm_mmu_write_protect_pt_masked()
  2019-01-07 16:26     ` Paolo Bonzini
                         ` (2 preceding siblings ...)
  (?)
@ 2019-01-10  9:06       ` Tianyu Lan
  -1 siblings, 0 replies; 106+ messages in thread
From: Tianyu Lan @ 2019-01-10  9:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, Radim Krcmar, Thomas Gleixner, Ingo Molnar, bp,
	H. Peter Anvin, the arch/x86 maintainers, linux-arm-kernel,
	kvmarm, linux-kernel@vger.kernel.org, linux-mips, kvm-ppc,
	linuxppc-dev, kvm, michael.h.kelley, kys, vkuznets

On Tue, Jan 8, 2019 at 12:26 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 04/01/19 09:54, lantianyu1986@gmail.com wrote:
> >               rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
> >                                         PT_PAGE_TABLE_LEVEL, slot);
> > -             __rmap_write_protect(kvm, rmap_head, false);
> > +             flush |= __rmap_write_protect(kvm, rmap_head, false);
> >
> >               /* clear the first set bit */
> >               mask &= mask - 1;
> >       }
> > +
> > +     if (flush && kvm_available_flush_tlb_with_range()) {
> > +             kvm_flush_remote_tlbs_with_address(kvm,
> > +                             slot->base_gfn + gfn_offset,
> > +                             hweight_long(mask));
>
> Mask is zero here, so this probably won't work.
>
> In addition, I suspect calling the hypercall once for every 64 pages is
> not very efficient.  Passing a flush list into
> kvm_mmu_write_protect_pt_masked, and flushing in
> kvm_arch_mmu_enable_log_dirty_pt_masked, isn't efficient either because
> kvm_arch_mmu_enable_log_dirty_pt_masked is also called once per word.
>
Yes, this is not efficient.

> I don't have any good ideas, except for moving the whole
> kvm_clear_dirty_log_protect loop into architecture-specific code (which
> is not the direction we want---architectures should share more code, not
> less).

kvm_vm_ioctl_clear_dirty_log/get_dirty_log() gets/clears the dirty log with
the memslot as the unit. We may just flush the tlbs of the affected memslot,
instead of the whole address space, when range flush is available.
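
A sketch of that, reusing the range-flush helpers from this series (where the
flush actually ends up living, arch code vs. common code, is still open):

	/* end of the dirty-log ioctl path, sketch */
	if (flush) {
		if (kvm_available_flush_tlb_with_range())
			kvm_flush_remote_tlbs_with_address(kvm,
							   memslot->base_gfn,
							   memslot->npages);
		else
			kvm_flush_remote_tlbs(kvm);
	}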

>
> Paolo
>
> > +             flush = false;
> > +     }
> > +
>


--
Best regards
Tianyu Lan

^ permalink raw reply	[flat|nested] 106+ messages in thread

* Re: [PATCH 9/11] KVM/MMU: Flush tlb in the kvm_mmu_write_protect_pt_masked()
@ 2019-01-10  9:06       ` Tianyu Lan
  0 siblings, 0 replies; 106+ messages in thread
From: Tianyu Lan @ 2019-01-10  9:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Lan Tianyu, christoffer.dall, marc.zyngier, linux,
	catalin.marinas, will.deacon, jhogan, ralf, paul.burton, paulus,
	benh, mpe, Radim Krcmar, Thomas Gleixner, Ingo Molnar, bp,
	H. Peter Anvin, the arch/x86 maintainers, linux-arm-kernel,
	kvmarm, linux-kernel@vger kernel org, linux-mips, kvm-ppc,
	linuxppc-dev, kvm, michael.h.kelley, kys, vk

On Tue, Jan 8, 2019 at 12:26 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 04/01/19 09:54, lantianyu1986@gmail.com wrote:
> >               rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
> >                                         PT_PAGE_TABLE_LEVEL, slot);
> > -             __rmap_write_protect(kvm, rmap_head, false);
> > +             flush |= __rmap_write_protect(kvm, rmap_head, false);
> >
> >               /* clear the first set bit */
> >               mask &= mask - 1;
> >       }
> > +
> > +     if (flush && kvm_available_flush_tlb_with_range()) {
> > +             kvm_flush_remote_tlbs_with_address(kvm,
> > +                             slot->base_gfn + gfn_offset,
> > +                             hweight_long(mask));
>
> Mask is zero here, so this probably won't work.
>
> In addition, I suspect calling the hypercall once for every 64 pages is
> not very efficient.  Passing a flush list into
> kvm_mmu_write_protect_pt_masked, and flushing in
> kvm_arch_mmu_enable_log_dirty_pt_masked, isn't efficient either because
> kvm_arch_mmu_enable_log_dirty_pt_masked is also called once per word.
>
Yes, this is not efficient.

> I don't have any good ideas, except for moving the whole
> kvm_clear_dirty_log_protect loop into architecture-specific code (which
> is not the direction we want---architectures should share more code, not
> less).

kvm_vm_ioctl_clear_dirty_log/get_dirty_log()  is to get/clear dirty log with
memslot as unit. We may just flush tlbs of the affected memslot instead of
entire page table's when range flush is available.

>
> Paolo
>
> > +             flush = false;
> > +     }
> > +
>


--
Best regards
Tianyu Lan

^ permalink raw reply	[flat|nested] 106+ messages in thread

end of thread, other threads:[~2019-01-10  9:08 UTC | newest]

Thread overview: 106+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-04  8:53 [PATCH 00/11] X86/KVM/Hyper-V: Add HV ept tlb range list flush support in KVM lantianyu1986
2019-01-04  8:53 ` [PATCH 1/11] X86/Hyper-V: Add parameter offset for hyperv_fill_flush_guest_mapping_list() lantianyu1986
2019-01-04  8:53 ` [PATCH 2/11] KVM/VMX: Fill range list in kvm_fill_hv_flush_list_func() lantianyu1986
2019-01-04  8:53 ` [PATCH 3/11] KVM: Add spte's point in the struct kvm_mmu_page lantianyu1986
2019-01-07 16:34   ` Paolo Bonzini
2019-01-04  8:53 ` [PATCH 4/11] KVM/MMU: Introduce tlb flush with range list lantianyu1986
2019-01-07 16:39   ` Paolo Bonzini
2019-01-04  8:53 ` [PATCH 5/11] KVM/MMU: Flush tlb directly in the kvm_mmu_slot_gfn_write_protect() lantianyu1986
2019-01-04  8:54 ` [PATCH 6/11] KVM/MMU: Flush tlb with range list in sync_page() lantianyu1986
2019-01-04 16:30   ` Sean Christopherson
2019-01-07  5:13     ` Tianyu Lan
2019-01-07 16:07     ` Paolo Bonzini
2019-01-04  8:54 ` [PATCH 7/11] KVM: Remove redundant check in the kvm_get_dirty_log_protect() lantianyu1986
2019-01-04 15:50   ` Sean Christopherson
2019-01-04 21:27     ` Sean Christopherson
2019-01-07 16:20     ` Paolo Bonzini
2019-01-04  8:54 ` [PATCH 8/11] KVM: Make kvm_arch_mmu_enable_log_dirty_pt_masked() return value lantianyu1986
2019-01-04  8:54 ` [PATCH 9/11] KVM/MMU: Flush tlb in the kvm_mmu_write_protect_pt_masked() lantianyu1986
2019-01-07 16:26   ` Paolo Bonzini
2019-01-10  9:06     ` Tianyu Lan
2019-01-04  8:54 ` [PATCH 10/11] KVM: Add flush parameter for kvm_age_hva() lantianyu1986
2019-01-04  8:54 ` [PATCH 11/11] KVM/MMU: Flush tlb in the kvm_age_rmapp() lantianyu1986
     [not found]   ` <20190104161235.GB11288@linux.intel.com>
2019-01-07  3:42     ` Tianyu Lan
2019-01-07 16:31       ` Paolo Bonzini
2019-01-08  3:42         ` Tianyu Lan
2019-01-08 11:52           ` Paolo Bonzini
