linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/4] KVM: arm64: Minor page fault handler improvement
@ 2021-03-15  4:18 Gavin Shan
  2021-03-15  4:18 ` [PATCH 1/4] KVM: arm64: Hide kvm_mmu_wp_memory_region() Gavin Shan
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Gavin Shan @ 2021-03-15  4:18 UTC (permalink / raw)
  To: kvmarm; +Cc: linux-kernel, maz, will, alexandru.elisei, shan.gavin

The series includes several minor improvements to the stage-2 page fault
handler: PATCH[1-2] clean up the code. PATCH[3] fixes the address range
check used when adding a new memory slot. PATCH[4] avoids retrieving the
memory slot again in the page fault handler to save a few CPU cycles.

Gavin Shan (4):
  KVM: arm64: Hide kvm_mmu_wp_memory_region()
  KVM: arm64: Use find_vma_intersection()
  KVM: arm64: Fix address check for memory slot
  KVM: arm64: Don't retrieve memory slot again in page fault handler

 arch/arm64/include/asm/kvm_host.h |  1 -
 arch/arm64/kvm/mmu.c              | 19 +++++++++++--------
 2 files changed, 11 insertions(+), 9 deletions(-)

-- 
2.23.0


* [PATCH 1/4] KVM: arm64: Hide kvm_mmu_wp_memory_region()
  2021-03-15  4:18 [PATCH 0/4] KVM: arm64: Minor page fault handler improvement Gavin Shan
@ 2021-03-15  4:18 ` Gavin Shan
  2021-03-15  7:49   ` Keqian Zhu
  2021-03-15  4:18 ` [PATCH 2/4] KVM: arm64: Use find_vma_intersection() Gavin Shan
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 17+ messages in thread
From: Gavin Shan @ 2021-03-15  4:18 UTC (permalink / raw)
  To: kvmarm; +Cc: linux-kernel, maz, will, alexandru.elisei, shan.gavin

We needn't expose the function as it's only used by mmu.c.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/include/asm/kvm_host.h | 1 -
 arch/arm64/kvm/mmu.c              | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3d10e6527f7d..688f2df1957b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -632,7 +632,6 @@ void kvm_arm_resume_guest(struct kvm *kvm);
 	})
 
 void force_vm_exit(const cpumask_t *mask);
-void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
 int handle_exit(struct kvm_vcpu *vcpu, int exception_index);
 void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 77cb2d28f2a4..84e70f953de6 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -555,7 +555,7 @@ static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_
  * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
  * serializing operations for VM memory regions.
  */
-void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
+static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	struct kvm_memory_slot *memslot = id_to_memslot(slots, slot);
-- 
2.23.0


* [PATCH 2/4] KVM: arm64: Use find_vma_intersection()
  2021-03-15  4:18 [PATCH 0/4] KVM: arm64: Minor page fault handler improvement Gavin Shan
  2021-03-15  4:18 ` [PATCH 1/4] KVM: arm64: Hide kvm_mmu_wp_memory_region() Gavin Shan
@ 2021-03-15  4:18 ` Gavin Shan
  2021-03-15  8:04   ` Keqian Zhu
  2021-03-15  8:52   ` Marc Zyngier
  2021-03-15  4:18 ` [PATCH 3/4] KVM: arm64: Fix address check for memory slot Gavin Shan
  2021-03-15  4:18 ` [PATCH 4/4] KVM: arm64: Don't retrieve memory slot again in page fault handler Gavin Shan
  3 siblings, 2 replies; 17+ messages in thread
From: Gavin Shan @ 2021-03-15  4:18 UTC (permalink / raw)
  To: kvmarm; +Cc: linux-kernel, maz, will, alexandru.elisei, shan.gavin

find_vma_intersection() already exists to search for an intersecting
vma. Use the function where applicable to simplify the code.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/kvm/mmu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 84e70f953de6..286b603ed0d3 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -421,10 +421,11 @@ static void stage2_unmap_memslot(struct kvm *kvm,
 	 *     +--------------------------------------------+
 	 */
 	do {
-		struct vm_area_struct *vma = find_vma(current->mm, hva);
+		struct vm_area_struct *vma;
 		hva_t vm_start, vm_end;
 
-		if (!vma || vma->vm_start >= reg_end)
+		vma = find_vma_intersection(current->mm, hva, reg_end);
+		if (!vma)
 			break;
 
 		/*
@@ -1330,10 +1331,11 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	 *     +--------------------------------------------+
 	 */
 	do {
-		struct vm_area_struct *vma = find_vma(current->mm, hva);
+		struct vm_area_struct *vma;
 		hva_t vm_start, vm_end;
 
-		if (!vma || vma->vm_start >= reg_end)
+		vma = find_vma_intersection(current->mm, hva, reg_end);
+		if (!vma)
 			break;
 
 		/*
-- 
2.23.0


* [PATCH 3/4] KVM: arm64: Fix address check for memory slot
  2021-03-15  4:18 [PATCH 0/4] KVM: arm64: Minor page fault handler improvement Gavin Shan
  2021-03-15  4:18 ` [PATCH 1/4] KVM: arm64: Hide kvm_mmu_wp_memory_region() Gavin Shan
  2021-03-15  4:18 ` [PATCH 2/4] KVM: arm64: Use find_vma_intersection() Gavin Shan
@ 2021-03-15  4:18 ` Gavin Shan
  2021-03-15  7:33   ` Keqian Zhu
  2021-03-15  4:18 ` [PATCH 4/4] KVM: arm64: Don't retrieve memory slot again in page fault handler Gavin Shan
  3 siblings, 1 reply; 17+ messages in thread
From: Gavin Shan @ 2021-03-15  4:18 UTC (permalink / raw)
  To: kvmarm; +Cc: linux-kernel, maz, will, alexandru.elisei, shan.gavin

The last (IPA) page can't be covered when a new memory slot is
added: -EFAULT is returned when a memory slot with the following
parameters is added to a VM that has a 40-bit IPA limit, on a host
with a 4KB base page size. This is incorrect because the last (IPA)
page is still usable.

   struct kvm_userspace_memory_region {
      __u32 slot;               /* 1            */
      __u32 flags;              /* 0            */
      __u64 guest_phys_addr;    /* 0xfffffff000 */
      __u64 memory_size;        /* 0x1000       */
      __u64 userspace_addr;
   };
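
The incorrect check can be illustrated with a small standalone program
(a sketch using the values above; the names are local stand-ins for
kvm_phys_size() and PAGE_SHIFT, not the kernel's symbols):

   #include <stdio.h>

   int main(void)
   {
      /* 40-bit IPA limit and 4KB base page size (PAGE_SHIFT = 12) */
      unsigned long long limit = (1ULL << 40) >> 12;        /* 0x10000000 */
      unsigned long long base_gfn = 0xfffffff000ULL >> 12;  /* 0xfffffff  */
      unsigned long long npages = 0x1000ULL >> 12;          /* 1          */

      /* old check: 0x10000000 >= 0x10000000 rejects the valid last page */
      printf("old: %s\n", base_gfn + npages >= limit ? "-EFAULT" : "ok");
      /* fixed check: 0x10000000 > 0x10000000 is false, so it's accepted */
      printf("new: %s\n", base_gfn + npages > limit ? "-EFAULT" : "ok");
      return 0;
   }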

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 286b603ed0d3..a5a8ade9fde4 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1313,7 +1313,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	 * Prevent userspace from creating a memory region outside of the IPA
 	 * space addressable by the KVM guest IPA space.
 	 */
-	if (memslot->base_gfn + memslot->npages >=
+	if (memslot->base_gfn + memslot->npages >
 	    (kvm_phys_size(kvm) >> PAGE_SHIFT))
 		return -EFAULT;
 
-- 
2.23.0


* [PATCH 4/4] KVM: arm64: Don't retrieve memory slot again in page fault handler
  2021-03-15  4:18 [PATCH 0/4] KVM: arm64: Minor page fault handler improvement Gavin Shan
                   ` (2 preceding siblings ...)
  2021-03-15  4:18 ` [PATCH 3/4] KVM: arm64: Fix address check for memory slot Gavin Shan
@ 2021-03-15  4:18 ` Gavin Shan
  2021-03-15  8:25   ` Keqian Zhu
  3 siblings, 1 reply; 17+ messages in thread
From: Gavin Shan @ 2021-03-15  4:18 UTC (permalink / raw)
  To: kvmarm; +Cc: linux-kernel, maz, will, alexandru.elisei, shan.gavin

We needn't retrieve the memory slot again in user_mem_abort() because
the corresponding memory slot has been passed from the caller. This
would save some CPU cycles. For example, the time needed to write 1GB of
memory, which is backed by 2MB hugetlb pages and write-protected,
drops by 6.8%, from 928ms to 864ms.
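
For context, gfn_to_pfn_prot() is essentially a wrapper that performs
the memslot lookup internally; roughly (a simplified sketch, not the
exact kernel source):

	kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn,
				  bool write_fault, bool *writable)
	{
		/* the per-fault memslot lookup this patch avoids */
		return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn,
					    false, NULL, write_fault,
					    writable, NULL);
	}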

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/kvm/mmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a5a8ade9fde4..4a4abcccfafb 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -846,7 +846,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 */
 	smp_rmb();
 
-	pfn = gfn_to_pfn_prot(kvm, gfn, write_fault, &writable);
+	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
+				   write_fault, &writable, NULL);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
 		kvm_send_hwpoison_signal(hva, vma_shift);
 		return 0;
@@ -912,7 +913,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	/* Mark the page dirty only if the fault is handled successfully */
 	if (writable && !ret) {
 		kvm_set_pfn_dirty(pfn);
-		mark_page_dirty(kvm, gfn);
+		mark_page_dirty_in_slot(kvm, memslot, gfn);
 	}
 
 out_unlock:
-- 
2.23.0


* Re: [PATCH 3/4] KVM: arm64: Fix address check for memory slot
  2021-03-15  4:18 ` [PATCH 3/4] KVM: arm64: Fix address check for memory slot Gavin Shan
@ 2021-03-15  7:33   ` Keqian Zhu
  2021-03-15  9:46     ` Gavin Shan
  0 siblings, 1 reply; 17+ messages in thread
From: Keqian Zhu @ 2021-03-15  7:33 UTC (permalink / raw)
  To: Gavin Shan, kvmarm; +Cc: maz, will, linux-kernel, shan.gavin

Hi Gavin,

FYI, this has been fixed by Marc in commit 262b003d059c.

Thanks,
Keqian

On 2021/3/15 12:18, Gavin Shan wrote:
> The last (IPA) page can't be covered when a new memory slot is
> added: -EFAULT is returned when a memory slot with the following
> parameters is added to a VM that has a 40-bit IPA limit, on a host
> with a 4KB base page size. This is incorrect because the last (IPA)
> page is still usable.
> 
>    struct kvm_userspace_memory_region {
>       __u32 slot;               /* 1            */
>       __u32 flags;              /* 0            */
>       __u64 guest_phys_addr;    /* 0xfffffff000 */
>       __u64 memory_size;        /* 0x1000       */
>       __u64 userspace_addr;
>    };
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  arch/arm64/kvm/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 286b603ed0d3..a5a8ade9fde4 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1313,7 +1313,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>  	 * Prevent userspace from creating a memory region outside of the IPA
>  	 * space addressable by the KVM guest IPA space.
>  	 */
> -	if (memslot->base_gfn + memslot->npages >=
> +	if (memslot->base_gfn + memslot->npages >
>  	    (kvm_phys_size(kvm) >> PAGE_SHIFT))
>  		return -EFAULT;
>  
> 

* Re: [PATCH 1/4] KVM: arm64: Hide kvm_mmu_wp_memory_region()
  2021-03-15  4:18 ` [PATCH 1/4] KVM: arm64: Hide kvm_mmu_wp_memory_region() Gavin Shan
@ 2021-03-15  7:49   ` Keqian Zhu
  0 siblings, 0 replies; 17+ messages in thread
From: Keqian Zhu @ 2021-03-15  7:49 UTC (permalink / raw)
  To: Gavin Shan, kvmarm; +Cc: maz, will, linux-kernel, shan.gavin

Hi Gavin,

This function has only ever been used by mmu.c, since its introduction in
commit c64735554c0a, so please feel free to add:

Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>


Thanks,
Keqian

On 2021/3/15 12:18, Gavin Shan wrote:
> We needn't expose the function as it's only used by mmu.c.
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  arch/arm64/include/asm/kvm_host.h | 1 -
>  arch/arm64/kvm/mmu.c              | 2 +-
>  2 files changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 3d10e6527f7d..688f2df1957b 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -632,7 +632,6 @@ void kvm_arm_resume_guest(struct kvm *kvm);
>  	})
>  
>  void force_vm_exit(const cpumask_t *mask);
> -void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
>  
>  int handle_exit(struct kvm_vcpu *vcpu, int exception_index);
>  void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index);
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 77cb2d28f2a4..84e70f953de6 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -555,7 +555,7 @@ static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_
>   * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
>   * serializing operations for VM memory regions.
>   */
> -void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
> +static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
>  {
>  	struct kvm_memslots *slots = kvm_memslots(kvm);
>  	struct kvm_memory_slot *memslot = id_to_memslot(slots, slot);
> 

* Re: [PATCH 2/4] KVM: arm64: Use find_vma_intersection()
  2021-03-15  4:18 ` [PATCH 2/4] KVM: arm64: Use find_vma_intersection() Gavin Shan
@ 2021-03-15  8:04   ` Keqian Zhu
  2021-03-15  9:42     ` Gavin Shan
  2021-03-15  8:52   ` Marc Zyngier
  1 sibling, 1 reply; 17+ messages in thread
From: Keqian Zhu @ 2021-03-15  8:04 UTC (permalink / raw)
  To: Gavin Shan, kvmarm; +Cc: maz, will, linux-kernel, shan.gavin

Hi Gavin,

On 2021/3/15 12:18, Gavin Shan wrote:
> find_vma_intersection() already exists to search for an intersecting
> vma. Use the function where applicable to simplify the code.
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  arch/arm64/kvm/mmu.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 84e70f953de6..286b603ed0d3 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -421,10 +421,11 @@ static void stage2_unmap_memslot(struct kvm *kvm,
>  	 *     +--------------------------------------------+
>  	 */
>  	do {
> -		struct vm_area_struct *vma = find_vma(current->mm, hva);
> +		struct vm_area_struct *vma;
>  		hva_t vm_start, vm_end;
>  
> -		if (!vma || vma->vm_start >= reg_end)
> +		vma = find_vma_intersection(current->mm, hva, reg_end);
Nit: keeping the same style may be better (assign vma when declaring it).
Otherwise looks good to me.

Thanks,
Keqian


> +		if (!vma)
>  			break;
>  
>  		/*
> @@ -1330,10 +1331,11 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>  	 *     +--------------------------------------------+
>  	 */
>  	do {
> -		struct vm_area_struct *vma = find_vma(current->mm, hva);
> +		struct vm_area_struct *vma;
>  		hva_t vm_start, vm_end;
>  
> -		if (!vma || vma->vm_start >= reg_end)
> +		vma = find_vma_intersection(current->mm, hva, reg_end);
> +		if (!vma)
>  			break;
>  
>  		/*
> 

* Re: [PATCH 4/4] KVM: arm64: Don't retrieve memory slot again in page fault handler
  2021-03-15  4:18 ` [PATCH 4/4] KVM: arm64: Don't retrieve memory slot again in page fault handler Gavin Shan
@ 2021-03-15  8:25   ` Keqian Zhu
  2021-03-15  9:56     ` Gavin Shan
  0 siblings, 1 reply; 17+ messages in thread
From: Keqian Zhu @ 2021-03-15  8:25 UTC (permalink / raw)
  To: Gavin Shan, kvmarm; +Cc: maz, will, linux-kernel, shan.gavin

Hi Gavin,

On 2021/3/15 12:18, Gavin Shan wrote:
> We needn't retrieve the memory slot again in user_mem_abort() because
> the corresponding memory slot has been passed from the caller. This
I think you are right. Though fault_ipa will be adjusted when we try to use a block mapping,
fault_supports_stage2_huge_mapping() makes sure we're not trying to map anything
not covered by the memslot, so the adjusted fault_ipa still belongs to the memslot.

> would save some CPU cycles. For example, the time needed to write 1GB of
> memory, which is backed by 2MB hugetlb pages and write-protected,
> drops by 6.8%, from 928ms to 864ms.
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  arch/arm64/kvm/mmu.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index a5a8ade9fde4..4a4abcccfafb 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -846,7 +846,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	 */
>  	smp_rmb();
>  
> -	pfn = gfn_to_pfn_prot(kvm, gfn, write_fault, &writable);
> +	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
> +				   write_fault, &writable, NULL);
It's better to update the code comments at the same time.

>  	if (pfn == KVM_PFN_ERR_HWPOISON) {
>  		kvm_send_hwpoison_signal(hva, vma_shift);
>  		return 0;
> @@ -912,7 +913,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	/* Mark the page dirty only if the fault is handled successfully */
>  	if (writable && !ret) {
>  		kvm_set_pfn_dirty(pfn);
> -		mark_page_dirty(kvm, gfn);
> +		mark_page_dirty_in_slot(kvm, memslot, gfn);
>  	}
>  
>  out_unlock:
> 

Thanks,
Keqian.

* Re: [PATCH 2/4] KVM: arm64: Use find_vma_intersection()
  2021-03-15  4:18 ` [PATCH 2/4] KVM: arm64: Use find_vma_intersection() Gavin Shan
  2021-03-15  8:04   ` Keqian Zhu
@ 2021-03-15  8:52   ` Marc Zyngier
  2021-03-15  9:40     ` Gavin Shan
  1 sibling, 1 reply; 17+ messages in thread
From: Marc Zyngier @ 2021-03-15  8:52 UTC (permalink / raw)
  To: Gavin Shan; +Cc: kvmarm, linux-kernel, will, alexandru.elisei, shan.gavin

On Mon, 15 Mar 2021 04:18:42 +0000,
Gavin Shan <gshan@redhat.com> wrote:
> 
> find_vma_intersection() already exists to search for an intersecting
> vma. Use the function where applicable to simplify the code.
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  arch/arm64/kvm/mmu.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 84e70f953de6..286b603ed0d3 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -421,10 +421,11 @@ static void stage2_unmap_memslot(struct kvm *kvm,
>  	 *     +--------------------------------------------+
>  	 */
>  	do {
> -		struct vm_area_struct *vma = find_vma(current->mm, hva);
> +		struct vm_area_struct *vma;
>  		hva_t vm_start, vm_end;
>  
> -		if (!vma || vma->vm_start >= reg_end)
> +		vma = find_vma_intersection(current->mm, hva, reg_end);

For context, here's the definition of find_vma_intersection():

<quote>
static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * mm, unsigned long start_addr, unsigned long end_addr)
{
	struct vm_area_struct * vma = find_vma(mm,start_addr);

	if (vma && end_addr <= vma->vm_start)
		vma = NULL;
	return vma;
}
</quote>

It seems that there is a boundary issue in either the old code or the
new one in the case where (reg_end == vma->vm_start).

Which one is which?

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH 2/4] KVM: arm64: Use find_vma_intersection()
  2021-03-15  8:52   ` Marc Zyngier
@ 2021-03-15  9:40     ` Gavin Shan
  0 siblings, 0 replies; 17+ messages in thread
From: Gavin Shan @ 2021-03-15  9:40 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, linux-kernel, will, alexandru.elisei, shan.gavin

Hi Marc,

On 3/15/21 7:52 PM, Marc Zyngier wrote:
> On Mon, 15 Mar 2021 04:18:42 +0000,
> Gavin Shan <gshan@redhat.com> wrote:
>>
>> find_vma_intersection() already exists to search for an intersecting
>> vma. Use the function where applicable to simplify the code.
>>
>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>> ---
>>   arch/arm64/kvm/mmu.c | 10 ++++++----
>>   1 file changed, 6 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 84e70f953de6..286b603ed0d3 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -421,10 +421,11 @@ static void stage2_unmap_memslot(struct kvm *kvm,
>>   	 *     +--------------------------------------------+
>>   	 */
>>   	do {
>> -		struct vm_area_struct *vma = find_vma(current->mm, hva);
>> +		struct vm_area_struct *vma;
>>   		hva_t vm_start, vm_end;
>>   
>> -		if (!vma || vma->vm_start >= reg_end)
>> +		vma = find_vma_intersection(current->mm, hva, reg_end);
> 
> For context, here's the definition of find_vma_intersection():
> 
> <quote>
> static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * mm, unsigned long start_addr, unsigned long end_addr)
> {
> 	struct vm_area_struct * vma = find_vma(mm,start_addr);
> 
> 	if (vma && end_addr <= vma->vm_start)
> 		vma = NULL;
> 	return vma;
> }
> </quote>
> 
> It seems that there is a boundary issue in either the old code or the
> new one in the case where (reg_end == vma->vm_start).
> 
> Which one is which?
> 

The old and new code are interchangeable, meaning "reg_end == vma->vm_start"
is treated as no intersection in both cases. So if there is a boundary issue,
the old and new code have the same issue.

According to the code, "reg_end == vma->vm_start" means no overlap, so I
don't see a boundary issue. Hopefully I'm not missing anything :)
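
To spell that out, a tiny standalone demo of the boundary case (using a
local stand-in type, not the kernel's struct vm_area_struct):

	#include <stdio.h>

	struct vma { unsigned long vm_start, vm_end; };

	int main(void)
	{
		struct vma v = { .vm_start = 0x2000, .vm_end = 0x3000 };
		unsigned long reg_end = v.vm_start;	/* boundary case */

		/* old code: break when vma->vm_start >= reg_end */
		printf("old breaks: %d\n", v.vm_start >= reg_end);
		/* new code: find_vma_intersection() returns NULL when
		 * end_addr <= vma->vm_start, so the loop breaks too */
		printf("new breaks: %d\n", reg_end <= v.vm_start);
		/* both print 1: equality counts as "no intersection" */
		return 0;
	}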

Thanks,
Gavin


* Re: [PATCH 2/4] KVM: arm64: Use find_vma_intersection()
  2021-03-15  8:04   ` Keqian Zhu
@ 2021-03-15  9:42     ` Gavin Shan
  2021-03-16  3:52       ` Gavin Shan
  0 siblings, 1 reply; 17+ messages in thread
From: Gavin Shan @ 2021-03-15  9:42 UTC (permalink / raw)
  To: Keqian Zhu, kvmarm; +Cc: maz, will, linux-kernel, shan.gavin

Hi Keqian,

On 3/15/21 7:04 PM, Keqian Zhu wrote:
> On 2021/3/15 12:18, Gavin Shan wrote:
>> find_vma_intersection() already exists to search for an intersecting
>> vma. Use the function where applicable to simplify the code.
>>
>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>> ---
>>   arch/arm64/kvm/mmu.c | 10 ++++++----
>>   1 file changed, 6 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 84e70f953de6..286b603ed0d3 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -421,10 +421,11 @@ static void stage2_unmap_memslot(struct kvm *kvm,
>>   	 *     +--------------------------------------------+
>>   	 */
>>   	do {
>> -		struct vm_area_struct *vma = find_vma(current->mm, hva);
>> +		struct vm_area_struct *vma;
>>   		hva_t vm_start, vm_end;
>>   
>> -		if (!vma || vma->vm_start >= reg_end)
>> +		vma = find_vma_intersection(current->mm, hva, reg_end);
> Nit: keeping the same style may be better (assign vma when declaring it).
> Otherwise looks good to me.
> 

Yeah, I agree. I will adjust the code in v2 and include your r-b.
Thanks for taking the time to review.

Thanks,
Gavin

  
>> +		if (!vma)
>>   			break;
>>   
>>   		/*
>> @@ -1330,10 +1331,11 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>>   	 *     +--------------------------------------------+
>>   	 */
>>   	do {
>> -		struct vm_area_struct *vma = find_vma(current->mm, hva);
>> +		struct vm_area_struct *vma;
>>   		hva_t vm_start, vm_end;
>>   
>> -		if (!vma || vma->vm_start >= reg_end)
>> +		vma = find_vma_intersection(current->mm, hva, reg_end);
>> +		if (!vma)
>>   			break;
>>   
>>   		/*
>>
> 


* Re: [PATCH 3/4] KVM: arm64: Fix address check for memory slot
  2021-03-15  7:33   ` Keqian Zhu
@ 2021-03-15  9:46     ` Gavin Shan
  0 siblings, 0 replies; 17+ messages in thread
From: Gavin Shan @ 2021-03-15  9:46 UTC (permalink / raw)
  To: Keqian Zhu, kvmarm; +Cc: maz, will, linux-kernel, shan.gavin

Hi Keqian,

On 3/15/21 6:33 PM, Keqian Zhu wrote:
> FYI, this has been fixed by Marc in commit 262b003d059c.
> 

Yeah, I didn't check the 5.12-rc3 code, where the issue has already
been fixed. Please ignore this one, and sorry for the noise.

Thanks,
Gavin
  
> On 2021/3/15 12:18, Gavin Shan wrote:
>> The last (IPA) page can't be covered when a new memory slot is
>> added: -EFAULT is returned when a memory slot with the following
>> parameters is added to a VM that has a 40-bit IPA limit, on a host
>> with a 4KB base page size. This is incorrect because the last (IPA)
>> page is still usable.
>>
>>     struct kvm_userspace_memory_region {
>>        __u32 slot;               /* 1            */
>>        __u32 flags;              /* 0            */
>>        __u64 guest_phys_addr;    /* 0xfffffff000 */
>>        __u64 memory_size;        /* 0x1000       */
>>        __u64 userspace_addr;
>>     };
>>
>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>> ---
>>   arch/arm64/kvm/mmu.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 286b603ed0d3..a5a8ade9fde4 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1313,7 +1313,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>>   	 * Prevent userspace from creating a memory region outside of the IPA
>>   	 * space addressable by the KVM guest IPA space.
>>   	 */
>> -	if (memslot->base_gfn + memslot->npages >=
>> +	if (memslot->base_gfn + memslot->npages >
>>   	    (kvm_phys_size(kvm) >> PAGE_SHIFT))
>>   		return -EFAULT;
>>   
>>
> 


* Re: [PATCH 4/4] KVM: arm64: Don't retrieve memory slot again in page fault handler
  2021-03-15  8:25   ` Keqian Zhu
@ 2021-03-15  9:56     ` Gavin Shan
  2021-03-15 10:46       ` Keqian Zhu
  0 siblings, 1 reply; 17+ messages in thread
From: Gavin Shan @ 2021-03-15  9:56 UTC (permalink / raw)
  To: Keqian Zhu, kvmarm; +Cc: maz, will, linux-kernel, shan.gavin

Hi Keqian,

On 3/15/21 7:25 PM, Keqian Zhu wrote:
> On 2021/3/15 12:18, Gavin Shan wrote:
>> We needn't retrieve the memory slot again in user_mem_abort() because
>> the corresponding memory slot has been passed from the caller. This
>> I think you are right. Though fault_ipa will be adjusted when we try to use a block mapping,
>> fault_supports_stage2_huge_mapping() makes sure we're not trying to map anything
>> not covered by the memslot, so the adjusted fault_ipa still belongs to the memslot.
> 

Yeah, it's correct. Besides, @logging_active is determined
based on the passed memory slot. It means user_mem_abort() can't
support a memory range which spans multiple memory slots.

>> would save some CPU cycles. For example, the time needed to write 1GB of
>> memory, which is backed by 2MB hugetlb pages and write-protected,
>> drops by 6.8%, from 928ms to 864ms.
>>
>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>> ---
>>   arch/arm64/kvm/mmu.c | 5 +++--
>>   1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index a5a8ade9fde4..4a4abcccfafb 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -846,7 +846,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>   	 */
>>   	smp_rmb();
>>   
>> -	pfn = gfn_to_pfn_prot(kvm, gfn, write_fault, &writable);
>> +	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
>> +				   write_fault, &writable, NULL);
> It's better to update the code comments at the same time.
> 

I guess you need some comments here? If so, I would add something
like below in v2:

	/*
	 * gfn_to_pfn_prot() could be used here, but it would introduce
	 * unnecessary overhead to locate the memory slot, which is
	 * always fixed even when @gfn is adjusted for huge pages.
	 */

>>   	if (pfn == KVM_PFN_ERR_HWPOISON) {
>>   		kvm_send_hwpoison_signal(hva, vma_shift);
>>   		return 0;
>> @@ -912,7 +913,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>   	/* Mark the page dirty only if the fault is handled successfully */
>>   	if (writable && !ret) {
>>   		kvm_set_pfn_dirty(pfn);
>> -		mark_page_dirty(kvm, gfn);
>> +		mark_page_dirty_in_slot(kvm, memslot, gfn);
>>   	}
>>   
>>   out_unlock:
>>

Thanks,
Gavin



* Re: [PATCH 4/4] KVM: arm64: Don't retrieve memory slot again in page fault handler
  2021-03-15  9:56     ` Gavin Shan
@ 2021-03-15 10:46       ` Keqian Zhu
  0 siblings, 0 replies; 17+ messages in thread
From: Keqian Zhu @ 2021-03-15 10:46 UTC (permalink / raw)
  To: Gavin Shan, kvmarm; +Cc: maz, will, linux-kernel, shan.gavin

Hi Gavin,

On 2021/3/15 17:56, Gavin Shan wrote:
> Hi Keqian,
> 
> On 3/15/21 7:25 PM, Keqian Zhu wrote:
>> On 2021/3/15 12:18, Gavin Shan wrote:
>>> We needn't retrieve the memory slot again in user_mem_abort() because
>>> the corresponding memory slot has been passed from the caller. This
>> I think you are right. Though fault_ipa will be adjusted when we try to use a block mapping,
>> fault_supports_stage2_huge_mapping() makes sure we're not trying to map anything
>> not covered by the memslot, so the adjusted fault_ipa still belongs to the memslot.
>>
> 
> Yeah, it's correct. Besides, @logging_active is determined
> based on the passed memory slot. It means user_mem_abort() can't
> support a memory range which spans multiple memory slots.
> 
>>> would save some CPU cycles. For example, the time needed to write 1GB of
>>> memory, which is backed by 2MB hugetlb pages and write-protected,
>>> drops by 6.8%, from 928ms to 864ms.
>>>
>>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>>> ---
>>>   arch/arm64/kvm/mmu.c | 5 +++--
>>>   1 file changed, 3 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>>> index a5a8ade9fde4..4a4abcccfafb 100644
>>> --- a/arch/arm64/kvm/mmu.c
>>> +++ b/arch/arm64/kvm/mmu.c
>>> @@ -846,7 +846,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>        */
>>>       smp_rmb();
>>>   -    pfn = gfn_to_pfn_prot(kvm, gfn, write_fault, &writable);
>>> +    pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
>>> +                   write_fault, &writable, NULL);
>> It's better to update the code comments at the same time.
>>
> 
> I guess you need some comments here? If so, I would add something
> like below in v2:
> 
>     /*
>      * gfn_to_pfn_prot() could be used here, but it would introduce
>      * unnecessary overhead to locate the memory slot, which is
>      * always fixed even when @gfn is adjusted for huge pages.
>      */
Looks good.

See the comments above "smp_rmb();"; what I actually meant is that we should change
"gfn_to_pfn_prot" to "__gfn_to_pfn_memslot" in them :)

Thanks,
Keqian

> 
>>>       if (pfn == KVM_PFN_ERR_HWPOISON) {
>>>           kvm_send_hwpoison_signal(hva, vma_shift);
>>>           return 0;
>>> @@ -912,7 +913,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>       /* Mark the page dirty only if the fault is handled successfully */
>>>       if (writable && !ret) {
>>>           kvm_set_pfn_dirty(pfn);
>>> -        mark_page_dirty(kvm, gfn);
>>> +        mark_page_dirty_in_slot(kvm, memslot, gfn);
>>>       }
>>>     out_unlock:
>>>
> 
> Thanks,
> Gavin
> 
> 
> .
> 

* Re: [PATCH 2/4] KVM: arm64: Use find_vma_intersection()
  2021-03-15  9:42     ` Gavin Shan
@ 2021-03-16  3:52       ` Gavin Shan
  2021-03-16  4:20         ` Keqian Zhu
  0 siblings, 1 reply; 17+ messages in thread
From: Gavin Shan @ 2021-03-16  3:52 UTC (permalink / raw)
  To: Keqian Zhu, kvmarm; +Cc: maz, will, linux-kernel, shan.gavin

Hi Keqian,

On 3/15/21 8:42 PM, Gavin Shan wrote:
> On 3/15/21 7:04 PM, Keqian Zhu wrote:
>> On 2021/3/15 12:18, Gavin Shan wrote:
>>> find_vma_intersection() already exists to search for an intersecting
>>> vma. Use the function where applicable to simplify the code.
>>>
>>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>>> ---
>>>   arch/arm64/kvm/mmu.c | 10 ++++++----
>>>   1 file changed, 6 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>>> index 84e70f953de6..286b603ed0d3 100644
>>> --- a/arch/arm64/kvm/mmu.c
>>> +++ b/arch/arm64/kvm/mmu.c
>>> @@ -421,10 +421,11 @@ static void stage2_unmap_memslot(struct kvm *kvm,
>>>        *     +--------------------------------------------+
>>>        */
>>>       do {
>>> -        struct vm_area_struct *vma = find_vma(current->mm, hva);
>>> +        struct vm_area_struct *vma;
>>>           hva_t vm_start, vm_end;
>>> -        if (!vma || vma->vm_start >= reg_end)
>>> +        vma = find_vma_intersection(current->mm, hva, reg_end);
>> Nit: keeping the same style may be better (assign vma when declaring it).
>> Otherwise looks good to me.
>>
> 
>> Yeah, I agree. I will adjust the code in v2 and include your r-b.
>> Thanks for taking the time to review.
> 

After rechecking the code, I think it'd be better to keep the current style
because there is a follow-on validation on @vma. Keeping them together
seems a good idea. I think it won't be a big deal to you, so I will
keep the current style with your r-b in v2.

	vma = find_vma_intersection(current->mm, hva, reg_end);
	if (!vma)
		break;

Thanks,
Gavin
  
>>> +        if (!vma)
>>>               break;
>>>           /*
>>> @@ -1330,10 +1331,11 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>>>        *     +--------------------------------------------+
>>>        */
>>>       do {
>>> -        struct vm_area_struct *vma = find_vma(current->mm, hva);
>>> +        struct vm_area_struct *vma;
>>>           hva_t vm_start, vm_end;
>>> -        if (!vma || vma->vm_start >= reg_end)
>>> +        vma = find_vma_intersection(current->mm, hva, reg_end);
>>> +        if (!vma)
>>>               break;
>>>           /*
>>>
>>
> 


* Re: [PATCH 2/4] KVM: arm64: Use find_vma_intersection()
  2021-03-16  3:52       ` Gavin Shan
@ 2021-03-16  4:20         ` Keqian Zhu
  0 siblings, 0 replies; 17+ messages in thread
From: Keqian Zhu @ 2021-03-16  4:20 UTC (permalink / raw)
  To: Gavin Shan, kvmarm; +Cc: maz, will, linux-kernel, shan.gavin

Hi Gavin,

On 2021/3/16 11:52, Gavin Shan wrote:
> Hi Keqian,
> 
> On 3/15/21 8:42 PM, Gavin Shan wrote:
>> On 3/15/21 7:04 PM, Keqian Zhu wrote:
>>> On 2021/3/15 12:18, Gavin Shan wrote:
>>>> find_vma_intersection() already exists to search for an intersecting
>>>> vma. Use the function where applicable to simplify the code.
>>>>
>>>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>>>> ---
>>>>   arch/arm64/kvm/mmu.c | 10 ++++++----
>>>>   1 file changed, 6 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>>>> index 84e70f953de6..286b603ed0d3 100644
>>>> --- a/arch/arm64/kvm/mmu.c
>>>> +++ b/arch/arm64/kvm/mmu.c
>>>> @@ -421,10 +421,11 @@ static void stage2_unmap_memslot(struct kvm *kvm,
>>>>        *     +--------------------------------------------+
>>>>        */
>>>>       do {
>>>> -        struct vm_area_struct *vma = find_vma(current->mm, hva);
>>>> +        struct vm_area_struct *vma;
>>>>           hva_t vm_start, vm_end;
>>>> -        if (!vma || vma->vm_start >= reg_end)
>>>> +        vma = find_vma_intersection(current->mm, hva, reg_end);
>>> Nit: keeping the same style may be better (assign vma when declaring it).
>>> Otherwise looks good to me.
>>>
>>
>> Yeah, I agree. I will adjust the code in v2 and include your r-b.
>> Thanks for taking the time to review.
>>
> 
> After rechecking the code, I think it'd be better to keep the current style
> because there is a follow-on validation on @vma. Keeping them together
> seems a good idea. I think it won't be a big deal to you, so I will
> keep the current style with your r-b in v2.
Sure, both are OK. ;-)

Thanks,
Keqian
> 
> 	vma = find_vma_intersection(current->mm, hva, reg_end);
> 	if (!vma)
> 		break;
> Thanks,
> Gavin
>  
>>>> +        if (!vma)
>>>>               break;
>>>>           /*
>>>> @@ -1330,10 +1331,11 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>>>>        *     +--------------------------------------------+
>>>>        */
>>>>       do {
>>>> -        struct vm_area_struct *vma = find_vma(current->mm, hva);
>>>> +        struct vm_area_struct *vma;
>>>>           hva_t vm_start, vm_end;
>>>> -        if (!vma || vma->vm_start >= reg_end)
>>>> +        vma = find_vma_intersection(current->mm, hva, reg_end);
>>>> +        if (!vma)
>>>>               break;
>>>>           /*
>>>>
>>>
>>
> 
> .
> 
