kvmarm.lists.cs.columbia.edu archive mirror
* [PATCH v2 0/1] KVM: arm64: fix the mmio faulting
@ 2020-10-26 11:24 Santosh Shukla
  2020-10-26 11:24 ` [PATCH v2 1/1] KVM: arm64: Correctly handle " Santosh Shukla
  2020-10-29 21:09 ` [PATCH v2 0/1] KVM: arm64: fix " Marc Zyngier
  0 siblings, 2 replies; 4+ messages in thread
From: Santosh Shukla @ 2020-10-26 11:24 UTC (permalink / raw)
  To: maz, kvm, kvmarm, linux-kernel
  Cc: mcrossley, cjia, kwankhede, will, linux-arm-kernel

Description of the reproducer scenario, as asked for in the thread [1].

I tried to create the reproducer scenario with the vfio-pci driver, using
an nvidia GPU in passthrough mode. Since the vfio-pci driver now supports
vma faulting (vfio_pci_mmap_fault()), I could construct a crude
reproducer with it.

To reproduce, I made an ugly hack in arch/arm64/kvm/mmu.c. The hack makes
sure that the stage-2 mappings are not created at VM init time, by
clearing the VM_PFNMAP flag. Clearing the flag is needed because
vfio-pci's mmap function (vfio_pci_mmap()) sets VM_PFNMAP for the MMIO
region by default, whereas I want remap_pfn_range() to set the flag from
vfio's fault handler, vfio_pci_mmap_fault().

With the above in place, a guest access to the MMIO region triggers the
mmio fault path in the arm64 KVM hypervisor, like below:
user_mem_abort() {->...
    --> checks the VM_PFNMAP flag; since it is not set, force_pte=false
    ....
    __gfn_to_pfn_memslot()-->
    ...
    handle_mm_fault()-->
    do_fault()-->
    vfio_pci_mmap_fault()-->
    remap_pfn_range()--> Now will set the VM_PFNMAP flag.
}

Because force_pte is left false, this leads to the THP Oops mentioned in
[2]. Setting force_pte=true avoids the Oops, and patch [1/1] does
exactly that.

Hackish change to reproduce the scenario:
--->
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d4cd25334610..b0a999aa6a95 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1318,6 +1318,12 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 		vm_start = max(hva, vma->vm_start);
 		vm_end = min(reg_end, vma->vm_end);
 
+		/*
+		 * Hack to make sure the stage-2 mapping is not present,
+		 * thus triggering user_mem_abort() for the stage-2 mapping.
+		 */
+		if (vma->vm_flags & VM_PFNMAP)
+			vma->vm_flags &= ~VM_PFNMAP;
 		if (vma->vm_flags & VM_PFNMAP) {
 			gpa_t gpa = mem->guest_phys_addr +
 				    (vm_start - mem->userspace_addr);


Thanks.
Santosh

[1] https://lkml.org/lkml/2020/10/23/310
[2] https://lkml.org/lkml/2020/10/21/460


Santosh Shukla (1):
  KVM: arm64: Correctly handle the mmio faulting

 arch/arm64/kvm/mmu.c | 1 +
 1 file changed, 1 insertion(+)

-- 
2.7.4

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


* [PATCH v2 1/1] KVM: arm64: Correctly handle the mmio faulting
  2020-10-26 11:24 [PATCH v2 0/1] KVM: arm64: fix the mmio faulting Santosh Shukla
@ 2020-10-26 11:24 ` Santosh Shukla
  2020-10-27  4:04   ` Gavin Shan
  2020-10-29 21:09 ` [PATCH v2 0/1] KVM: arm64: fix " Marc Zyngier
  1 sibling, 1 reply; 4+ messages in thread
From: Santosh Shukla @ 2020-10-26 11:24 UTC (permalink / raw)
  To: maz, kvm, kvmarm, linux-kernel
  Cc: mcrossley, cjia, kwankhede, will, linux-arm-kernel

Commit 6d674e28 ("KVM: arm/arm64: Properly handle faulting of device
mappings") introduced detection and handling of device mappings: it
checks whether the VM_PFNMAP flag is set in vma->flags, and if so marks
force_pte as true, so that the THP adjustment
(transparent_hugepage_adjust()) is skipped.

There is an issue with how the VM_PFNMAP flag gets set and checked.
Consider a case where an mdev vendor driver registers a vma fault
handler, say vma_mmio_fault(), which maps the host MMIO region by
calling remap_pfn_range(). remap_pfn_range() implicitly sets the
VM_PFNMAP flag in vma->flags, but only once the fault handler has run.

Now assume an mmio fault handling flow where the guest first accesses
the MMIO region, whose stage-2 translation is not yet present. This
results in the arm64 KVM hypervisor executing the guest abort handler,
like below:

kvm_handle_guest_abort() -->
 user_mem_abort()--> {

    ...
    0. Checks vma->flags for VM_PFNMAP.
    1. Since the VM_PFNMAP flag is not yet set, force_pte stays false;
    2. gfn_to_pfn_prot() -->
        __gfn_to_pfn_memslot() -->
            fixup_user_fault() -->
                handle_mm_fault()-->
                    __do_fault() -->
                       vma_mmio_fault() --> // vendor's mdev fault handler
                        remap_pfn_range()--> // Here sets the VM_PFNMAP
                                                flag into vma->flags.
    3. Since force_pte was left false in step 1,
       transparent_hugepage_adjust() is executed and
       that leads to the Oops [4].
 }

The proposal is to set force_pte to true when kvm_is_device_pfn() is
true.

[4] THP Oops:
> pc: kvm_is_transparent_hugepage+0x18/0xb0
> ...
> ...
> user_mem_abort+0x340/0x9b8
> kvm_handle_guest_abort+0x248/0x468
> handle_exit+0x150/0x1b0
> kvm_arch_vcpu_ioctl_run+0x4d4/0x778
> kvm_vcpu_ioctl+0x3c0/0x858
> ksys_ioctl+0x84/0xb8
> __arm64_sys_ioctl+0x28/0x38

Tested on a Huawei Kunpeng TaiShan 200 arm64 server, using a VFIO-mdev
device. Linux 5.10-rc1, tip commit: 3650b228

Fixes: 6d674e28 ("KVM: arm/arm64: Properly handle faulting of device mappings")
Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Santosh Shukla <sashukla@nvidia.com>
---
v2:
- Per Marc's suggestion - setting force_pte=true.
- Rebased and tested for 5.10-rc1 commit: 3650b228

v1: https://lkml.org/lkml/2020/10/21/460

 arch/arm64/kvm/mmu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 19aacc7..d4cd253 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -839,6 +839,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 	if (kvm_is_device_pfn(pfn)) {
 		device = true;
+		force_pte = true;
 	} else if (logging_active && !write_fault) {
 		/*
 		 * Only actually map the page as writable if this was a write
-- 
2.7.4



* Re: [PATCH v2 1/1] KVM: arm64: Correctly handle the mmio faulting
  2020-10-26 11:24 ` [PATCH v2 1/1] KVM: arm64: Correctly handle " Santosh Shukla
@ 2020-10-27  4:04   ` Gavin Shan
  0 siblings, 0 replies; 4+ messages in thread
From: Gavin Shan @ 2020-10-27  4:04 UTC (permalink / raw)
  To: Santosh Shukla, maz, kvm, kvmarm, linux-kernel
  Cc: mcrossley, kwankhede, cjia, linux-arm-kernel, will

Hi Santosh,

On 10/26/20 10:24 PM, Santosh Shukla wrote:
> The Commit:6d674e28 introduces a notion to detect and handle the
> device mapping. The commit checks for the VM_PFNMAP flag is set
> in vma->flags and if set then marks force_pte to true such that
> if force_pte is true then ignore the THP function check
> (/transparent_hugepage_adjust()).
> 
> [...]

Reviewed-by: Gavin Shan <gshan@redhat.com>


Cheers,
Gavin



* Re: [PATCH v2 0/1] KVM: arm64: fix the mmio faulting
  2020-10-26 11:24 [PATCH v2 0/1] KVM: arm64: fix the mmio faulting Santosh Shukla
  2020-10-26 11:24 ` [PATCH v2 1/1] KVM: arm64: Correctly handle " Santosh Shukla
@ 2020-10-29 21:09 ` Marc Zyngier
  1 sibling, 0 replies; 4+ messages in thread
From: Marc Zyngier @ 2020-10-29 21:09 UTC (permalink / raw)
  To: kvmarm, Gavin Shan, Santosh Shukla, kvm, linux-kernel
  Cc: mcrossley, cjia, kwankhede, linux-arm-kernel, shan.gavin, will

On Mon, 26 Oct 2020 16:54:06 +0530, Santosh Shukla wrote:
> Description of the Reproducer scenario as asked in the thread [1].
> 
> Tried to create the reproducer scenario with vfio-pci driver using
> nvidia GPU in PT mode, As because vfio-pci driver now supports
> vma faulting (/vfio_pci_mmap_fault) so could create a crude reproducer
> situation with that.
> 
> [...]

Applied to next, thanks!

[1/1] KVM: arm64: Force PTE mapping on fault resulting in a device mapping
      commit: 91a2c34b7d6fadc9c5d9433c620ea4c32ee7cae8

Cheers,

	M.
-- 
Without deviation from the norm, progress is not possible.



