From: Marc Zyngier <maz@kernel.org>
To: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 2/2] KVM: arm64: Try PMD block mappings if PUD mappings are not supported
Date: Fri, 04 Sep 2020 10:58:58 +0100
Message-ID: <87sgbx7ti5.wl-maz@kernel.org>
In-Reply-To: <20200901133357.52640-3-alexandru.elisei@arm.com>

Hi Alex,

On Tue, 01 Sep 2020 14:33:57 +0100,
Alexandru Elisei <alexandru.elisei@arm.com> wrote:
> 
> When userspace uses hugetlbfs for the VM memory, user_mem_abort() tries to
> use the same block size to map the faulting IPA in stage 2. If stage 2
> cannot use the same size mapping because the block size doesn't fit in the
> memslot or the memslot is not properly aligned, user_mem_abort() will fall
> back to a page mapping, regardless of the block size. We can do better for
> PUD-backed hugetlbfs by checking if a PMD block mapping is possible before
> deciding to use a page.
> 
> vma_pagesize is an unsigned long; use 1UL instead of 1ULL when assigning
> its value.
> 
> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
> ---
>  arch/arm64/kvm/mmu.c | 19 ++++++++++++++-----
>  1 file changed, 14 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 25e7dc52c086..f590f7355cda 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1871,15 +1871,24 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	else
>  		vma_shift = PAGE_SHIFT;
>  
> -	vma_pagesize = 1ULL << vma_shift;
>  	if (logging_active ||
> -	    (vma->vm_flags & VM_PFNMAP) ||
> -	    !fault_supports_stage2_huge_mapping(memslot, hva, vma_pagesize)) {
> +	    (vma->vm_flags & VM_PFNMAP)) {
>  		force_pte = true;
> -		vma_pagesize = PAGE_SIZE;
>  		vma_shift = PAGE_SHIFT;
>  	}
>  
> +	if (vma_shift == PUD_SHIFT &&
> +	    !fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
> +		vma_shift = PMD_SHIFT;
> +
> +	if (vma_shift == PMD_SHIFT &&
> +	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
> +		force_pte = true;
> +		vma_shift = PAGE_SHIFT;
> +	}
> +
> +	vma_pagesize = 1UL << vma_shift;
> +
>  	/*
>  	 * The stage2 has a minimum of 2 level table (For arm64 see
>  	 * kvm_arm_setup_stage2()). Hence, we are guaranteed that we can
> @@ -1889,7 +1898,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	 */
>  	if (vma_pagesize == PMD_SIZE ||
>  	    (vma_pagesize == PUD_SIZE && kvm_stage2_has_pmd(kvm)))
> -		gfn = (fault_ipa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;
> +		gfn = (fault_ipa & ~(vma_pagesize - 1)) >> PAGE_SHIFT;
>  	mmap_read_unlock(current->mm);
>  
>  	/* We need minimum second+third level pages */

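For illustration, the two hunks above boil down to the following standalone
sketch (userspace C, not kernel code; PAGE_SHIFT/PMD_SHIFT/PUD_SHIFT use the
arm64 4K-page values, and fault_supports_stage2_huge_mapping() is replaced by
a stub that pretends the memslot is 2M- but not 1G-aligned):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* arm64 4K-page geometry; everything else here is illustrative only. */
#define PAGE_SHIFT	12
#define PMD_SHIFT	21	/* 2M block */
#define PUD_SHIFT	30	/* 1G block */

/*
 * Stub for fault_supports_stage2_huge_mapping(): pretend the memslot
 * is 2M-aligned but not 1G-aligned, so PUD mappings must fall back.
 */
static bool supports_huge_mapping(uint64_t size)
{
	return size <= (1UL << PMD_SHIFT);
}

int main(void)
{
	unsigned long vma_shift = PUD_SHIFT;	/* PUD-backed hugetlbfs */
	bool force_pte = false;

	/* The cascade from the first hunk: PUD -> PMD -> PTE. */
	if (vma_shift == PUD_SHIFT &&
	    !supports_huge_mapping(1UL << PUD_SHIFT))
		vma_shift = PMD_SHIFT;

	if (vma_shift == PMD_SHIFT &&
	    !supports_huge_mapping(1UL << PMD_SHIFT)) {
		force_pte = true;
		vma_shift = PAGE_SHIFT;
	}

	unsigned long vma_pagesize = 1UL << vma_shift;

	/*
	 * The gfn computation from the second hunk: round the faulting
	 * IPA down to the start of its block, then convert to a frame
	 * number.
	 */
	uint64_t fault_ipa = 0x40123456;	/* arbitrary faulting IPA */
	uint64_t gfn = (fault_ipa & ~((uint64_t)vma_pagesize - 1))
			>> PAGE_SHIFT;

	printf("vma_pagesize=%#lx force_pte=%d gfn=%#lx\n",
	       vma_pagesize, force_pte, (unsigned long)gfn);
	return 0;
}

With the stub as written, the PUD request falls back to a 2M mapping and the
gfn is rounded down to the 2M boundary; a 1G-aligned stub would leave the PUD
path untouched.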
Although this looks like a sensible change, I'm reluctant to take it
at this stage, given that we already have a bunch of patches from Will
to change the way we deal with PTs.

Could you look into how this could fit into the new code instead?

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.