From: Marc Zyngier <maz@kernel.org>
To: Gavin Shan <gshan@redhat.com>
Cc: kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
	will@kernel.org, alexandru.elisei@arm.com
Subject: Re: [PATCH 3/3] KVM: arm64: Failback on unsupported huge page sizes
Date: Sun, 25 Oct 2020 10:48:27 +0000	[thread overview]
Message-ID: <87y2juoa2s.wl-maz@kernel.org> (raw)
In-Reply-To: <20201025002739.5804-4-gshan@redhat.com>

On Sun, 25 Oct 2020 01:27:39 +0100,
Gavin Shan <gshan@redhat.com> wrote:
> 
> A huge page can be mapped through multiple contiguous PMDs or PTEs,
> but the corresponding huge page sizes aren't currently supported by
> the page table walker.
> 
> Fall back from the unsupported huge page sizes to the nearest
> supported ones; otherwise the guest can't boot successfully:
> CONT_PMD_SHIFT falls back to PMD_SHIFT, and CONT_PTE_SHIFT to
> PAGE_SHIFT.
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  arch/arm64/kvm/mmu.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 0f51585adc04..81cbdc368246 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -793,12 +793,20 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		vma_shift = PMD_SHIFT;
>  #endif
>  
> +	if (vma_shift == CONT_PMD_SHIFT)
> +		vma_shift = PMD_SHIFT;
> +
>  	if (vma_shift == PMD_SHIFT &&
>  	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
>  		force_pte = true;
>  		vma_shift = PAGE_SHIFT;
>  	}
>  
> +	if (vma_shift == CONT_PTE_SHIFT) {
> +		force_pte = true;
> +		vma_shift = PAGE_SHIFT;
> +	}
> +
>  	vma_pagesize = 1UL << vma_shift;
>  	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
>  		fault_ipa &= ~(vma_pagesize - 1);

Yup, nice catch. However, I think we should take this opportunity to
rationalise the logic here, and catch future discrepancies (should
someone add contiguous PUD or something similarly silly). How about
something like this (untested):

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index cc323d96c9d4..d9a13a8a82e0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -787,14 +787,31 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = PAGE_SHIFT;
 	}
 
-	if (vma_shift == PUD_SHIFT &&
-	    !fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
-	       vma_shift = PMD_SHIFT;
+	switch (vma_shift) {
+	case PUD_SHIFT:
+		if (fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
+			break;
+		fallthrough;
 
-	if (vma_shift == PMD_SHIFT &&
-	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
-		force_pte = true;
+	case CONT_PMD_SHIFT:
+		vma_shift = PMD_SHIFT;
+		fallthrough;
+
+	case PMD_SHIFT:
+		if (fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE))
+			break;
+		fallthrough;
+
+	case CONT_PTE_SHIFT:
 		vma_shift = PAGE_SHIFT;
+		force_pte = true;
+		fallthrough;
+
+	case PAGE_SHIFT:
+		break;
+
+	default:
+		WARN_ONCE(1, "Unknown vma_shift %d", vma_shift);
 	}
 
 	vma_pagesize = 1UL << vma_shift;
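
For illustration, here is a minimal standalone sketch of how the
fallthrough chain cascades each unsupported shift down to the next
supported one. The shift values are the usual 4K-granule ones but are
hard-coded here for the example, mapping_supported() is a made-up
stand-in for fault_supports_stage2_huge_mapping(), and force_pte is
left out:

#include <stdio.h>

/* Illustrative 4K-granule values; not taken from the kernel headers. */
#define PAGE_SHIFT	12
#define CONT_PTE_SHIFT	16	/* 16 contiguous 4K pages = 64K */
#define PMD_SHIFT	21
#define CONT_PMD_SHIFT	25	/* 16 contiguous 2M blocks = 32M */
#define PUD_SHIFT	30

/* Stand-in for fault_supports_stage2_huge_mapping(): pretend only
 * PMD-sized mappings fit this particular fault. */
static int mapping_supported(int shift)
{
	return shift == PMD_SHIFT;
}

static int fall_back(int vma_shift)
{
	switch (vma_shift) {
	case PUD_SHIFT:
		if (mapping_supported(PUD_SHIFT))
			break;
		/* fall through */
	case CONT_PMD_SHIFT:
		vma_shift = PMD_SHIFT;
		/* fall through */
	case PMD_SHIFT:
		if (mapping_supported(PMD_SHIFT))
			break;
		/* fall through */
	case CONT_PTE_SHIFT:
		vma_shift = PAGE_SHIFT;
		/* fall through */
	case PAGE_SHIFT:
		break;
	default:
		fprintf(stderr, "Unknown vma_shift %d\n", vma_shift);
	}
	return vma_shift;
}

int main(void)
{
	printf("CONT_PMD_SHIFT -> %d\n", fall_back(CONT_PMD_SHIFT)); /* 21, i.e. PMD */
	printf("CONT_PTE_SHIFT -> %d\n", fall_back(CONT_PTE_SHIFT)); /* 12, i.e. PAGE */
	return 0;
}

With mapping_supported() rejecting PUD-sized mappings, a PUD_SHIFT
input would likewise cascade all the way down to PMD_SHIFT, which is
exactly the behaviour the switch statement is meant to guarantee for
any future shift that gets added to the chain.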


Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

Thread overview:
2020-10-25  0:27 [PATCH 0/3] KVM: arm64: Failback on unsupported huge pages Gavin Shan
2020-10-25  0:27 ` [PATCH 1/3] KVM: arm64: Check if 52-bits PA is enabled Gavin Shan
2020-10-25  9:52   ` Marc Zyngier
2020-10-25 22:23     ` Gavin Shan
2020-10-26  8:40       ` Will Deacon
2020-10-26  8:53       ` Marc Zyngier
2020-10-26 22:48         ` Gavin Shan
2020-10-25  0:27 ` [PATCH 2/3] KVM: arm64: Don't map PUD huge page if it's not available Gavin Shan
2020-10-25 10:05   ` Marc Zyngier
2020-10-25 22:27     ` Gavin Shan
2020-10-25  0:27 ` [PATCH 3/3] KVM: arm64: Failback on unsupported huge page sizes Gavin Shan
2020-10-25 10:48   ` Marc Zyngier [this message]
2020-10-25 23:04     ` Gavin Shan
2020-10-26  8:55       ` Marc Zyngier
