From: Alexandru Elisei <alexandru.elisei@arm.com>
To: Haibo Xu <haibo.xu@linaro.org>
Cc: maz@kernel.org, will@kernel.org, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [RFC PATCH v3 09/16] KVM: arm64: Use separate function for the mapping size in user_mem_abort()
Date: Wed, 2 Dec 2020 16:29:00 +0000	[thread overview]
Message-ID: <81782b0e-cb7a-3f76-3579-ad0fddf37638@arm.com> (raw)
In-Reply-To: <CAJc+Z1F1aEU=-qiCdDD8GhS3bhFvuhgPqtqTbBHF3RZgwznHgA@mail.gmail.com>

Hi Haibo,

On 11/5/20 10:01 AM, Haibo Xu wrote:
> On Wed, 28 Oct 2020 at 01:26, Alexandru Elisei <alexandru.elisei@arm.com> wrote:
>> user_mem_abort() is already a long and complex function; let's make it
>> slightly easier to understand by abstracting the algorithm for choosing the
>> stage 2 IPA entry size into its own function.
>>
>> This also makes it possible to reuse the code when guest SPE support will
>> be added.
>>
> It would be better to mention that there is no functional change!

That's a good point, I'll add it.

Thanks,

Alex

>
>> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
>> ---
>>  arch/arm64/kvm/mmu.c | 55 ++++++++++++++++++++++++++------------------
>>  1 file changed, 33 insertions(+), 22 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 19aacc7d64de..c3c43555490d 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -738,12 +738,43 @@ transparent_hugepage_adjust(struct kvm_memory_slot *memslot,
>>         return PAGE_SIZE;
>>  }
>>
>> +static short stage2_max_pageshift(struct kvm_memory_slot *memslot,
>> +                                 struct vm_area_struct *vma, hva_t hva,
>> +                                 bool *force_pte)
>> +{
>> +       short pageshift;
>> +
>> +       *force_pte = false;
>> +
>> +       if (is_vm_hugetlb_page(vma))
>> +               pageshift = huge_page_shift(hstate_vma(vma));
>> +       else
>> +               pageshift = PAGE_SHIFT;
>> +
>> +       if (memslot_is_logging(memslot) || (vma->vm_flags & VM_PFNMAP)) {
>> +               *force_pte = true;
>> +               pageshift = PAGE_SHIFT;
>> +       }
>> +
>> +       if (pageshift == PUD_SHIFT &&
>> +           !fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
>> +               pageshift = PMD_SHIFT;
>> +
>> +       if (pageshift == PMD_SHIFT &&
>> +           !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
>> +               *force_pte = true;
>> +               pageshift = PAGE_SHIFT;
>> +       }
>> +
>> +       return pageshift;
>> +}
>> +
>>  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>                           struct kvm_memory_slot *memslot, unsigned long hva,
>>                           unsigned long fault_status)
>>  {
>>         int ret = 0;
>> -       bool write_fault, writable, force_pte = false;
>> +       bool write_fault, writable, force_pte;
>>         bool exec_fault;
>>         bool device = false;
>>         unsigned long mmu_seq;
>> @@ -776,27 +807,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>                 return -EFAULT;
>>         }
>>
>> -       if (is_vm_hugetlb_page(vma))
>> -               vma_shift = huge_page_shift(hstate_vma(vma));
>> -       else
>> -               vma_shift = PAGE_SHIFT;
>> -
>> -       if (logging_active ||
>> -           (vma->vm_flags & VM_PFNMAP)) {
>> -               force_pte = true;
>> -               vma_shift = PAGE_SHIFT;
>> -       }
>> -
>> -       if (vma_shift == PUD_SHIFT &&
>> -           !fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
>> -              vma_shift = PMD_SHIFT;
>> -
>> -       if (vma_shift == PMD_SHIFT &&
>> -           !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
>> -               force_pte = true;
>> -               vma_shift = PAGE_SHIFT;
>> -       }
>> -
>> +       vma_shift = stage2_max_pageshift(memslot, vma, hva, &force_pte);
>>         vma_pagesize = 1UL << vma_shift;
>>         if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
>>                 fault_ipa &= ~(vma_pagesize - 1);
>> --
>> 2.29.1
>>
>> _______________________________________________
>> kvmarm mailing list
>> kvmarm@lists.cs.columbia.edu
>> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm