From: Gavin Shan <gshan@redhat.com>
To: Keqian Zhu <zhukeqian1@huawei.com>,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
	kvmarm@lists.cs.columbia.edu
Cc: Marc Zyngier <maz@kernel.org>, wanghaibin.wang@huawei.com
Subject: Re: [PATCH v4 2/2] kvm/arm64: Try stage2 block mapping for host device MMIO
Date: Thu, 22 Apr 2021 12:25:23 +1000
Message-ID: <46606f3e-ef41-6520-6647-88c0f76a83e0@redhat.com>
In-Reply-To: <105a403a-e48b-15bc-44ff-0ff34f7d2194@huawei.com>

Hi Keqian,

On 4/21/21 4:36 PM, Keqian Zhu wrote:
> On 2021/4/21 15:52, Gavin Shan wrote:
>> On 4/16/21 12:03 AM, Keqian Zhu wrote:
>>> The MMIO region of a device may be huge (GB level), so try to use
>>> block mapping in stage2 to speed up both map and unmap.
>>>
>>> Compared to normal memory mapping, we should consider two more
>>> points when trying block mapping for an MMIO region:
>>>
>>> 1. For normal memory mapping, the PA (host physical address) and
>>> HVA have the same alignment within PUD_SIZE or PMD_SIZE when we use
>>> the HVA to request a hugepage, so we don't need to consider PA
>>> alignment when verifying block mapping. But for device memory
>>> mapping, the PA and HVA may have different alignment.
>>>
>>> 2. For normal memory mapping, we are sure the hugepage size properly
>>> fits into the vma, so we don't check whether the mapping size exceeds
>>> the boundary of the vma. But for device memory mapping, we should pay
>>> attention to this.
>>>
>>> This adds get_vma_page_shift() to get the page shift for both normal
>>> memory and device MMIO regions, and checks these two points when
>>> selecting the block mapping size for an MMIO region.
>>>
>>> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
>>> ---
>>>    arch/arm64/kvm/mmu.c | 61 ++++++++++++++++++++++++++++++++++++--------
>>>    1 file changed, 51 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>>> index c59af5ca01b0..5a1cc7751e6d 100644
>>> --- a/arch/arm64/kvm/mmu.c
>>> +++ b/arch/arm64/kvm/mmu.c
>>> @@ -738,6 +738,35 @@ transparent_hugepage_adjust(struct kvm_memory_slot *memslot,
>>>        return PAGE_SIZE;
>>>    }
>>>
>>> +static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
>>> +{
>>> +    unsigned long pa;
>>> +
>>> +    if (is_vm_hugetlb_page(vma) && !(vma->vm_flags & VM_PFNMAP))
>>> +        return huge_page_shift(hstate_vma(vma));
>>> +
>>> +    if (!(vma->vm_flags & VM_PFNMAP))
>>> +        return PAGE_SHIFT;
>>> +
>>> +    VM_BUG_ON(is_vm_hugetlb_page(vma));
>>> +
>>
>> I don't understand how VM_PFNMAP could be set for a hugetlbfs-related
>> vma. I think they are mutually exclusive, meaning the flag is never set
>> for a hugetlbfs vma. If that's true, VM_PFNMAP needn't be checked on a
>> hugetlbfs vma and the VM_BUG_ON() becomes unnecessary.
> Yes, but we're not sure all drivers follow this rule. Adding a BUG_ON() is
> a way to catch the issue.
> 

I think I didn't make things clear. What I meant is that VM_PFNMAP can't
be set for hugetlbfs VMAs, so the checks here can be simplified as
below, if you agree:

     if (is_vm_hugetlb_page(vma))
         return huge_page_shift(hstate_vma(vma));

     if (!(vma->vm_flags & VM_PFNMAP))
         return PAGE_SHIFT;

     VM_BUG_ON(is_vm_hugetlb_page(vma));       /* Can be dropped */
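
As an aside, here is a contrived example (hypothetical numbers, assuming
4K pages and PMD_SIZE = 2MB) of what the PA/HVA alignment check quoted
below guards against for PFNMAP VMAs: the hva can be block aligned while
the pa is not, in which case the block mapping has to be refused:

     hva      = 0x40200000;              /* PMD (2MB) aligned    */
     vm_start = 0x40000000;
     vm_pgoff = 0x80100;                 /* PA base = 0x80100000 */

     pa = (vm_pgoff << PAGE_SHIFT) + (hva - vm_start)
        = 0x80100000 + 0x200000
        = 0x80300000;

     (hva & (PMD_SIZE - 1)) == 0, but (pa & (PMD_SIZE - 1)) == 0x100000,
     so the offsets differ and get_vma_page_shift() correctly falls back
     to PAGE_SHIFT.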

>>
>>> +    pa = (vma->vm_pgoff << PAGE_SHIFT) + (hva - vma->vm_start);
>>> +
>>> +#ifndef __PAGETABLE_PMD_FOLDED
>>> +    if ((hva & (PUD_SIZE - 1)) == (pa & (PUD_SIZE - 1)) &&
>>> +        ALIGN_DOWN(hva, PUD_SIZE) >= vma->vm_start &&
>>> +        ALIGN(hva, PUD_SIZE) <= vma->vm_end)
>>> +        return PUD_SHIFT;
>>> +#endif
>>> +
>>> +    if ((hva & (PMD_SIZE - 1)) == (pa & (PMD_SIZE - 1)) &&
>>> +        ALIGN_DOWN(hva, PMD_SIZE) >= vma->vm_start &&
>>> +        ALIGN(hva, PMD_SIZE) <= vma->vm_end)
>>> +        return PMD_SHIFT;
>>> +
>>> +    return PAGE_SHIFT;
>>> +}
>>> +
>>
>> There is a "switch(...)" fallback mechanism in user_mem_abort(): PUD_SIZE/PMD_SIZE
>> can be downgraded accordingly if the addresses fail the alignment check
>> in fault_supports_stage2_huge_mapping(). I think it would make user_mem_abort()
>> simpler if that logic were moved into get_vma_page_shift().
>>
>> Another question is whether we still need the check from fault_supports_stage2_huge_mapping()
>> if a VM_PFNMAP area is going to be covered by a block mapping. If so, the "switch(...)"
>> fallback mechanism needs to be part of get_vma_page_shift().
> Yes, good suggestion. My idea is that we can keep this series simpler and do further
> optimization in another patch series. Do you mind sending a patch?
> 

Yeah, it's fine to keep this series as it is. There are three checks applied
here for the MMIO region (hva/hpa/ipa), and they're distributed across two
functions, making the code a bit hard to follow. I can post a patch to
simplify it after your series gets merged :)
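
To give a rough idea of what I have in mind, the consolidated helper could
look something like the below. This is an untested sketch: it assumes
fault_supports_stage2_huge_mapping() keeps its current
(memslot, hva, map_size) signature, so that the hva/ipa check folds into
the same fallback that already handles the hva/hpa alignment and the vma
boundary:

     static int get_vma_page_shift(struct kvm_memory_slot *memslot,
                                   struct vm_area_struct *vma,
                                   unsigned long hva)
     {
         unsigned long pa;

         if (is_vm_hugetlb_page(vma))
             return huge_page_shift(hstate_vma(vma));

         if (!(vma->vm_flags & VM_PFNMAP))
             return PAGE_SHIFT;

         pa = (vma->vm_pgoff << PAGE_SHIFT) + (hva - vma->vm_start);

     #ifndef __PAGETABLE_PMD_FOLDED
         /*
          * hva/hpa alignment, vma boundary and memslot (ipa) checks all
          * in one place, falling through PUD -> PMD -> PAGE.
          */
         if ((hva & (PUD_SIZE - 1)) == (pa & (PUD_SIZE - 1)) &&
             ALIGN_DOWN(hva, PUD_SIZE) >= vma->vm_start &&
             ALIGN(hva, PUD_SIZE) <= vma->vm_end &&
             fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
             return PUD_SHIFT;
     #endif

         if ((hva & (PMD_SIZE - 1)) == (pa & (PMD_SIZE - 1)) &&
             ALIGN_DOWN(hva, PMD_SIZE) >= vma->vm_start &&
             ALIGN(hva, PMD_SIZE) <= vma->vm_end &&
             fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE))
             return PMD_SHIFT;

         return PAGE_SHIFT;
     }

With something like that, user_mem_abort() could take the returned shift
at face value and drop the "switch(...)" downgrade for the PFNMAP case.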

Thanks,
Gavin



Thread overview: 19+ messages
2021-04-15 14:03 [PATCH v4 0/2] kvm/arm64: Try stage2 block mapping for host device MMIO Keqian Zhu
2021-04-15 14:03 ` [PATCH v4 1/2] kvm/arm64: Remove the creation time's mapping of MMIO regions Keqian Zhu
2021-04-21  6:38   ` Gavin Shan
2021-04-21  6:28     ` Keqian Zhu
2021-04-22  2:12       ` Gavin Shan
2021-04-22  7:41         ` Keqian Zhu
2021-04-23  1:35           ` Gavin Shan
2021-04-23  1:36             ` Keqian Zhu
2021-04-23  0:36   ` Gavin Shan
2021-04-15 14:03 ` [PATCH v4 2/2] kvm/arm64: Try stage2 block mapping for host device MMIO Keqian Zhu
2021-04-15 14:08   ` Keqian Zhu
2021-04-16 14:44     ` Marc Zyngier
2021-04-17  1:05       ` Keqian Zhu
2021-04-21  7:52   ` Gavin Shan
2021-04-21  6:36     ` Keqian Zhu
2021-04-22  2:25       ` Gavin Shan [this message]
2021-04-22  6:51         ` Marc Zyngier
2021-04-23  0:42           ` Gavin Shan
2021-04-23  0:37   ` Gavin Shan
