From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v4 2/2] kvm/arm64: Try stage2 block mapping for host device MMIO
From: Keqian Zhu
To: Gavin Shan
Cc: Marc Zyngier
Date: Wed, 21 Apr 2021 14:36:55 +0800
Message-ID: <105a403a-e48b-15bc-44ff-0ff34f7d2194@huawei.com>
In-Reply-To: <960e097d-818b-00bc-b2ee-0da17857f862@redhat.com>
References: <20210415140328.24200-1-zhukeqian1@huawei.com>
 <20210415140328.24200-3-zhukeqian1@huawei.com>
 <960e097d-818b-00bc-b2ee-0da17857f862@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

On 2021/4/21 15:52, Gavin Shan wrote:
> Hi Keqian,
>
> On 4/16/21 12:03 AM, Keqian Zhu wrote:
>> The MMIO region of a device may be huge (GB level), so try to use
>> block mapping in stage2 to speed up both map and unmap.
>>
>> Compared to normal memory mapping, we should consider two more
>> points when trying block mapping for an MMIO region:
>>
>> 1. For normal memory mapping, the PA (host physical address) and
>> HVA have the same alignment within PUD_SIZE or PMD_SIZE when we use
>> the HVA to request a hugepage, so we don't need to consider PA
>> alignment when verifying block mapping. But for device memory
>> mapping, the PA and HVA may have different alignment.
>>
>> 2. For normal memory mapping, we are sure the hugepage size properly
>> fits into the vma, so we don't check whether the mapping size exceeds
>> the boundary of the vma. But for device memory mapping, we should pay
>> attention to this.
>>
>> This adds get_vma_page_shift() to get the page shift for both normal
>> memory and device MMIO regions, and checks these two points when
>> selecting the block mapping size for an MMIO region.
>>
>> Signed-off-by: Keqian Zhu
>> ---
>>  arch/arm64/kvm/mmu.c | 61 ++++++++++++++++++++++++++++++++++++--------
>>  1 file changed, 51 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index c59af5ca01b0..5a1cc7751e6d 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -738,6 +738,35 @@ transparent_hugepage_adjust(struct kvm_memory_slot *memslot,
>>  	return PAGE_SIZE;
>>  }
>> +
>> +static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
>> +{
>> +	unsigned long pa;
>> +
>> +	if (is_vm_hugetlb_page(vma) && !(vma->vm_flags & VM_PFNMAP))
>> +		return huge_page_shift(hstate_vma(vma));
>> +
>> +	if (!(vma->vm_flags & VM_PFNMAP))
>> +		return PAGE_SHIFT;
>> +
>> +	VM_BUG_ON(is_vm_hugetlb_page(vma));
>> +
>
> I don't understand how VM_PFNMAP is set for a hugetlbfs related vma.
> I think they are exclusive, meaning the flag is never set for a
> hugetlbfs vma. If that's true, VM_PFNMAP needn't be checked on a hugetlbfs
> vma and the VM_BUG_ON() becomes unnecessary.

Yes, but we're not sure all drivers follow this rule. Adding a BUG_ON() is
a way to catch the issue.

>
>> +	pa = (vma->vm_pgoff << PAGE_SHIFT) + (hva - vma->vm_start);
>> +
>> +#ifndef __PAGETABLE_PMD_FOLDED
>> +	if ((hva & (PUD_SIZE - 1)) == (pa & (PUD_SIZE - 1)) &&
>> +	    ALIGN_DOWN(hva, PUD_SIZE) >= vma->vm_start &&
>> +	    ALIGN(hva, PUD_SIZE) <= vma->vm_end)
>> +		return PUD_SHIFT;
>> +#endif
>> +
>> +	if ((hva & (PMD_SIZE - 1)) == (pa & (PMD_SIZE - 1)) &&
>> +	    ALIGN_DOWN(hva, PMD_SIZE) >= vma->vm_start &&
>> +	    ALIGN(hva, PMD_SIZE) <= vma->vm_end)
>> +		return PMD_SHIFT;
>> +
>> +	return PAGE_SHIFT;
>> +}
>> +
>
> There is a "switch(...)" fallback mechanism in user_mem_abort(). PUD_SIZE/PMD_SIZE
> can be downgraded accordingly if the addresses fail the alignment check
> done by fault_supports_stage2_huge_mapping(). I think it would make
> user_mem_abort() simpler if that logic could be moved to get_vma_page_shift().
>
> Another question is whether we need the check from fault_supports_stage2_huge_mapping()
> if a VM_PFNMAP area is going to be covered by block mapping. If so, the "switch(...)"
> fallback mechanism needs to be part of get_vma_page_shift().

Yes, good suggestion. My idea is that we can keep this series simpler and do
further optimization in another patch series. Do you mind sending a patch?
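For reference only, a rough and untested sketch of what folding the memslot
boundary check into the VM_PFNMAP path could look like; it assumes
fault_supports_stage2_huge_mapping(memslot, hva, size) keeps its current
meaning in mmu.c, and the hugetlb path would still need the same treatment:

/*
 * Untested sketch, not part of this patch: apply the memslot boundary
 * check directly when picking the shift for a VM_PFNMAP vma, so that
 * user_mem_abort() would not need the switch() fallback for this case.
 */
static int get_vma_page_shift(struct kvm_memory_slot *memslot,
			      struct vm_area_struct *vma, unsigned long hva)
{
	unsigned long pa;

	if (is_vm_hugetlb_page(vma) && !(vma->vm_flags & VM_PFNMAP))
		return huge_page_shift(hstate_vma(vma));

	if (!(vma->vm_flags & VM_PFNMAP))
		return PAGE_SHIFT;

	VM_BUG_ON(is_vm_hugetlb_page(vma));

	pa = (vma->vm_pgoff << PAGE_SHIFT) + (hva - vma->vm_start);

#ifndef __PAGETABLE_PMD_FOLDED
	/* PA/HVA congruent mod PUD_SIZE, and inside both the vma and the memslot */
	if ((hva & (PUD_SIZE - 1)) == (pa & (PUD_SIZE - 1)) &&
	    ALIGN_DOWN(hva, PUD_SIZE) >= vma->vm_start &&
	    ALIGN(hva, PUD_SIZE) <= vma->vm_end &&
	    fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
		return PUD_SHIFT;
#endif

	/* Otherwise try PMD_SIZE under the same constraints */
	if ((hva & (PMD_SIZE - 1)) == (pa & (PMD_SIZE - 1)) &&
	    ALIGN_DOWN(hva, PMD_SIZE) >= vma->vm_start &&
	    ALIGN(hva, PMD_SIZE) <= vma->vm_end &&
	    fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE))
		return PMD_SHIFT;

	return PAGE_SHIFT;
}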
Thanks,
Keqian