From: Zenghui Yu <yuzenghui@huawei.com>
To: Suzuki K Poulose <suzuki.poulose@arm.com>, marc.zyngier@arm.com, zenghuiyu96@gmail.com
Cc: christoffer.dall@arm.com, punit.agrawal@arm.com, julien.thierry@arm.com, linux-kernel@vger.kernel.org, james.morse@arm.com, wanghaibin.wang@huawei.com, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: Re: [RFC PATCH] KVM: arm64: Force a PTE mapping when logging is enabled
Date: Tue, 5 Mar 2019 19:32:35 +0800
Message-ID: <2b60806d-77cd-98a2-e9b7-f2393f2592f7@huawei.com>
References: <1551497728-12576-1-git-send-email-yuzenghui@huawei.com>
 <20190304171320.GA3984@en101>
 <32f302eb-ef89-7de4-36b4-3c3df907c732@arm.com>
 <865b8b0b-e42e-fe03-e3b4-ae2cc5b1b424@huawei.com>

On 2019/3/5 19:13, Suzuki K Poulose wrote:
> Hi Zenghui,
>
> On 05/03/2019 11:09, Zenghui Yu wrote:
>> Hi Marc, Suzuki,
>>
>> On 2019/3/5 1:34, Marc Zyngier wrote:
>>> Hi Zenghui, Suzuki,
>>>
>>> On 04/03/2019 17:13, Suzuki K Poulose wrote:
>>>> Hi Zenghui,
>>>>
>>>> On Sun, Mar 03, 2019 at 11:14:38PM +0800, Zenghui Yu wrote:
>>>>> I think there are still some problems in this patch... Details below.
>>>>>
>>>>> On Sat, Mar 2, 2019 at 11:39 AM Zenghui Yu <yuzenghui@huawei.com> wrote:
>>>>>>
>>>>>> The idea behind this is: we don't want to keep track of huge pages when
>>>>>> logging_active is true, which would result in performance degradation.
>>>>>> We still need to set vma_pagesize to PAGE_SIZE, so that we can make use
>>>>>> of it to force a PTE mapping.
>>>>
>>>> Yes, you're right. We are indeed ignoring the force_pte flag.
>>>>
>>>>>>
>>>>>> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
>>>>>> Cc: Punit Agrawal <punit.agrawal@arm.com>
>>>>>> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
>>>>>>
>>>>>> ---
>>>>>> After looking into https://patchwork.codeaurora.org/patch/647985/ , the
>>>>>> "vma_pagesize = PAGE_SIZE" logic was not intended to be deleted. As far
>>>>>> as I can tell, we used to have "hugetlb" to force the PTE mapping, but
>>>>>> we have "vma_pagesize" currently instead.
>>>>>> We should set it properly for
>>>>>> performance reasons (e.g., in VM migration). Did I miss something
>>>>>> important?
>>>>>>
>>>>>> ---
>>>>>>  virt/kvm/arm/mmu.c | 7 +++++++
>>>>>>  1 file changed, 7 insertions(+)
>>>>>>
>>>>>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>>>>>> index 30251e2..7d41b16 100644
>>>>>> --- a/virt/kvm/arm/mmu.c
>>>>>> +++ b/virt/kvm/arm/mmu.c
>>>>>> @@ -1705,6 +1705,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>>>>              (vma_pagesize == PUD_SIZE && kvm_stage2_has_pmd(kvm))) &&
>>>>>>             !force_pte) {
>>>>>>                 gfn = (fault_ipa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;
>>>>>> +       } else {
>>>>>> +               /*
>>>>>> +                * Fallback to PTE if it's not one of the stage2
>>>>>> +                * supported hugepage sizes or the corresponding level
>>>>>> +                * doesn't exist, or logging is enabled.
>>>>>
>>>>> First, instead of "logging is enabled", it should be "force_pte is true",
>>>>> since "force_pte" will be true when:
>>>>>
>>>>>         1) fault_supports_stage2_pmd_mappings() returns false; or
>>>>>         2) logging is enabled (e.g., in VM migration).
>>>>>
>>>>> Second, falling back to PTE for some unsupported hugepage sizes (e.g., a
>>>>> 64K hugepage with 4K pages) is somewhat strange. And it will then
>>>>> _unexpectedly_ reach transparent_hugepage_adjust(), though no real
>>>>> adjustment will happen since commit fd2ef358282c ("KVM: arm/arm64:
>>>>> Ensure only THP is candidate for adjustment"). Keeping "vma_pagesize"
>>>>> there as it is would be better, right?
>>>>>
>>>>> So I'd just simplify the logic like:
>>>>
>>>> We could fix this right in the beginning. See patch below:
>>>>
>>>>>
>>>>>         } else if (force_pte) {
>>>>>                 vma_pagesize = PAGE_SIZE;
>>>>>         }
>>>>>
>>>>>
>>>>> Will send a V2 later and wait for your comments :)
>>>>
>>>>
>>>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>>>> index 30251e2..529331e 100644
>>>> --- a/virt/kvm/arm/mmu.c
>>>> +++ b/virt/kvm/arm/mmu.c
>>>> @@ -1693,7 +1693,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>>          return -EFAULT;
>>>>      }
>>>> -    vma_pagesize = vma_kernel_pagesize(vma);
>>>> +    /* If we are forced to map at page granularity, force the pagesize here */
>>>> +    vma_pagesize = force_pte ? PAGE_SIZE : vma_kernel_pagesize(vma);
>>>> +
>>>>      /*
>>>>       * The stage2 has a minimum of 2 level table (For arm64 see
>>>>       * kvm_arm_setup_stage2()). Hence, we are guaranteed that we can
>>>> @@ -1701,11 +1703,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>>       * As for PUD huge maps, we must make sure that we have at least
>>>>       * 3 levels, i.e, PMD is not folded.
>>>>       */
>>>> -    if ((vma_pagesize == PMD_SIZE ||
>>>> -         (vma_pagesize == PUD_SIZE && kvm_stage2_has_pmd(kvm))) &&
>>>> -        !force_pte) {
>>>> +    if (vma_pagesize == PMD_SIZE ||
>>>> +        (vma_pagesize == PUD_SIZE && kvm_stage2_has_pmd(kvm)))
>>>>          gfn = (fault_ipa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;
>>>> -    }
>>>> +
>>>>      up_read(&current->mm->mmap_sem);
>>>>      /* We need minimum second+third level pages */
>>
>> A nicer implementation and easier to understand, thanks!
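Just to double-check my own understanding, the mapping-size decision with
your change applied would look roughly like the sketch below. This is
simplified pseudo-C rather than the actual mmu.c code: the surrounding
fault handling and the THP adjustment are omitted, and the force_pte
computation is paraphrased from memslot_is_logging() and
fault_supports_stage2_pmd_mappings().

	/*
	 * Sketch only, not the real user_mem_abort(): error paths and
	 * transparent_hugepage_adjust() are left out.
	 */
	bool force_pte = memslot_is_logging(memslot) ||
			 !fault_supports_stage2_pmd_mappings(memslot, hva);

	/* Forced to map at page granularity? Fix the size up front. */
	vma_pagesize = force_pte ? PAGE_SIZE : vma_kernel_pagesize(vma);

	/*
	 * Take the hugepage path only for sizes stage2 can really use:
	 * PMD_SIZE always, PUD_SIZE only when the PMD level is not folded.
	 * Everything else (including the force_pte case) gets mapped with
	 * the vma_pagesize computed above, i.e. PAGE_SIZE when forced.
	 */
	if (vma_pagesize == PMD_SIZE ||
	    (vma_pagesize == PUD_SIZE && kvm_stage2_has_pmd(kvm)))
		gfn = (fault_ipa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;

If I read it right, force_pte now only influences the single assignment
at the top, which is exactly what makes the later condition collapsible.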
>>
>>> That's pretty interesting, because this is almost what we already have
>>> in the NV code:
>>>
>>> https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/tree/virt/kvm/arm/mmu.c?h=kvm-arm64/nv-wip-v5.0-rc7#n1752
>>>
>>> (note that force_pte is gone in that branch).
>>
>> haha :-) sorry about that. I haven't looked into the NV code yet, so ...
>>
>> But I'm still wondering: should we fix this wrong mapping size problem
>> before NV is introduced? Since this problem doesn't have much to do with
>> NV, and 5.0 has already been released with it (and 5.1 will be released
>> without a fix ...).
>
> Yes, we must fix it. I will soon send out a patch, copying you on it.
> It's just that I found some more issues around forcing the PTE
> mappings with PUD huge pages. I will send something out soon.

Sounds good!


zenghui