Subject: Re: [PATCH v2 5/6] KVM: arm64: Use get_page() instead of kvm_get_pfn()
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: Marc Zyngier <maz@kernel.org>, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-mm@kvack.org
Cc: Sean Christopherson, Matthew Wilcox, Paolo Bonzini, Will Deacon, Quentin Perret, James Morse, Suzuki K Poulose, kernel-team@android.com
Date: Tue, 27 Jul 2021 18:46:27 +0100
Message-ID: <21cf5bb7-e70c-345b-be9e-ea009823c255@arm.com>
In-Reply-To: <20210726153552.1535838-6-maz@kernel.org>
References: <20210726153552.1535838-1-maz@kernel.org> <20210726153552.1535838-6-maz@kernel.org>

Hi Marc,

On 7/26/21 4:35 PM, Marc Zyngier wrote:
> When mapping a THP, we are guaranteed that the page isn't reserved,
> and we can safely avoid the kvm_is_reserved_pfn() call.
>
> Replace kvm_get_pfn() with get_page(pfn_to_page()).
>
> Signed-off-by: Marc Zyngier
> ---
>  arch/arm64/kvm/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index ebb28dd4f2c9..b303aa143592 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -840,7 +840,7 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
>  		*ipap &= PMD_MASK;
>  		kvm_release_pfn_clean(pfn);
>  		pfn &= ~(PTRS_PER_PMD - 1);
> -		kvm_get_pfn(pfn);
> +		get_page(pfn_to_page(pfn));
>  		*pfnp = pfn;
>
>  		return PMD_SIZE;

I am not very familiar with the mm subsystem, but I did my best to review this change.

kvm_get_pfn() calls get_page(pfn_to_page(pfn)) only if !PageReserved(pfn_to_page(pfn)). I looked at the documentation for the PG_reserved page flag, and for normal memory the most likely situation where that flag could be set on a transparent hugepage looked to me like the zero page. Looking at mm/huge_memory.c, huge_zero_pfn is allocated via alloc_pages() with __GFP_ZERO (among other flags), which does not call SetPageReserved().

I also looked at how a huge page can be mapped from handle_mm_fault() and from khugepaged, and both appear to use alloc_pages() to allocate the new hugepage. I also did a grep for SetPageReserved(), and there are very few places where it is called, none of which looked like they have anything to do with hugepages.

As far as I can tell, this change is correct, but I think someone who is familiar with mm would be better suited to review this patch.