From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, "Matthew Wilcox (Oracle)", "Yin, Fengwei", Yu Zhao
Cc: Ryan Roberts, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 3/6] mm: Introduce try_vma_alloc_zeroed_movable_folio()
Date: Fri, 17 Mar 2023 10:57:59 +0000
Message-Id: <20230317105802.2634004-4-ryan.roberts@arm.com>
In-Reply-To: <20230317105802.2634004-1-ryan.roberts@arm.com>
References: <20230317105802.2634004-1-ryan.roberts@arm.com>
Like vma_alloc_zeroed_movable_folio(), except it will opportunistically
attempt to allocate high-order folios, retrying with lower orders all the
way down to order-0 until one succeeds. The caller must check what they got
with folio_order().

This will be used to opportunistically allocate large folios for anonymous
memory, with a sensible fallback under memory pressure.

For attempts to allocate non-0 orders, we set __GFP_NORETRY to prevent high
latency due to reclaim, instead preferring to just try for a lower order.
The same approach is used by the readahead code when allocating large
folios.
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/memory.c | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 8798da968686..c9e09415ee18 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3024,6 +3024,27 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 	count_vm_event(PGREUSE);
 }
 
+/*
+ * Opportunistically attempt to allocate high-order folios, retrying with lower
+ * orders all the way to order-0, until success. The user must check what they
+ * got with folio_order().
+ */
+static struct folio *try_vma_alloc_zeroed_movable_folio(
+					struct vm_area_struct *vma,
+					unsigned long vaddr, int order)
+{
+	struct folio *folio;
+	gfp_t gfp = __GFP_NORETRY | __GFP_NOWARN;
+
+	for (; order > 0; order--) {
+		folio = vma_alloc_zeroed_movable_folio(vma, vaddr, gfp, order);
+		if (folio)
+			return folio;
+	}
+
+	return vma_alloc_zeroed_movable_folio(vma, vaddr, 0, 0);
+}
+
 /*
  * Handle the case of a page which we actually need to copy to a new page,
  * either due to COW or unsharing.
@@ -3061,8 +3082,8 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		goto oom;
 
 	if (is_zero_pfn(pte_pfn(vmf->orig_pte))) {
-		new_folio = vma_alloc_zeroed_movable_folio(vma, vmf->address,
-							   0, 0);
+		new_folio = try_vma_alloc_zeroed_movable_folio(vma,
+						vmf->address, 0);
 		if (!new_folio)
 			goto oom;
 	} else {
@@ -4050,7 +4071,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	/* Allocate our own private page. */
 	if (unlikely(anon_vma_prepare(vma)))
 		goto oom;
-	folio = vma_alloc_zeroed_movable_folio(vma, vmf->address, 0, 0);
+	folio = try_vma_alloc_zeroed_movable_folio(vma, vmf->address, 0);
 	if (!folio)
 		goto oom;
-- 
2.25.1