From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Chuck Lever, Jesper Dangaard Brouer, Christoph Hellwig,
 Alexander Duyck, Matthew Wilcox, LKML, Linux-Net, Linux-MM, Linux-NFS,
 Mel Gorman
Subject: [PATCH 0/7 v4] Introduce a bulk order-0 page allocator with two in-tree users
Date: Fri, 12 Mar 2021 15:43:24 +0000
Message-Id: <20210312154331.32229-1-mgorman@techsingularity.net>

This series is based on top of Matthew Wilcox's series "Rationalise
__alloc_pages wrapper" and does not apply to 5.12-rc2. If you want to
test and are not using Andrew's tree as a baseline, I suggest using the
following git tree

git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v4r2

Note to Chuck and Jesper -- as this is a cross-subsystem series, you may
want to send the sunrpc and page_pool pre-requisites (patches 4 and 6)
directly to the subsystem maintainers.
While sunrpc is low-risk, I'm vaguely aware
that there are other prototype series on netdev that affect page_pool. The
conflict should be obvious in linux-next.

Changelog since v3
o Rebase on top of Matthew's series consolidating the alloc_pages API
o Rename alloced to allocated
o Split out preparation patch for prepare_alloc_pages
o Defensive check for bulk allocation of <= 0 pages
o Call single page allocation path only if no pages were allocated
o Minor cosmetic cleanups
o Reorder patch dependencies by subsystem. As this is a cross-subsystem
  series, the mm patches have to be merged before the sunrpc and net users.

Changelog since v2
o Prep new pages with IRQs enabled
o Minor documentation update

Changelog since v1
o Parenthesise binary and boolean comparisons
o Add reviewed-bys
o Rebase to 5.12-rc2

This series introduces a bulk order-0 page allocator with sunrpc and
the network page pool being the first users. The implementation is not
particularly efficient and the intention is to iron out what the semantics
of the API should be for users. Once the semantics are ironed out, it
can be made more efficient. Despite that, this is performance-related
for users that require multiple pages for an operation without multiple
round-trips to the page allocator. Quoting the last patch for the
high-speed networking use-case:

    For XDP-redirect workload with 100G mlx5 driver (that uses page_pool)
    redirecting xdp_frame packets into a veth, that does XDP_PASS to
    create an SKB from the xdp_frame, which then cannot return the page
    to the page_pool. In this case, we saw[1] an improvement of 18.8%
    from using the alloc_pages_bulk API (3,677,958 pps -> 4,368,926 pps).

Both users in this series are corner cases (NFS and high-speed networks)
so it is unlikely that most users will see any benefit in the short
term. Potential other users are batch allocations for page cache
readahead, fault around and SLUB allocations when high-order pages are
unavailable.
It's unknown how much benefit would be seen by converting multiple page
allocation calls to a single batch or what difference it may make to
headline performance. It's a chicken and egg problem given that the
potential benefit cannot be investigated without an implementation to
test against.

Light testing passed. I'm relying on Chuck and Jesper to test the target
users more aggressively, but both reported performance improvements with
the initial RFC.

Patch 1 moves GFP flag initialisation to prepare_alloc_pages

Patch 2 renames a variable name that is particularly unpopular

Patch 3 adds a bulk page allocator

Patch 4 is a sunrpc cleanup that is a pre-requisite.

Patch 5 is the sunrpc user. Chuck also has a patch which further caches
	pages but is not included in this series. It's not directly
	related to the bulk allocator and as it caches pages, it might
	have other concerns (e.g. does it need a shrinker?)

Patch 6 is a preparation patch only for the network user

Patch 7 converts the net page pool to the bulk allocator for order-0
	pages.

 include/linux/gfp.h   |  12 ++++
 mm/page_alloc.c       | 149 +++++++++++++++++++++++++++++++++++++-----
 net/core/page_pool.c  | 101 +++++++++++++++++-----------
 net/sunrpc/svc_xprt.c |  47 +++++++++----
 4 files changed, 240 insertions(+), 69 deletions(-)

-- 
2.26.2