Date: Sat, 13 Mar 2021 13:30:58 +0000
From: Mel Gorman <mgorman@techsingularity.net>
To: Alexander Duyck
Cc: Andrew Morton, Chuck Lever, Jesper Dangaard Brouer,
 Christoph Hellwig, Matthew Wilcox, LKML, Linux-Net, Linux-MM, Linux-NFS
Subject: Re: [PATCH 7/7] net: page_pool: use alloc_pages_bulk in refill code path
Message-ID: <20210313133058.GZ3697@techsingularity.net>
References: <20210312154331.32229-1-mgorman@techsingularity.net>
 <20210312154331.32229-8-mgorman@techsingularity.net>
On Fri, Mar 12, 2021 at 11:44:09AM -0800, Alexander Duyck wrote:
> > -	/* FUTURE development:
> > -	 *
> > -	 * Current slow-path essentially falls back to single page
> > -	 * allocations, which doesn't improve performance. This code
> > -	 * need bulk allocation support from the page allocator code.
> > -	 */
> > -
> > -	/* Cache was empty, do real allocation */
> > -#ifdef CONFIG_NUMA
> > -	page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);
> > -#else
> > -	page = alloc_pages(gfp, pool->p.order);
> > -#endif
> > -	if (!page)
> > +	if (unlikely(!__alloc_pages_bulk(gfp, pp_nid, NULL, bulk, &page_list)))
> > 		return NULL;
> >
> > +	/* First page is extracted and returned to caller */
> > +	first_page = list_first_entry(&page_list, struct page, lru);
> > +	list_del(&first_page->lru);
> > +
>
> This seems kind of broken to me. If you pull the first page and then
> cannot map it you end up returning NULL even if you placed a number of
> pages in the cache.
>

I think you're right, but I'm punting this to Jesper to fix. He's more
familiar with this particular code and can verify that performance is
still ok for high-speed networks.

-- 
Mel Gorman
SUSE Labs