Date: Mon, 16 Aug 2021 03:35:18 +0100
From: Matthew Wilcox
To: David Howells
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: Re: [PATCH v14 084/138] mm/page_alloc: Add folio allocation functions
References: <20210715033704.692967-85-willy@infradead.org>
 <20210715033704.692967-1-willy@infradead.org>
 <1814546.1628632283@warthog.procyon.org.uk>
In-Reply-To: <1814546.1628632283@warthog.procyon.org.uk>
On Tue, Aug 10, 2021 at 10:51:23PM +0100, David Howells wrote:
> Matthew Wilcox (Oracle) wrote:
>
> > +struct folio *folio_alloc(gfp_t gfp, unsigned order)
> > +{
> > +	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
> > +
> > +	if (page && order > 1)
> > +		prep_transhuge_page(page);
>
> Ummm...  Shouldn't order==1 pages (two-page folios) be prep'd also?

No.  The deferred list is stored in the second tail page, so there's
nowhere to store one if there are only two pages.  The
free_transhuge_page() dtor only handles the deferred list, so it's
fine to skip setting the DTOR in the page too.

> Would it be better to just jump to alloc_pages() if order <= 1?  E.g.:
>
> 	struct folio *folio_alloc(gfp_t gfp, unsigned order)
> 	{
> 		struct page *page;
>
> 		if (order <= 1)
> 			return (struct folio *)alloc_pages(gfp | __GFP_COMP, order);
>
> 		page = alloc_pages(gfp | __GFP_COMP, order);
> 		if (page)
> 			prep_transhuge_page(page);
> 		return (struct folio *)page;
> 	}

That doesn't look simpler to me?
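
[Editorial note: for context, here is a minimal sketch of the helpers
being discussed, paraphrased from include/linux/huge_mm.h and
mm/huge_memory.c of roughly that era (circa v5.14) -- not verbatim, and
field layout and names may differ in later kernels.  It illustrates why
prep_transhuge_page() needs at least an order-2 compound page:

	/*
	 * Sketch, not verbatim kernel source.
	 *
	 * The deferred-split list_head is kept in the *second* tail
	 * page, i.e. page[2]; the first tail page (page[1]) is already
	 * used for compound metadata (compound_dtor, compound_order,
	 * compound_mapcount, ...).  An order-1 allocation only has
	 * page[0] and page[1], so there is no page[2] in which to
	 * store the list.
	 */
	static inline struct list_head *page_deferred_list(struct page *page)
	{
		return &page[2].deferred_list;
	}

	void prep_transhuge_page(struct page *page)
	{
		INIT_LIST_HEAD(page_deferred_list(page));	/* writes into page[2] */
		set_compound_page_dtor(page, TRANSHUGE_PAGE_DTOR);
	}

Hence folio_alloc() only calls prep_transhuge_page() for order > 1;
order-0 and order-1 folios are freed through the ordinary compound-page
destructor and never sit on a deferred split queue.]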