Date: Tue, 1 Nov 2022 18:24:23 +0000
From: Matthew Wilcox
To: "Vishal Moola (Oracle)"
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
        linux-fsdevel@vger.kernel.org, akpm@linux-foundation.org,
        miklos@szeredi.hu
Subject: Re: [PATCH 2/5] fuse: Convert fuse_try_move_page() to use folios
References: <20221101175326.13265-1-vishal.moola@gmail.com>
        <20221101175326.13265-3-vishal.moola@gmail.com>
In-Reply-To: <20221101175326.13265-3-vishal.moola@gmail.com>

On Tue, Nov 01, 2022 at 10:53:23AM -0700, Vishal Moola (Oracle) wrote:
> Converts the function to try to move folios instead of pages. Also
> converts fuse_check_page() to fuse_get_folio() since this is its only
> caller. This change removes 15 calls to compound_head().

This all looks good.  I wonder if we shouldn't add an assertion that the
page we're trying to steal is !large?  It seems to me that there are
assumptions in this part of fuse that it's only dealing with order-0
pages, and if someone gives it a page that's part of a large folio, it's
going to be messy.  Miklos, any thoughts?
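Something along these lines, perhaps -- an untested sketch only; the
placement (next to the existing fuse_check_folio() call rather than
inside fuse_check_folio() itself) is just a first guess, and the
folio_test_large() check is the only new line, everything else reuses
the names from this patch:

        newfolio = page_folio(buf->page);

        /*
         * Untested sketch: this path assumes order-0 pages throughout,
         * so refuse to steal anything that is part of a large folio
         * and fall back to copying instead.
         */
        if (WARN_ON(folio_test_large(newfolio)))
                goto out_fallback_unlock;

Presumably the same argument applies to oldfolio, which comes from the
ap->pages[] array and is equally assumed to be order-0.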
> Signed-off-by: Vishal Moola (Oracle)
> ---
>  fs/fuse/dev.c | 55 ++++++++++++++++++++++++++-------------------------
>  1 file changed, 28 insertions(+), 27 deletions(-)
> 
> diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
> index 26817a2db463..204c332cd343 100644
> --- a/fs/fuse/dev.c
> +++ b/fs/fuse/dev.c
> @@ -764,11 +764,11 @@ static int fuse_copy_do(struct fuse_copy_state *cs, void **val, unsigned *size)
>  	return ncpy;
>  }
>  
> -static int fuse_check_page(struct page *page)
> +static int fuse_check_folio(struct folio *folio)
>  {
> -	if (page_mapcount(page) ||
> -	    page->mapping != NULL ||
> -	    (page->flags & PAGE_FLAGS_CHECK_AT_PREP &
> +	if (folio_mapped(folio) ||
> +	    folio->mapping != NULL ||
> +	    (folio->flags & PAGE_FLAGS_CHECK_AT_PREP &
>  	     ~(1 << PG_locked |
>  	       1 << PG_referenced |
>  	       1 << PG_uptodate |
> @@ -778,7 +778,7 @@ static int fuse_check_page(struct page *page)
>  	       1 << PG_reclaim |
>  	       1 << PG_waiters |
>  	       LRU_GEN_MASK | LRU_REFS_MASK))) {
> -		dump_page(page, "fuse: trying to steal weird page");
> +		dump_page(&folio->page, "fuse: trying to steal weird page");
>  		return 1;
>  	}
>  	return 0;
> @@ -787,11 +787,11 @@ static int fuse_check_page(struct page *page)
>  static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
>  {
>  	int err;
> -	struct page *oldpage = *pagep;
> -	struct page *newpage;
> +	struct folio *oldfolio = page_folio(*pagep);
> +	struct folio *newfolio;
>  	struct pipe_buffer *buf = cs->pipebufs;
>  
> -	get_page(oldpage);
> +	folio_get(oldfolio);
>  	err = unlock_request(cs->req);
>  	if (err)
>  		goto out_put_old;
> @@ -814,35 +814,36 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
>  	if (!pipe_buf_try_steal(cs->pipe, buf))
>  		goto out_fallback;
>  
> -	newpage = buf->page;
> +	newfolio = page_folio(buf->page);
>  
> -	if (!PageUptodate(newpage))
> -		SetPageUptodate(newpage);
> +	if (!folio_test_uptodate(newfolio))
> +		folio_mark_uptodate(newfolio);
>  
> -	ClearPageMappedToDisk(newpage);
> +	folio_clear_mappedtodisk(newfolio);
>  
> -	if (fuse_check_page(newpage) != 0)
> +	if (fuse_check_folio(newfolio) != 0)
>  		goto out_fallback_unlock;
>  
>  	/*
>  	 * This is a new and locked page, it shouldn't be mapped or
>  	 * have any special flags on it
>  	 */
> -	if (WARN_ON(page_mapped(oldpage)))
> +	if (WARN_ON(folio_mapped(oldfolio)))
>  		goto out_fallback_unlock;
> -	if (WARN_ON(page_has_private(oldpage)))
> +	if (WARN_ON(folio_has_private(oldfolio)))
>  		goto out_fallback_unlock;
> -	if (WARN_ON(PageDirty(oldpage) || PageWriteback(oldpage)))
> +	if (WARN_ON(folio_test_dirty(oldfolio) ||
> +		    folio_test_writeback(oldfolio)))
>  		goto out_fallback_unlock;
> -	if (WARN_ON(PageMlocked(oldpage)))
> +	if (WARN_ON(folio_test_mlocked(oldfolio)))
>  		goto out_fallback_unlock;
>  
> -	replace_page_cache_folio(page_folio(oldpage), page_folio(newpage));
> +	replace_page_cache_folio(oldfolio, newfolio);
>  
> -	get_page(newpage);
> +	folio_get(newfolio);
>  
>  	if (!(buf->flags & PIPE_BUF_FLAG_LRU))
> -		lru_cache_add(newpage);
> +		folio_add_lru(newfolio);
>  
>  	/*
>  	 * Release while we have extra ref on stolen page.  Otherwise
> @@ -855,28 +856,28 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
>  	if (test_bit(FR_ABORTED, &cs->req->flags))
>  		err = -ENOENT;
>  	else
> -		*pagep = newpage;
> +		*pagep = &newfolio->page;
>  	spin_unlock(&cs->req->waitq.lock);
>  
>  	if (err) {
> -		unlock_page(newpage);
> -		put_page(newpage);
> +		folio_unlock(newfolio);
> +		folio_put(newfolio);
>  		goto out_put_old;
>  	}
>  
> -	unlock_page(oldpage);
> +	folio_unlock(oldfolio);
>  	/* Drop ref for ap->pages[] array */
> -	put_page(oldpage);
> +	folio_put(oldfolio);
>  	cs->len = 0;
>  
>  	err = 0;
>  out_put_old:
>  	/* Drop ref obtained in this function */
> -	put_page(oldpage);
> +	folio_put(oldfolio);
>  	return err;
>  
>  out_fallback_unlock:
> -	unlock_page(newpage);
> +	folio_unlock(newfolio);
>  out_fallback:
>  	cs->pg = buf->page;
>  	cs->offset = buf->offset;
> -- 
> 2.38.1
> 
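As an aside, for anyone wondering where the 15 compound_head() calls in
the changelog go: the struct page flag tests resolve a possible tail
page to its head on every call, while the folio variants already hold
the head page.  Roughly, in kernel context -- illustrative only, not the
literal page-flags.h expansion, and the two helper names are made up:

        /* page API: each test hides a compound_head() lookup */
        static inline bool example_old(struct page *page)
        {
                /* ~ test_bit(PG_dirty, &compound_head(page)->flags) */
                return PageDirty(page);
        }

        /* folio API: a folio is never a tail page, no hidden lookup */
        static inline bool example_new(struct folio *folio)
        {
                return folio_test_dirty(folio);
        }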