From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-kernel@vger.kernel.org
Subject: [PATCH 57/75] mm/rmap: Turn page_lock_anon_vma_read() into folio_lock_anon_vma_read()
Date: Fri, 4 Feb 2022 19:58:34 +0000
Message-Id: <20220204195852.1751729-58-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220204195852.1751729-1-willy@infradead.org>
References: <20220204195852.1751729-1-willy@infradead.org>

Add back page_lock_anon_vma_read() as a wrapper.  This saves a few
calls to compound_head().  If any callers were passing a tail page
before, this would have failed to lock the anon VMA, as page->mapping
is not valid for tail pages.
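As an illustration only (this sketch is not part of the patch), a
caller that already has a folio in hand can now take the anon_vma
lock directly instead of going through the page wrapper; the
hypothetical example_walk() below uses only the two functions
declared in rmap.h:

	/* Hypothetical caller, for illustration; not in this patch. */
	static void example_walk(struct folio *folio)
	{
		struct anon_vma *av = folio_lock_anon_vma_read(folio);

		if (!av)	/* no longer mapped anonymously */
			return;
		/* ... walk the anon_vma, as collect_procs_anon() does ... */
		page_unlock_anon_vma_read(av);
	}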
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/rmap.h | 1 +
 mm/folio-compat.c    | 5 +++++
 mm/memory-failure.c  | 3 ++-
 mm/rmap.c            | 12 ++++++------
 4 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 85d17a38642c..71798112a575 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -269,6 +269,7 @@ void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
  * Called by memory-failure.c to kill processes.
  */
 struct anon_vma *page_lock_anon_vma_read(struct page *page);
+struct anon_vma *folio_lock_anon_vma_read(struct folio *folio);
 void page_unlock_anon_vma_read(struct anon_vma *anon_vma);
 int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
 
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 3804fd8c1f20..e04fba5e45e5 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -185,3 +185,8 @@ void page_mlock(struct page *page)
 {
 	folio_mlock(page_folio(page));
 }
+
+struct anon_vma *page_lock_anon_vma_read(struct page *page)
+{
+	return folio_lock_anon_vma_read(page_folio(page));
+}
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 1c7a71b5248e..ed1a47d9c35d 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -487,12 +487,13 @@ static struct task_struct *task_early_kill(struct task_struct *tsk,
 static void collect_procs_anon(struct page *page, struct list_head *to_kill,
 				int force_early)
 {
+	struct folio *folio = page_folio(page);
 	struct vm_area_struct *vma;
 	struct task_struct *tsk;
 	struct anon_vma *av;
 	pgoff_t pgoff;
 
-	av = page_lock_anon_vma_read(page);
+	av = folio_lock_anon_vma_read(folio);
 	if (av == NULL)	/* Not actually mapped anymore */
 		return;
 
diff --git a/mm/rmap.c b/mm/rmap.c
index ffc1b2f0cf24..ba65d5d3eb5a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -526,28 +526,28 @@ struct anon_vma *page_get_anon_vma(struct page *page)
  * atomic op -- the trylock. If we fail the trylock, we fall back to getting a
  * reference like with page_get_anon_vma() and then block on the mutex.
  */
-struct anon_vma *page_lock_anon_vma_read(struct page *page)
+struct anon_vma *folio_lock_anon_vma_read(struct folio *folio)
 {
 	struct anon_vma *anon_vma = NULL;
 	struct anon_vma *root_anon_vma;
 	unsigned long anon_mapping;
 
 	rcu_read_lock();
-	anon_mapping = (unsigned long)READ_ONCE(page->mapping);
+	anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
 	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
 		goto out;
-	if (!page_mapped(page))
+	if (!folio_mapped(folio))
 		goto out;
 
 	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
 	root_anon_vma = READ_ONCE(anon_vma->root);
 	if (down_read_trylock(&root_anon_vma->rwsem)) {
 		/*
-		 * If the page is still mapped, then this anon_vma is still
+		 * If the folio is still mapped, then this anon_vma is still
 		 * its anon_vma, and holding the mutex ensures that it will
 		 * not go away, see anon_vma_free().
 		 */
-		if (!page_mapped(page)) {
+		if (!folio_mapped(folio)) {
 			up_read(&root_anon_vma->rwsem);
 			anon_vma = NULL;
 		}
@@ -560,7 +560,7 @@ struct anon_vma *page_lock_anon_vma_read(struct page *page)
 		goto out;
 	}
 
-	if (!page_mapped(page)) {
+	if (!folio_mapped(folio)) {
 		rcu_read_unlock();
 		put_anon_vma(anon_vma);
 		return NULL;
-- 
2.34.1