Date: Thu, 11 Mar 2021 14:45:57 +0000
From: Matthew Wilcox
To: Hugh Dickins
Cc: Naoya Horiguchi, Andrew Morton, Michal Hocko, Oscar Salvador,
	Tony Luck, "Aneesh Kumar K.V", Naoya Horiguchi, Jue Wang,
	Greg Thelen, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v1] mm, hwpoison: enable error handling on shmem thp
Message-ID: <20210311144557.GV3479805@casper.infradead.org>
References: <20210209062128.453814-1-nao.horiguchi@gmail.com>

On Wed, Mar 10, 2021 at 11:22:18PM -0800, Hugh Dickins wrote:
> But something we found on the way,
> we do have a patch for: add_to_kill() uses
> page_address_in_vma(), but
> that has not been used on file THP tails before - fix appended at the
> end below, so as not to waste your time on that bit.
>
> [PATCH] mm: fix page_address_in_vma() on file THP tails
> From: Jue Wang
>
> Anon THP tails were already supported, but memory-failure now needs to use
> page_address_in_vma() on file THP tails, which its page->mapping check did
> not permit: fix it.
>
> Signed-off-by: Jue Wang
> Signed-off-by: Hugh Dickins
> ---
>
>  mm/rmap.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> --- 5.12-rc2/mm/rmap.c	2021-02-28 16:58:57.950450151 -0800
> +++ linux/mm/rmap.c	2021-03-10 20:29:21.591475177 -0800
> @@ -717,11 +717,11 @@ unsigned long page_address_in_vma(struct
>  		if (!vma->anon_vma || !page__anon_vma ||
>  		    vma->anon_vma->root != page__anon_vma->root)
>  			return -EFAULT;
> -	} else if (page->mapping) {
> -		if (!vma->vm_file || vma->vm_file->f_mapping != page->mapping)
> -			return -EFAULT;
> -	} else
> +	} else if (!vma->vm_file) {
> +		return -EFAULT;
> +	} else if (vma->vm_file->f_mapping != compound_head(page)->mapping) {
>  		return -EFAULT;
> +	}

This is a common bug I'm seeing; people just assume they can dereference
page->mapping.  Below is how I would convert this patch into folios, but
it doesn't remove the problem that people just use page->mapping.

Do we want to set ourselves the goal of making this bug impossible?
That is, eventually, do something like this ...

struct folio {
	union {
		struct page page;
		struct {
			unsigned long flags;
			struct list_head lru;
			struct address_space *mapping;
			pgoff_t index;
			unsigned long private;
		};
	};
};

... and remove the names 'mapping' & 'index' from struct page.  If so,
now would be a good time to prepare for that, so we write folio->mapping
instead of folio->page.mapping.
diff --git a/mm/rmap.c b/mm/rmap.c
index 3142ea1dd071..fcad8c6a417a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -707,9 +707,11 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
  */
 unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 {
+	struct folio *folio = page_folio(page);
 	unsigned long address;
-	if (PageAnon(page)) {
-		struct anon_vma *page__anon_vma = page_anon_vma(page);
+
+	if (FolioAnon(folio)) {
+		struct anon_vma *page__anon_vma = folio_anon_vma(folio);
 		/*
 		 * Note: swapoff's unuse_vma() is more efficient with this
 		 * check, and needs it to match anon_vma when KSM is active.
@@ -717,9 +719,10 @@ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 		if (!vma->anon_vma || !page__anon_vma ||
 		    vma->anon_vma->root != page__anon_vma->root)
 			return -EFAULT;
-	} else if (page->mapping) {
-		if (!vma->vm_file || vma->vm_file->f_mapping != page->mapping)
-			return -EFAULT;
+	} else if (!vma->vm_file) {
+		return -EFAULT;
+	} else if (vma->vm_file->f_mapping != folio->page.mapping) {
+		return -EFAULT;
 	} else
 		return -EFAULT;
 	address = __vma_address(page, vma);
diff --git a/mm/util.c b/mm/util.c
index 9ab72cfa4aa1..91ac4fc90a07 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -635,11 +635,11 @@ void kvfree_sensitive(const void *addr, size_t len)
 }
 EXPORT_SYMBOL(kvfree_sensitive);
 
-static inline void *__page_rmapping(struct page *page)
+static inline void *folio_rmapping(struct folio *folio)
 {
 	unsigned long mapping;
 
-	mapping = (unsigned long)page->mapping;
+	mapping = (unsigned long)folio->page.mapping;
 	mapping &= ~PAGE_MAPPING_FLAGS;
 
 	return (void *)mapping;
@@ -648,8 +648,7 @@ static inline void *__page_rmapping(struct page *page)
 /* Neutral page->mapping pointer to address_space or anon_vma or other */
 void *page_rmapping(struct page *page)
 {
-	page = compound_head(page);
-	return __page_rmapping(page);
+	return folio_rmapping(page_folio(page));
 }
 
 /*
@@ -675,15 +674,12 @@ bool page_mapped(struct page *page)
 }
 EXPORT_SYMBOL(page_mapped);
 
-struct anon_vma *page_anon_vma(struct page *page)
+struct anon_vma *folio_anon_vma(struct folio *folio)
 {
-	unsigned long mapping;
-
-	page = compound_head(page);
-	mapping = (unsigned long)page->mapping;
+	unsigned long mapping = (unsigned long)folio->page.mapping;
 	if ((mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
 		return NULL;
-	return __page_rmapping(page);
+	return folio_rmapping(folio);
 }
 
 struct address_space *folio_mapping(struct folio *folio)