From mboxrd@z Thu Jan 1 00:00:00 1970
From: zhong jiang <zhongjiang-ali@linux.alibaba.com>
Date: Tue, 16 Mar 2021 11:21:41 +0800
Subject: Re: [PATCH v3 05/11] mm, fsdax: Refactor memory-failure handler for dax mapping
To: Shiyang Ruan, linux-kernel@vger.kernel.org, linux-xfs@vger.kernel.org,
 linux-nvdimm@lists.01.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 dm-devel@redhat.com
Cc: darrick.wong@oracle.com, dan.j.williams@intel.com, david@fromorbit.com,
 hch@lst.de, agk@redhat.com, snitzer@redhat.com, rgoldwyn@suse.de,
 qi.fuli@fujitsu.com, y-goto@fujitsu.com
References: <20210208105530.3072869-1-ruansy.fnst@cn.fujitsu.com>
 <20210208105530.3072869-6-ruansy.fnst@cn.fujitsu.com>
In-Reply-To: <20210208105530.3072869-6-ruansy.fnst@cn.fujitsu.com>
On 2021/2/8 6:55 PM, Shiyang Ruan wrote:
> The current memory_failure_dev_pagemap() can only handle a single-mapped
> dax page in fsdax mode.  The dax page could be mapped by multiple files
> and offsets if we let the reflink feature and fsdax mode work together.
> So, we refactor the current implementation to support handling memory
> failure on each file and offset.
>
> Signed-off-by: Shiyang Ruan
> ---
>  fs/dax.c            | 21 ++++++++++
>  include/linux/dax.h |  1 +
>  include/linux/mm.h  |  9 +++++
>  mm/memory-failure.c | 98 ++++++++++++++++++++++++++++++++++-----------
>  4 files changed, 105 insertions(+), 24 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index 26d5dcd2d69e..c64c3a0e76a6 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -378,6 +378,27 @@ static struct page *dax_busy_page(void *entry)
>  	return NULL;
>  }
>  
> +/*
> + * dax_load_pfn - Load the pfn of the DAX entry corresponding to a page
> + * @mapping: The file whose entry we want to load
> + * @index:   The offset at which the DAX entry is located
> + *
> + * Return: pfn of the DAX entry
> + */
> +unsigned long dax_load_pfn(struct address_space *mapping, unsigned long index)
> +{
> +	XA_STATE(xas, &mapping->i_pages, index);
> +	void *entry;
> +	unsigned long pfn;
> +
> +	xas_lock_irq(&xas);
> +	entry = xas_load(&xas);
> +	pfn = dax_to_pfn(entry);
> +	xas_unlock_irq(&xas);
> +
> +	return pfn;
> +}
> +
>  /*
>   * dax_lock_mapping_entry - Lock the DAX entry corresponding to a page
>   * @page: The page whose entry we want to lock
> diff --git a/include/linux/dax.h b/include/linux/dax.h
> index b52f084aa643..89e56ceeffc7 100644
> --- a/include/linux/dax.h
> +++ b/include/linux/dax.h
> @@ -150,6 +150,7 @@ int dax_writeback_mapping_range(struct address_space *mapping,
>  
>  struct page *dax_layout_busy_page(struct address_space *mapping);
>  struct page *dax_layout_busy_page_range(struct address_space *mapping, loff_t start, loff_t end);
> +unsigned long dax_load_pfn(struct address_space *mapping, unsigned long index);
>  dax_entry_t dax_lock_page(struct page *page);
>  void dax_unlock_page(struct page *page, dax_entry_t cookie);
>  #else
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ecdf8a8cd6ae..ab52bc633d84 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1157,6 +1157,14 @@ static inline bool is_device_private_page(const struct page *page)
>  		page->pgmap->type == MEMORY_DEVICE_PRIVATE;
>  }
>  
> +static inline bool is_device_fsdax_page(const struct page *page)
> +{
> +	return IS_ENABLED(CONFIG_DEV_PAGEMAP_OPS) &&
> +		IS_ENABLED(CONFIG_FS_DAX) &&
> +		is_zone_device_page(page) &&
> +		page->pgmap->type == MEMORY_DEVICE_FS_DAX;
> +}
> +
>  static inline bool is_pci_p2pdma_page(const struct page *page)
>  {
>  	return IS_ENABLED(CONFIG_DEV_PAGEMAP_OPS) &&
> @@ -3045,6 +3053,7 @@ enum mf_flags {
>  	MF_MUST_KILL = 1 << 2,
>  	MF_SOFT_OFFLINE = 1 << 3,
>  };
> +extern int mf_dax_mapping_kill_procs(struct address_space *mapping, pgoff_t index, int flags);
>  extern int memory_failure(unsigned long pfn, int flags);
>  extern void memory_failure_queue(unsigned long pfn, int flags);
>  extern void memory_failure_queue_kick(int cpu);
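
As I read it, a caller holding a (mapping, index) pair would combine the
new helper with pfn_to_page() roughly like this.  This is only my own
sketch, not part of the patch; it assumes the Xarray entry is present
and ignores the locked-entry case:

	/* Sketch: resolve the poisoned page behind a DAX mapping with
	 * the dax_load_pfn() added above.  A zero pfn is treated as
	 * "no entry", matching the check in mf_dax_mapping_kill_procs(). */
	static struct page *dax_poisoned_page(struct address_space *mapping,
					      pgoff_t index)
	{
		unsigned long pfn = dax_load_pfn(mapping, index);

		return pfn ? pfn_to_page(pfn) : NULL;
	}
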
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index e9481632fcd1..158fe0c8e602 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -56,6 +56,7 @@
>  #include <linux/kfifo.h>
>  #include <linux/ratelimit.h>
>  #include <linux/page-isolation.h>
> +#include <linux/dax.h>
>  #include "internal.h"
>  #include "ras/ras_event.h"
>  
> @@ -120,6 +121,13 @@ static int hwpoison_filter_dev(struct page *p)
>  	if (PageSlab(p))
>  		return -EINVAL;
>  
> +	if (pfn_valid(page_to_pfn(p))) {
> +		if (is_device_fsdax_page(p))
> +			return 0;
> +		else
> +			return -EINVAL;
> +	}
> +
>  	mapping = page_mapping(p);
>  	if (mapping == NULL || mapping->host == NULL)
>  		return -EINVAL;
> @@ -286,10 +294,9 @@ void shake_page(struct page *p, int access)
>  }
>  EXPORT_SYMBOL_GPL(shake_page);
>  
> -static unsigned long dev_pagemap_mapping_shift(struct page *page,
> -		struct vm_area_struct *vma)
> +static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
> +		unsigned long address)
>  {
> -	unsigned long address = vma_address(page, vma);
>  	pgd_t *pgd;
>  	p4d_t *p4d;
>  	pud_t *pud;
> @@ -329,9 +336,8 @@ static unsigned long dev_pagemap_mapping_shift(struct page *page,
>   * Schedule a process for later kill.
>   * Uses GFP_ATOMIC allocations to avoid potential recursions in the VM.
>   */
> -static void add_to_kill(struct task_struct *tsk, struct page *p,
> -		struct vm_area_struct *vma,
> -		struct list_head *to_kill)
> +static void add_to_kill(struct task_struct *tsk, struct page *p, pgoff_t pgoff,
> +		struct vm_area_struct *vma, struct list_head *to_kill)
>  {
>  	struct to_kill *tk;
>  
> @@ -342,9 +348,12 @@ static void add_to_kill(struct task_struct *tsk, struct page *p,
>  	}
>  
>  	tk->addr = page_address_in_vma(p, vma);
> -	if (is_zone_device_page(p))
> -		tk->size_shift = dev_pagemap_mapping_shift(p, vma);
> -	else
> +	if (is_zone_device_page(p)) {
> +		if (is_device_fsdax_page(p))
> +			tk->addr = vma->vm_start +
> +				((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
> +		tk->size_shift = dev_pagemap_mapping_shift(vma, tk->addr);
> +	} else
>  		tk->size_shift = page_shift(compound_head(p));
>  
>  	/*
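
The new fsdax branch in add_to_kill() recomputes the user address from
the file offset instead of from the page, which makes sense once a page
can belong to several files.  Plugging toy numbers (mine, not from the
patch) into the expression shows what it yields:

	/* Toy values: a VMA at 0x7f0000000000 maps the file starting at
	 * pgoff 0x100; the poisoned page sits at file pgoff 0x110;
	 * 4K pages, i.e. PAGE_SHIFT == 12. */
	unsigned long vm_start = 0x7f0000000000UL;
	unsigned long vm_pgoff = 0x100;
	unsigned long pgoff    = 0x110;
	unsigned long addr     = vm_start + ((pgoff - vm_pgoff) << 12);
	/* addr == 0x7f0000010000: 0x10 pages past the start of the VMA */
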
> @@ -492,7 +501,7 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
>  		if (!page_mapped_in_vma(page, vma))
>  			continue;
>  		if (vma->vm_mm == t->mm)
> -			add_to_kill(t, page, vma, to_kill);
> +			add_to_kill(t, page, 0, vma, to_kill);
>  		}
>  	}
>  	read_unlock(&tasklist_lock);
> @@ -502,24 +511,19 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
>  /*
>   * Collect processes when the error hit a file mapped page.
>   */
> -static void collect_procs_file(struct page *page, struct list_head *to_kill,
> -		int force_early)
> +static void collect_procs_file(struct page *page, struct address_space *mapping,
> +		pgoff_t pgoff, struct list_head *to_kill, int force_early)
>  {
>  	struct vm_area_struct *vma;
>  	struct task_struct *tsk;
> -	struct address_space *mapping = page->mapping;
> -	pgoff_t pgoff;
>  
>  	i_mmap_lock_read(mapping);
>  	read_lock(&tasklist_lock);
> -	pgoff = page_to_pgoff(page);
>  	for_each_process(tsk) {
>  		struct task_struct *t = task_early_kill(tsk, force_early);
> -
>  		if (!t)
>  			continue;
> -		vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff,
> -				pgoff) {
> +		vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
>  			/*
>  			 * Send early kill signal to tasks where a vma covers
>  			 * the page but the corrupted page is not necessarily
> @@ -528,7 +532,7 @@ static void collect_procs_file(struct page *page, struct list_head *to_kill,
>  			 * to be informed of all such data corruptions.
>  			 */
>  			if (vma->vm_mm == t->mm)
> -				add_to_kill(t, page, vma, to_kill);
> +				add_to_kill(t, page, pgoff, vma, to_kill);
>  		}
>  	}
>  	read_unlock(&tasklist_lock);
> @@ -547,7 +551,8 @@ static void collect_procs(struct page *page, struct list_head *tokill,
>  	if (PageAnon(page))
>  		collect_procs_anon(page, tokill, force_early);
>  	else
> -		collect_procs_file(page, tokill, force_early);
> +		collect_procs_file(page, page_mapping(page), page_to_pgoff(page),
> +				tokill, force_early);
>  }
>  
>  static const char *action_name[] = {
> @@ -1214,6 +1219,50 @@ static int try_to_split_thp_page(struct page *page, const char *msg)
>  	return 0;
>  }
>  
> +int mf_dax_mapping_kill_procs(struct address_space *mapping, pgoff_t index, int flags)
> +{
> +	const bool unmap_success = true;
> +	unsigned long pfn, size = 0;
> +	struct to_kill *tk;
> +	LIST_HEAD(to_kill);
> +	int rc = -EBUSY;
> +	loff_t start;
> +
> +	/* load the pfn of the dax mapping file */
> +	pfn = dax_load_pfn(mapping, index);
> +	if (!pfn)
> +		return rc;
> +	/*
> +	 * Unlike System-RAM there is no possibility to swap in a
> +	 * different physical page at a given virtual address, so all
> +	 * userspace consumption of ZONE_DEVICE memory necessitates
> +	 * SIGBUS (i.e. MF_MUST_KILL)
> +	 */
> +	flags |= MF_ACTION_REQUIRED | MF_MUST_KILL;

MF_ACTION_REQUIRED only kills the current execution context.  With
reflink, the page can be shared by several processes that map the same
file extent, and we cannot kill every process sharing the page.  Can
the other processes still access the poisoned page?

Thanks,
zhong jiang

> +	collect_procs_file(pfn_to_page(pfn), mapping, index, &to_kill,
> +			flags & MF_ACTION_REQUIRED);
> +
> +	list_for_each_entry(tk, &to_kill, nd)
> +		if (tk->size_shift)
> +			size = max(size, 1UL << tk->size_shift);
> +	if (size) {
> +		/*
> +		 * Unmap the largest mapping to avoid breaking up
> +		 * device-dax mappings which are constant size. The
> +		 * actual size of the mapping being torn down is
> +		 * communicated in siginfo, see kill_proc()
> +		 */
> +		start = (index << PAGE_SHIFT) & ~(size - 1);
> +		unmap_mapping_range(mapping, start, start + size, 0);
> +	}
> +
> +	kill_procs(&to_kill, flags & MF_MUST_KILL, !unmap_success,
> +			pfn, flags);
> +	rc = 0;
> +	return rc;
> +}
> +EXPORT_SYMBOL_GPL(mf_dax_mapping_kill_procs);
> +
>  static int memory_failure_hugetlb(unsigned long pfn, int flags)
>  {
>  	struct page *p = pfn_to_page(pfn);
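
As an aside on the kill semantics I am asking about: a process that
wants to hear about corruption it has not touched yet can already opt
in to early SIGBUS (BUS_MCEERR_AO rather than BUS_MCEERR_AR) with the
existing PR_MCE_KILL prctl, which is the per-task knob task_early_kill()
falls back to when force_early is not set.  A small user-space sketch of
that opt-in (mine; nothing here is added by this patch):

	#include <stdio.h>
	#include <sys/prctl.h>

	int main(void)
	{
		/* Ask the kernel to SIGBUS this task as soon as any page it
		 * maps is poisoned, instead of waiting until it is accessed. */
		if (prctl(PR_MCE_KILL, PR_MCE_KILL_SET, PR_MCE_KILL_EARLY, 0, 0))
			perror("prctl(PR_MCE_KILL)");
		/* ... mmap the dax file and use it ... */
		return 0;
	}
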
> @@ -1297,7 +1346,7 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
>  	const bool unmap_success = true;
>  	unsigned long size = 0;
>  	struct to_kill *tk;
> -	LIST_HEAD(tokill);
> +	LIST_HEAD(to_kill);
>  	int rc = -EBUSY;
>  	loff_t start;
>  	dax_entry_t cookie;
> @@ -1345,9 +1394,10 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
>  	 * SIGBUS (i.e. MF_MUST_KILL)
>  	 */
>  	flags |= MF_ACTION_REQUIRED | MF_MUST_KILL;
> -	collect_procs(page, &tokill, flags & MF_ACTION_REQUIRED);
> +	collect_procs_file(page, page->mapping, page->index, &to_kill,
> +			flags & MF_ACTION_REQUIRED);
>  
> -	list_for_each_entry(tk, &tokill, nd)
> +	list_for_each_entry(tk, &to_kill, nd)
>  		if (tk->size_shift)
>  			size = max(size, 1UL << tk->size_shift);
>  	if (size) {
> @@ -1360,7 +1410,7 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
>  		start = (page->index << PAGE_SHIFT) & ~(size - 1);
>  		unmap_mapping_range(page->mapping, start, start + size, 0);
>  	}
> -	kill_procs(&tokill, flags & MF_MUST_KILL, !unmap_success, pfn, flags);
> +	kill_procs(&to_kill, flags & MF_MUST_KILL, !unmap_success, pfn, flags);
>  	rc = 0;
>  unlock:
>  	dax_unlock_page(page, cookie);
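
For what it is worth, this is how I picture a filesystem consuming the
new export once it has reverse-mapped a failed device range back to
file mappings.  The loop and the naming are hypothetical; only the
mf_dax_mapping_kill_procs() signature comes from this patch:

	/* Hypothetical caller: after rmap resolves a poisoned device range
	 * to (mapping, pgoff) pairs, kill the processes mapping each page. */
	static int fs_kill_mapped_range(struct address_space *mapping,
					pgoff_t first, pgoff_t last, int flags)
	{
		pgoff_t index;
		int rc;

		for (index = first; index <= last; index++) {
			rc = mf_dax_mapping_kill_procs(mapping, index, flags);
			if (rc)
				return rc;
		}
		return 0;
	}
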