From: Dan Williams <dan.j.williams@intel.com>
To: Jan Kara <jack@suse.cz>
Cc: "Matthew Wilcox" <mawilcox@microsoft.com>, linux-nvdimm <linux-nvdimm@lists.01.org>, "Linux MM" <linux-mm@kvack.org>, "Jérôme Glisse" <jglisse@redhat.com>, linux-fsdevel <linux-fsdevel@vger.kernel.org>, "Naoya Horiguchi" <n-horiguchi@ah.jp.nec.com>, "Christoph Hellwig" <hch@lst.de>
Subject: Re: [PATCH v4 11/12] mm, memory_failure: Teach memory_failure() about dev_pagemap pages
Date: Mon, 11 Jun 2018 09:45:40 -0700
Message-ID: <CAPcyv4iwEOKKO92AcV=0R_-cuH9FzRO98=NVNX-sa4Fe2A3K2Q@mail.gmail.com>
In-Reply-To: <20180611155013.tt4sykwh2dp2vq2e@quack2.suse.cz>

On Mon, Jun 11, 2018 at 8:50 AM, Jan Kara <jack@suse.cz> wrote:
> On Fri 08-06-18 16:51:19, Dan Williams wrote:
>>     mce: Uncorrected hardware memory error in user-access at af34214200
>>     {1}[Hardware Error]: It has been corrected by h/w and requires no further action
>>     mce: [Hardware Error]: Machine check events logged
>>     {1}[Hardware Error]: event severity: corrected
>>     Memory failure: 0xaf34214: reserved kernel page still referenced by 1 users
>>     [..]
>>     Memory failure: 0xaf34214: recovery action for reserved kernel page: Failed
>>     mce: Memory error not recovered
>>
>> In contrast to typical memory, dev_pagemap pages may be dax mapped. With
>> dax there is no possibility to map in another page dynamically since dax
>> establishes 1:1 physical address to file offset associations. Also
>> dev_pagemap pages associated with NVDIMM / persistent memory devices can
>> internally remap/repair addresses with poison. While memory_failure()
>> assumes that it can discard typical poisoned pages and keep them
>> unmapped indefinitely, dev_pagemap pages may be returned to service
>> after the error is cleared.
>>
>> Teach memory_failure() to detect and handle MEMORY_DEVICE_HOST
>> dev_pagemap pages that have poison consumed by userspace. Mark the
>> memory as UC instead of unmapping it completely to allow ongoing access
>> via the device driver (nd_pmem). Later, nd_pmem will grow support for
>> marking the page back to WB when the error is cleared.
>
> ...
>
>> +static unsigned long dax_mapping_size(struct page *page)
>> +{
>> +	struct address_space *mapping = page->mapping;
>> +	pgoff_t pgoff = page_to_pgoff(page);
>> +	struct vm_area_struct *vma;
>> +	unsigned long size = 0;
>> +
>> +	i_mmap_lock_read(mapping);
>> +	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
>> +		unsigned long address = vma_address(page, vma);
>> +		pgd_t *pgd;
>> +		p4d_t *p4d;
>> +		pud_t *pud;
>> +		pmd_t *pmd;
>> +		pte_t *pte;
>> +
>> +		pgd = pgd_offset(vma->vm_mm, address);
>> +		if (!pgd_present(*pgd))
>> +			continue;
>> +		p4d = p4d_offset(pgd, address);
>> +		if (!p4d_present(*p4d))
>> +			continue;
>> +		pud = pud_offset(p4d, address);
>> +		if (!pud_present(*pud))
>> +			continue;
>> +		if (pud_devmap(*pud)) {
>> +			size = PUD_SIZE;
>> +			break;
>> +		}
>> +		pmd = pmd_offset(pud, address);
>> +		if (!pmd_present(*pmd))
>> +			continue;
>> +		if (pmd_devmap(*pmd)) {
>> +			size = PMD_SIZE;
>> +			break;
>> +		}
>> +		pte = pte_offset_map(pmd, address);
>> +		if (!pte_present(*pte))
>> +			continue;
>> +		if (pte_devmap(*pte)) {
>> +			size = PAGE_SIZE;
>> +			break;
>> +		}
>> +	}
>> +	i_mmap_unlock_read(mapping);
>> +
>> +	return size;
>> +}
>
> Correct me if I'm wrong but cannot the same pfn be mapped by different VMAs
> with different granularity? I recall that if we have a fully allocated PMD
> entry in the radix tree we can hand out 4k entries from inside of it just
> fine...

Oh, I thought we broke up the 2M entry when that happened.

> So whether dax_mapping_size() returns 4k or 2MB would be random?
> Why don't we use the entry size in the radix tree when we have done all the
> work and looked it up there to lock it anyway?

Device-dax has no use case to populate the radix.
I think this means that we need to track the mapping size in the
memory_failure() path per vma that has the pfn mapped. I'd prefer that
over teaching device-dax to populate the radix, or teaching fs-dax to
break up huge pages when another vma wants 4K.
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm