From: Christoph Hellwig <hch@infradead.org>
To: Jane Chu <jane.chu@oracle.com>
Cc: david@fromorbit.com, djwong@kernel.org, dan.j.williams@intel.com,
hch@infradead.org, vishal.l.verma@intel.com,
dave.jiang@intel.com, agk@redhat.com, snitzer@redhat.com,
dm-devel@redhat.com, ira.weiny@intel.com, willy@infradead.org,
vgoyal@redhat.com, linux-fsdevel@vger.kernel.org,
nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org,
linux-xfs@vger.kernel.org, x86@kernel.org
Subject: Re: [PATCH v7 4/6] dax: add DAX_RECOVERY flag and .recovery_write dev_pgmap_ops
Date: Tue, 5 Apr 2022 22:19:58 -0700 [thread overview]
Message-ID: <Yk0i/pODntZ7lbDo@infradead.org> (raw)
In-Reply-To: <20220405194747.2386619-5-jane.chu@oracle.com>
[-- Attachment #1: Type: text/plain, Size: 2281 bytes --]
On Tue, Apr 05, 2022 at 01:47:45PM -0600, Jane Chu wrote:
> Introduce DAX_RECOVERY flag to dax_direct_access(). The flag is
> not set by default in dax_direct_access() such that the helper
> does not translate a pmem range to kernel virtual address if the
> range contains uncorrectable errors. When the flag is set,
> the helper ignores the UEs and returns the kernel virtual address so
> that the caller may get on with data recovery via write.
>
> Also introduce a new dev_pagemap_ops .recovery_write function.
> The function is applicable to FSDAX device only. The device
> page backend driver provides .recovery_write function if the
> device has underlying mechanism to clear the uncorrectable
> errors on the fly.
I know Dan suggested it, but I still think dev_pagemap_ops is the very
wrong choice here. It is about VM callbacks to ZONE_DEVICE owners
independent of what pagemap type they are. .recovery_write on the
other hand is completely specific to the DAX write path and has no
MM interactions at all.
> /* see "strong" declaration in tools/testing/nvdimm/pmem-dax.c */
> __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
> - long nr_pages, void **kaddr, pfn_t *pfn)
> + long nr_pages, int flags, void **kaddr, pfn_t *pfn)
> {
> resource_size_t offset = PFN_PHYS(pgoff) + pmem->data_offset;
> + sector_t sector = PFN_PHYS(pgoff) >> SECTOR_SHIFT;
> + unsigned int num = PFN_PHYS(nr_pages) >> SECTOR_SHIFT;
> + struct badblocks *bb = &pmem->bb;
> + sector_t first_bad;
> + int num_bad;
> + bool bad_in_range;
> + long actual_nr;
> +
> + if (!bb->count)
> + bad_in_range = false;
> + else
> + bad_in_range = !!badblocks_check(bb, sector, num, &first_bad, &num_bad);
>
> - if (unlikely(is_bad_pmem(&pmem->bb, PFN_PHYS(pgoff) / 512,
> - PFN_PHYS(nr_pages))))
> + if (bad_in_range && !(flags & DAX_RECOVERY))
> return -EIO;
The use of bad_in_range here seems a little convoluted. See the attached
patch for how I would structure the function to avoid the variable and
keep the recovery code in a self-contained chunk.
> - map_len = dax_direct_access(dax_dev, pgoff, PHYS_PFN(size),
> - &kaddr, NULL);
> + nrpg = PHYS_PFN(size);
> + map_len = dax_direct_access(dax_dev, pgoff, nrpg, 0, &kaddr, NULL);
Overly long line here.
[-- Attachment #2: diff --]
[-- Type: text/plain, Size: 2351 bytes --]
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index b868a88a0d589..377e4d59aa90f 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -272,42 +272,40 @@ __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
struct badblocks *bb = &pmem->bb;
sector_t first_bad;
int num_bad;
- bool bad_in_range;
- long actual_nr;
-
- if (!bb->count)
- bad_in_range = false;
- else
- bad_in_range = !!badblocks_check(bb, sector, num, &first_bad, &num_bad);
-
- if (bad_in_range && !(flags & DAX_RECOVERY))
- return -EIO;
if (kaddr)
*kaddr = pmem->virt_addr + offset;
if (pfn)
*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);
- if (!bad_in_range) {
+ if (bb->count &&
+ badblocks_check(bb, sector, num, &first_bad, &num_bad)) {
+ long actual_nr;
+
+ if (!(flags & DAX_RECOVERY))
+ return -EIO;
/*
- * If badblock is present but not in the range, limit known good range
- * to the requested range.
+ * Set the recovery stride to the kernel page size because the
+ * underlying driver and firmware clear poison functions don't
+ * appear to handle large chunk (such as
+ * 2MiB) reliably.
*/
- if (bb->count)
- return nr_pages;
- return PHYS_PFN(pmem->size - pmem->pfn_pad - offset);
+ actual_nr = PHYS_PFN(
+ PAGE_ALIGN((first_bad - sector) << SECTOR_SHIFT));
+ dev_dbg(pmem->bb.dev, "start sector(%llu), nr_pages(%ld), first_bad(%llu), actual_nr(%ld)\n",
+ sector, nr_pages, first_bad, actual_nr);
+ if (actual_nr)
+ return actual_nr;
+ return 1;
}
/*
- * In case poison is found in the given range and DAX_RECOVERY flag is set,
- * recovery stride is set to kernel page size because the underlying driver and
- * firmware clear poison functions don't appear to handle large chunk (such as
- * 2MiB) reliably.
+ * If badblock is present but not in the range, limit known good range
+ * to the requested range.
*/
- actual_nr = PHYS_PFN(PAGE_ALIGN((first_bad - sector) << SECTOR_SHIFT));
- dev_dbg(pmem->bb.dev, "start sector(%llu), nr_pages(%ld), first_bad(%llu), actual_nr(%ld)\n",
- sector, nr_pages, first_bad, actual_nr);
- return (actual_nr == 0) ? 1 : actual_nr;
+ if (bb->count)
+ return nr_pages;
+ return PHYS_PFN(pmem->size - pmem->pfn_pad - offset);
}
static const struct block_device_operations pmem_fops = {
Thread overview: 41+ messages (newest: 2022-04-06 7:10 UTC)
2022-04-05 19:47 [PATCH v7 0/6] DAX poison recovery Jane Chu
2022-04-05 19:47 ` [PATCH v7 1/6] x86/mm: fix comment Jane Chu
2022-04-11 22:07 ` Dan Williams
2022-04-12 9:53 ` Borislav Petkov
2022-04-14 1:00 ` Jane Chu
2022-04-14 8:44 ` Borislav Petkov
2022-04-14 21:54 ` Jane Chu
2022-04-05 19:47 ` [PATCH v7 2/6] x86/mce: relocate set{clear}_mce_nospec() functions Jane Chu
2022-04-06 5:01 ` Christoph Hellwig
2022-04-11 22:20 ` Dan Williams
2022-04-14 0:56 ` Jane Chu
2022-04-05 19:47 ` [PATCH v7 3/6] mce: fix set_mce_nospec to always unmap the whole page Jane Chu
2022-04-06 5:02 ` Christoph Hellwig
2022-04-11 23:27 ` Dan Williams
2022-04-13 23:36 ` Jane Chu
2022-04-14 2:32 ` Dan Williams
2022-04-15 16:18 ` Jane Chu
2022-04-12 10:07 ` Borislav Petkov
2022-04-13 23:41 ` Jane Chu
2022-04-05 19:47 ` [PATCH v7 4/6] dax: add DAX_RECOVERY flag and .recovery_write dev_pgmap_ops Jane Chu
2022-04-06 5:19 ` Christoph Hellwig [this message]
2022-04-06 17:32 ` Jane Chu
2022-04-06 17:45 ` Jane Chu
2022-04-07 5:30 ` Christoph Hellwig
2022-04-11 23:55 ` Dan Williams
2022-04-14 0:48 ` Jane Chu
2022-04-14 0:47 ` Jane Chu
2022-04-12 0:08 ` Dan Williams
2022-04-14 0:50 ` Jane Chu
2022-04-12 4:57 ` Dan Williams
2022-04-12 5:02 ` Christoph Hellwig
2022-04-14 0:51 ` Jane Chu
2022-04-05 19:47 ` [PATCH v7 5/6] pmem: refactor pmem_clear_poison() Jane Chu
2022-04-06 5:04 ` Christoph Hellwig
2022-04-06 17:34 ` Jane Chu
2022-04-12 4:26 ` Dan Williams
2022-04-14 0:55 ` Jane Chu
2022-04-14 2:02 ` Dan Williams
2022-04-05 19:47 ` [PATCH v7 6/6] pmem: implement pmem_recovery_write() Jane Chu
2022-04-06 5:21 ` Christoph Hellwig
2022-04-06 17:33 ` Jane Chu