From: Vivek Goyal <vgoyal@redhat.com>
To: linux-fsdevel@vger.kernel.org, linux-nvdimm@lists.01.org, hch@infradead.org, dan.j.williams@intel.com
Cc: dm-devel@redhat.com, Christoph Hellwig <hch@lst.de>
Subject: [PATCH v5 1/8] pmem: Add functions for reading/writing page to/from pmem
Date: Tue, 18 Feb 2020 16:48:34 -0500
Message-ID: <20200218214841.10076-2-vgoyal@redhat.com> (raw)
In-Reply-To: <20200218214841.10076-1-vgoyal@redhat.com>

This splits pmem_do_bvec() into pmem_do_read() and pmem_do_write().
pmem_do_write() will be used by the pmem zero_page_range() implementation
as well, hence the shared code.

Suggested-by: Christoph Hellwig <hch@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 drivers/nvdimm/pmem.c | 86 +++++++++++++++++++++++++------------------
 1 file changed, 50 insertions(+), 36 deletions(-)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 4eae441f86c9..075b11682192 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -136,9 +136,25 @@ static blk_status_t read_pmem(struct page *page, unsigned int off,
 	return BLK_STS_OK;
 }
 
-static blk_status_t pmem_do_bvec(struct pmem_device *pmem, struct page *page,
-			unsigned int len, unsigned int off, unsigned int op,
-			sector_t sector)
+static blk_status_t pmem_do_read(struct pmem_device *pmem,
+			struct page *page, unsigned int page_off,
+			sector_t sector, unsigned int len)
+{
+	blk_status_t rc;
+	phys_addr_t pmem_off = sector * 512 + pmem->data_offset;
+	void *pmem_addr = pmem->virt_addr + pmem_off;
+
+	if (unlikely(is_bad_pmem(&pmem->bb, sector, len)))
+		return BLK_STS_IOERR;
+
+	rc = read_pmem(page, page_off, pmem_addr, len);
+	flush_dcache_page(page);
+	return rc;
+}
+
+static blk_status_t pmem_do_write(struct pmem_device *pmem,
+			struct page *page, unsigned int page_off,
+			sector_t sector, unsigned int len)
 {
 	blk_status_t rc = BLK_STS_OK;
 	bool bad_pmem = false;
@@ -148,34 +164,25 @@ static blk_status_t pmem_do_bvec(struct pmem_device *pmem, struct page *page,
 	if (unlikely(is_bad_pmem(&pmem->bb, sector, len)))
 		bad_pmem = true;
 
-	if (!op_is_write(op)) {
-		if (unlikely(bad_pmem))
-			rc = BLK_STS_IOERR;
-		else {
-			rc = read_pmem(page, off, pmem_addr, len);
-			flush_dcache_page(page);
-		}
-	} else {
-		/*
-		 * Note that we write the data both before and after
-		 * clearing poison.  The write before clear poison
-		 * handles situations where the latest written data is
-		 * preserved and the clear poison operation simply marks
-		 * the address range as valid without changing the data.
-		 * In this case application software can assume that an
-		 * interrupted write will either return the new good
-		 * data or an error.
-		 *
-		 * However, if pmem_clear_poison() leaves the data in an
-		 * indeterminate state we need to perform the write
-		 * after clear poison.
-		 */
-		flush_dcache_page(page);
-		write_pmem(pmem_addr, page, off, len);
-		if (unlikely(bad_pmem)) {
-			rc = pmem_clear_poison(pmem, pmem_off, len);
-			write_pmem(pmem_addr, page, off, len);
-		}
+	/*
+	 * Note that we write the data both before and after
+	 * clearing poison.  The write before clear poison
+	 * handles situations where the latest written data is
+	 * preserved and the clear poison operation simply marks
+	 * the address range as valid without changing the data.
+	 * In this case application software can assume that an
+	 * interrupted write will either return the new good
+	 * data or an error.
+	 *
+	 * However, if pmem_clear_poison() leaves the data in an
+	 * indeterminate state we need to perform the write
+	 * after clear poison.
+	 */
+	flush_dcache_page(page);
+	write_pmem(pmem_addr, page, page_off, len);
+	if (unlikely(bad_pmem)) {
+		rc = pmem_clear_poison(pmem, pmem_off, len);
+		write_pmem(pmem_addr, page, page_off, len);
 	}
 
 	return rc;
@@ -197,8 +204,12 @@ static blk_qc_t pmem_make_request(struct request_queue *q, struct bio *bio)
 		do_acct = nd_iostat_start(bio, &start);
 	bio_for_each_segment(bvec, bio, iter) {
-		rc = pmem_do_bvec(pmem, bvec.bv_page, bvec.bv_len,
-				bvec.bv_offset, bio_op(bio), iter.bi_sector);
+		if (op_is_write(bio_op(bio)))
+			rc = pmem_do_write(pmem, bvec.bv_page, bvec.bv_offset,
+					iter.bi_sector, bvec.bv_len);
+		else
+			rc = pmem_do_read(pmem, bvec.bv_page, bvec.bv_offset,
+					iter.bi_sector, bvec.bv_len);
 		if (rc) {
 			bio->bi_status = rc;
 			break;
@@ -223,9 +234,12 @@ static int pmem_rw_page(struct block_device *bdev, sector_t sector,
 	struct pmem_device *pmem = bdev->bd_queue->queuedata;
 	blk_status_t rc;
 
-	rc = pmem_do_bvec(pmem, page, hpage_nr_pages(page) * PAGE_SIZE,
-			  0, op, sector);
-
+	if (op_is_write(op))
+		rc = pmem_do_write(pmem, page, 0, sector,
+				   hpage_nr_pages(page) * PAGE_SIZE);
+	else
+		rc = pmem_do_read(pmem, page, 0, sector,
+				   hpage_nr_pages(page) * PAGE_SIZE);
 	/*
 	 * The ->rw_page interface is subtle and tricky. The core
 	 * retries on any error, so we can only invoke page_endio() in
-- 
2.20.1
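For readers following the split, here is a minimal user-space sketch of the semantics the two new helpers implement: a read fails up front on a poisoned range, while a write goes through the write / clear-poison / write-again sequence described in the code comment. This is not kernel code; every name ending in `_model` is an invented stand-in for the corresponding kernel helper (read_pmem(), write_pmem(), pmem_clear_poison(), is_bad_pmem()), a plain byte array stands in for the persistent-memory mapping, and a single recorded sector stands in for the badblocks list.

```c
/*
 * Hypothetical user-space model of pmem_do_read()/pmem_do_write().
 * All *_model identifiers are invented for illustration only.
 */
#include <assert.h>
#include <string.h>

#define SECTOR_SIZE 512
#define NSECTORS    4

static unsigned char pmem_media[NSECTORS * SECTOR_SIZE]; /* fake pmem */
static int bad_sector = -1;                              /* "poison"  */

static int is_bad_pmem_model(int sector)
{
	return sector == bad_sector;
}

/* Like pmem_do_read(): a poisoned range fails before any data is copied. */
static int pmem_do_read_model(unsigned char *dst, int sector, unsigned int len)
{
	if (is_bad_pmem_model(sector))
		return -1;	/* BLK_STS_IOERR analogue */
	memcpy(dst, pmem_media + sector * SECTOR_SIZE, len);
	return 0;
}

/*
 * Like pmem_do_write(): write, clear poison, then write again, so the
 * range ends up holding the new data whether clearing poison preserves
 * the existing bytes or leaves them indeterminate.
 */
static int pmem_do_write_model(const unsigned char *src, int sector,
			       unsigned int len)
{
	memcpy(pmem_media + sector * SECTOR_SIZE, src, len);
	if (is_bad_pmem_model(sector)) {
		bad_sector = -1;	/* pmem_clear_poison() analogue */
		memcpy(pmem_media + sector * SECTOR_SIZE, src, len);
	}
	return 0;
}
```

As in the patch, a write to a poisoned sector both repairs the poison state and lands the new data, so a subsequent read of that sector succeeds and returns what was written.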