From: "Darrick J. Wong" <djwong@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Dan Williams <dan.j.williams@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>,
	Dave Jiang <dave.jiang@intel.com>,
	Mike Snitzer <snitzer@redhat.com>,
	Matthew Wilcox <willy@infradead.org>,
	linux-xfs@vger.kernel.org, nvdimm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org
Subject: [PATCH] dax: remove silly single-page limitation in dax_zero_page_range
Date: Wed, 22 Sep 2021 18:09:15 -0700
Message-ID: <20210923010915.GQ570615@magnolia>

From: Darrick J. Wong <djwong@kernel.org>

It's totally silly that the dax zero_page_range implementations are
required to accept a page count, yet one of the four implementations
silently ignores that count and the wrapper itself returns -EIO if you
ask it to zero more than one page.

Fix the nvdimm implementation to loop over the page count and remove the
artificial limitation.
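
For illustration, here is a minimal sketch of a caller that can now
zero a multi-page range with a single call; the helper name and the
page-aligned pos/len are assumptions for this example, not part of the
patch:

	#include <linux/dax.h>
	#include <linux/mm.h>	/* PAGE_SHIFT */

	/* Hypothetical helper; assumes pos and len are page-aligned. */
	static int zero_dev_range(struct dax_device *dax_dev, loff_t pos,
				  size_t len)
	{
		pgoff_t pgoff = pos >> PAGE_SHIFT;
		size_t nr_pages = len >> PAGE_SHIFT;

		/*
		 * Before this patch, the wrapper rejected any call with
		 * nr_pages != 1 by returning -EIO.
		 */
		return dax_zero_page_range(dax_dev, pgoff, nr_pages);
	}

With the nvdimm loop below, each page in such a range is zeroed by one
pmem_do_write() of ZERO_PAGE(0).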

Signed-off-by: Darrick J. Wong <djwong@kernel.org>
---
 drivers/dax/super.c   |    7 -------
 drivers/nvdimm/pmem.c |   14 +++++++++++---
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/dax/super.c b/drivers/dax/super.c
index fc89e91beea7..ca61a01f9ccd 100644
--- a/drivers/dax/super.c
+++ b/drivers/dax/super.c
@@ -353,13 +353,6 @@ int dax_zero_page_range(struct dax_device *dax_dev, pgoff_t pgoff,
 {
 	if (!dax_alive(dax_dev))
 		return -ENXIO;
-	/*
-	 * There are no callers that want to zero more than one page as of now.
-	 * Once users are there, this check can be removed after the
-	 * device mapper code has been updated to split ranges across targets.
-	 */
-	if (nr_pages != 1)
-		return -EIO;
 
 	return dax_dev->ops->zero_page_range(dax_dev, pgoff, nr_pages);
 }
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 72de88ff0d30..3ef40bf74168 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -288,10 +288,18 @@ static int pmem_dax_zero_page_range(struct dax_device *dax_dev, pgoff_t pgoff,
 				    size_t nr_pages)
 {
 	struct pmem_device *pmem = dax_get_private(dax_dev);
+	int ret = 0;
 
-	return blk_status_to_errno(pmem_do_write(pmem, ZERO_PAGE(0), 0,
-				   PFN_PHYS(pgoff) >> SECTOR_SHIFT,
-				   PAGE_SIZE));
+	for (; nr_pages > 0 && ret == 0; pgoff++, nr_pages--) {
+		blk_status_t status;
+
+		status = pmem_do_write(pmem, ZERO_PAGE(0), 0,
+				       PFN_PHYS(pgoff) >> SECTOR_SHIFT,
+				       PAGE_SIZE);
+		ret = blk_status_to_errno(status);
+	}
+
+	return ret;
 }
 
 static long pmem_dax_direct_access(struct dax_device *dax_dev,
