From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: snitzer@redhat.com, david@fromorbit.com, dm-devel@redhat.com,
xfs@oss.sgi.com, hch@lst.de, martin.petersen@oracle.com,
axboe@kernel.dk
Subject: [PATCH v2 2/3] block: reorganize rounding of max_discard_sectors
Date: Mon, 2 Jul 2012 15:20:24 +0200
Message-ID: <1341235225-27551-3-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1341235225-27551-1-git-send-email-pbonzini@redhat.com>

Mostly a preparation for the next patch.

In principle this fixes an infinite loop: if max_discard_sectors is smaller
than the granularity, rounding it down yields zero, and we now return
-EOPNOTSUPP in that case instead of looping forever later on.  That really
shouldn't happen in practice, though.
Cc: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
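As a standalone illustration of the new rounding, here is a minimal sketch
with hypothetical values; max() and round_down() below are plain-C stand-ins
for the kernel macros (and, like the kernel's round_down(), assume a
power-of-two step):

#include <stdio.h>

/* Plain-C stand-ins for the kernel macros used by the patch. */
#define max(a, b)        ((a) > (b) ? (a) : (b))
#define round_down(x, y) ((x) & ~((y) - 1))

int main(void)
{
	/* Hypothetical queue limits: granularity in bytes, limit in sectors. */
	unsigned int discard_granularity = 4096;  /* 0 would mean "unknown" */
	unsigned int max_discard_sectors = 65535;

	/* Zero-sector (unknown) and one-sector granularities are the same;
	 * ">> 9" converts bytes to 512-byte sectors. */
	unsigned int granularity = max(discard_granularity >> 9, 1U);

	max_discard_sectors = round_down(max_discard_sectors, granularity);

	/* Prints: granularity=8 sectors, max_discard_sectors=65528 */
	printf("granularity=%u sectors, max_discard_sectors=%u\n",
	       granularity, max_discard_sectors);
	return 0;
}

With a discard_granularity of 0, granularity clamps to one sector and the
round_down() becomes a no-op, which is the point of the max().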
block/blk-lib.c | 9 +++++----
1 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/block/blk-lib.c b/block/blk-lib.c
index 2b461b4..16b06f6 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -44,6 +44,7 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 	struct request_queue *q = bdev_get_queue(bdev);
 	int type = REQ_WRITE | REQ_DISCARD;
 	unsigned int max_discard_sectors;
+	unsigned int granularity;
 	struct bio_batch bb;
 	struct bio *bio;
 	int ret = 0;
@@ -54,18 +55,18 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 	if (!blk_queue_discard(q))
 		return -EOPNOTSUPP;
 
+	/* Zero-sector (unknown) and one-sector granularities are the same. */
+	granularity = max(q->limits.discard_granularity >> 9, 1U);
+
 	/*
 	 * Ensure that max_discard_sectors is of the proper
 	 * granularity
 	 */
 	max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9);
+	max_discard_sectors = round_down(max_discard_sectors, granularity);
 	if (unlikely(!max_discard_sectors)) {
 		/* Avoid infinite loop below. Being cautious never hurts. */
 		return -EOPNOTSUPP;
-	} else if (q->limits.discard_granularity) {
-		unsigned int disc_sects = q->limits.discard_granularity >> 9;
-
-		max_discard_sectors &= ~(disc_sects - 1);
 	}
 
 	if (flags & BLKDEV_DISCARD_SECURE) {
--
1.7.1
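For contrast, a minimal sketch (again with hypothetical values, in plain C)
of the hazard the reordering addresses: the old code checked for a zero
limit before the mask-based rounding, so a limit smaller than the
granularity was rounded down to zero only afterwards:

#include <stdio.h>

int main(void)
{
	unsigned int disc_sects = 128;          /* granularity, in sectors */
	unsigned int max_discard_sectors = 100; /* smaller than granularity */

	/* Old order: the zero check came first, and passed... */
	if (!max_discard_sectors)
		return 0;

	/* ...then the mask rounded the limit down to zero... */
	max_discard_sectors &= ~(disc_sects - 1);

	/* ...so a splitting loop that advances by max_discard_sectors per
	 * bio would never make progress.  The patch rounds first and then
	 * checks for zero, returning -EOPNOTSUPP instead. */
	printf("max_discard_sectors=%u\n", max_discard_sectors); /* 0 */
	return 0;
}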