[PATCH 7/8] dm thin: use generic helper to set max_discard_sectors
From: Namjae Jeon @ 2013-04-13 13:39 UTC
  To: dwmw2, axboe, shli, Paul.Clements, npiggin, neilb, cjb, adrian.hunter
  Cc: linux-mtd, nbd-general, linux-raid, linux-mmc, linux-kernel,
	Namjae Jeon, Vivek Trivedi

From: Namjae Jeon <namjae.jeon@samsung.com>

It is better to use the blk_queue_max_discard_sectors() helper
function to set max_discard_sectors, as it enforces the upper
limit of UINT_MAX >> 9 on max_discard_sectors.
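
For reference, a minimal sketch of the clamping the helper is expected
to apply (illustrative only, not the verbatim implementation in
block/blk-settings.c):

	void blk_queue_max_discard_sectors(struct request_queue *q,
					   unsigned int max_discard_sectors)
	{
		/* Cap the limit so a single request can never describe
		 * more than UINT_MAX bytes of discarded data. */
		if (max_discard_sectors > (UINT_MAX >> 9))
			max_discard_sectors = UINT_MAX >> 9;
		q->limits.max_discard_sectors = max_discard_sectors;
	}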

A similar issue was reported for mmc at the link below:
https://lkml.org/lkml/2013/4/1/292

If multiple discard requests get merged and the merged request's
size exceeds 4GB, the merged request's __data_len field may
overflow.
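
The 4GB figure follows from the types involved: __data_len in struct
request is a 32-bit unsigned int counting bytes, so with 512-byte
sectors the largest safe limit is UINT_MAX >> 9 sectors. A standalone
illustration of the arithmetic (not kernel code):

	unsigned int max_sectors = UINT_MAX >> 9;        /* 8388607 */
	unsigned long long bytes =
		(unsigned long long)max_sectors << 9;    /* 4294966784 */
	/* One sector more and the byte count would exceed UINT_MAX,
	 * wrapping a 32-bit __data_len. */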

This patch fixes this issue.

Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com>
Signed-off-by: Vivek Trivedi <t.vivek@samsung.com>
---
 drivers/md/dm-thin.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 905b75f..237295a 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -2513,7 +2513,8 @@ static void set_discard_limits(struct pool_c *pt, struct queue_limits *limits)
 	struct pool *pool = pt->pool;
 	struct queue_limits *data_limits;
 
-	limits->max_discard_sectors = pool->sectors_per_block;
+	blk_queue_max_discard_sectors(bdev_get_queue(pt->data_dev->bdev),
+					pool->sectors_per_block);
 
 	/*
 	 * discard_granularity is just a hint, and not enforced.
-- 
1.7.9.5

