* [patch 1/1] raid0: prevent unaligned discard requests
From: Eivind Sarto @ 2014-07-16 19:04 UTC
  To: linux-raid, NeilBrown


This is a simple patch that prevents blkdev_issue_discard() from issuing misaligned REQ_DISCARD requests to raid0.
Currently, raid0 only sets max_discard_sectors equal to the chunk size.  That only breaks a larger discard request
into multiple chunk-sized requests; it does not align those requests.  If the original (big) request is not
chunk-aligned, blkdev_issue_discard() will issue misaligned chunk-sized discards to raid0, and raid0 will have to
split them all.  The patch description below includes example block traces that illustrate the problem; a simplified
sketch of the splitting behaviour follows this note.
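
For illustration only, here is a minimal userspace sketch of the splitting behaviour.  split_discard() and its
trimming math are simplified assumptions made for this example, not the actual block/blk-lib.c implementation;
fed the same inputs as the traces in the patch description, it reproduces the before/after request boundaries.

#include <stdio.h>

/* All values are in 512-byte sectors. */
static void split_discard(unsigned long long sector,
			  unsigned long long nr_sects,
			  unsigned long long max_sectors,
			  unsigned long long granularity)
{
	while (nr_sects) {
		unsigned long long req_sects =
			nr_sects < max_sectors ? nr_sects : max_sectors;
		unsigned long long end = sector + req_sects;
		unsigned long long rem;

		/*
		 * With a discard granularity set, trim the request so it
		 * ends on a granularity boundary; every later request then
		 * starts chunk-aligned.
		 */
		if (granularity && req_sects < nr_sects) {
			rem = end % granularity;
			if (rem && end - rem > sector)
				req_sects = end - rem - sector;
		}

		printf("  Q   D %llu + %llu\n", sector, req_sects);
		sector += req_sects;
		nr_sects -= req_sects;
	}
}

int main(void)
{
	/* 32k chunk = 64 sectors; "blkdiscard -o 8192 -l 131072"
	 * translates to start sector 16, length 256 sectors. */
	printf("granularity unset (before the patch):\n");
	split_discard(16, 256, 64, 0);
	printf("granularity = chunk size (after the patch):\n");
	split_discard(16, 256, 64, 64);
	return 0;
}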




Author: Eivind Sarto <esarto@fusionio.com>
Date: Tue Jun  3 14:07:29 2014

raid0: prevent unaligned discard requests

Raid0 sets max_discard_sectors equal to chunk_size.  However, that does not
make blkdev_issue_discard() issue chunk-aligned REQ_DISCARD requests to raid0.
If blkdev_issue_discard() is called with a non-chunk-aligned sector, it will
break the request into multiple chunk-sized bio requests, but all the bio
requests to raid0 will stay misaligned (and raid0 will need to split them all).

This patch sets the device queue's discard granularity to the chunk size.
It makes blkdev_issue_discard() break REQ_DISCARD requests into chunk-aligned
requests to raid0 after the first partial/misaligned request has been issued.

Here is a trace of a discard request to a raid0 with a 32k chunk, before and 
after the patch.
Before: # blkdiscard -v -l 131072 -o 8192 /dev/md0
  9,0    0        1     0.000000000 16681  Q   D 16 + 64 [blkdiscard]
  9,0    0        2     0.000001780 16681  X   D 16 / 64 [blkdiscard]
  9,0    0        3     0.000040254 16681  Q   D 80 + 64 [blkdiscard]
  9,0    0        4     0.000040726 16681  X   D 80 / 128 [blkdiscard]
  9,0    0        5     0.000060695 16681  Q   D 144 + 64 [blkdiscard]
  9,0    0        6     0.000060971 16681  X   D 144 / 192 [blkdiscard]
  9,0    0        7     0.000072825 16681  Q   D 208 + 64 [blkdiscard]
  9,0    0        8     0.000073059 16681  X   D 208 / 256 [blkdiscard]
After: # blkdiscard -v -l 131072 -o 8192 /dev/md0
  9,0    0        1     3.681411377 13326  Q   D 16 + 48 [blkdiscard]
  9,0    0        2     3.681441401 13326  Q   D 64 + 64 [blkdiscard]
  9,0    0        3     3.681450093 13326  Q   D 128 + 64 [blkdiscard]
  9,0    0        4     3.681455989 13326  Q   D 192 + 64 [blkdiscard]
  9,0    0        5     3.681474018 13326  Q   D 256 + 16 [blkdiscard]
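
A quick check of the numbers: with a 32k chunk (64 sectors), "blkdiscard -o 8192 -l 131072" discards 256 sectors
starting at sector 16.  Before the patch, every 64-sector request starts 16 sectors into a chunk and has to be
split (the X events).  After the patch, only the first request is partial (48 sectors, up to the chunk boundary
at sector 64); the remaining requests are chunk-aligned: 48 + 64 + 64 + 64 + 16 = 256 sectors.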

Signed-off-by: Eivind Sarto <esarto@fusionio.com>

--- a/drivers/md/raid0.c	2014-06-02 11:26:29.000000000 -0700
+++ b/drivers/md/raid0.c	2014-06-03 11:56:05.000000000 -0700
@@ -440,6 +440,8 @@ static int raid0_run(struct mddev *mddev
 	blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors);
 	blk_queue_max_write_same_sectors(mddev->queue, mddev->chunk_sectors);
 	blk_queue_max_discard_sectors(mddev->queue, mddev->chunk_sectors);
+	/* prevent unaligned discard requests */
+	mddev->queue->limits.discard_granularity = mddev->chunk_sectors << 9;
 
 	/* if private is not null, we are here after takeover */
 	if (mddev->private == NULL) {
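
Once the patch is applied, the new limit should be visible from userspace; the queue's discard_granularity
sysfs attribute reports bytes, so for a 32k chunk one would expect (device name assumed for illustration):

  # cat /sys/block/md0/queue/discard_granularity
  32768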
