* [dm-devel] [PATCH] dm-raid: set discard_granularity non-zero if possible
@ 2020-12-16 19:53 Stephan Bärwolf
  2021-01-04 19:38 ` [dm-devel] " Mike Snitzer
  0 siblings, 1 reply; 2+ messages in thread
From: Stephan Bärwolf @ 2020-12-16 19:53 UTC (permalink / raw)
  To: dm-devel

[-- Attachment #1: Type: text/plain, Size: 1553 bytes --]

Hi

I hope this address is the right place for this patch.
It is supposed to fix the triggering of the block/blk-lib.c:51 WARN_ON_ONCE(..) when using LVM2 raid1 with SSD PVs.
Since commit b35fd7422c2f8e04496f5a770bd4e1a205414b3f, and without this patch, there are tons of printks logging "Error: discard_granularity is 0." to kmsg.
Also, no discard/TRIM happens anymore...

This is a rough patch for the WARNING issue

"block/blk-lib.c:51 __blkdev_issue_discard+0x1f6/0x250"
[...] "Error: discard_granularity is 0." [...]
introduced in commit b35fd7422c2f8e04496f5a770bd4e1a205414b3f
("block: check queue's limits.discard_granularity in __blkdev_issue_discard()")

in conjunction with LVM2 raid1 volumes on discardable (SSD) backing.
It seems that until now, LVM raid1 reported "discard_granularity" as 0,
as well as "max_discard_sectors" as 0 (see "lsblk --discard").

The idea here is to fix the issue by calculating "max_discard_sectors"
as the minimum over all involved block devices. (We use the meta-data
devices for this.)
Calculating "discard_granularity" would, in general, require the lcm
(least common multiple) of the discard granularities of all involved
block devices, rounded up to the next power of 2.

However, since all "discard_granularity" values are powers of 2, this
algorithm simplifies to determining the maximum and filtering out the
"0" cases.

Signed-off-by: Stephan Baerwolf <stephan@matrixstorm.com>
---
drivers/md/dm-raid.c | 32 ++++++++++++++++++++++++++++++--
1 file changed, 30 insertions(+), 2 deletions(-)




[-- Attachment #2: 0001-dm-raid-set-discard_granularity-non-zero-if-possible.patch --]
[-- Type: text/x-patch, Size: 1694 bytes --]

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 8d2b835d7a10..4c769fd93ced 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -3734,8 +3734,36 @@ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
 	 * RAID0/4/5/6 don't and process large discard bios properly.
 	 */
 	if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
-		limits->discard_granularity = chunk_size_bytes;
-		limits->max_discard_sectors = rs->md.chunk_sectors;
+		/* HACK */
+		if (chunk_size_bytes == 0) {
+			unsigned int i, chunk_sectors = (UINT_MAX >> SECTOR_SHIFT);
+			struct request_queue *q = NULL;
+
+			DMINFO("chunk_size is 0 for raid1 - preventing issue with TRIM");
+
+			for (i = 0; i < rs->raid_disks; i++) {
+				q = bdev_get_queue(rs->dev[i].meta_dev->bdev);
+				if (chunk_sectors > q->limits.max_discard_sectors) {
+					chunk_sectors = q->limits.max_discard_sectors;
+				}
+				if (chunk_size_bytes < q->limits.discard_granularity) {
+					chunk_size_bytes = q->limits.discard_granularity;
+				}
+
+				/* lcm(x,y,...,0) = 0 */
+				if (q->limits.discard_granularity == 0) {
+					chunk_size_bytes = 0;
+					break;
+				}
+			}
+
+			limits->discard_granularity = chunk_size_bytes;
+			limits->max_discard_sectors = chunk_sectors;
+		/* end of HACK (but not of the if) */
+		} else {
+			limits->discard_granularity = chunk_size_bytes;
+			limits->max_discard_sectors = rs->md.chunk_sectors;
+		}
 	}
 }
 


[-- Attachment #3: Type: text/plain, Size: 93 bytes --]

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


* Re: [dm-devel] dm-raid: set discard_granularity non-zero if possible
  2020-12-16 19:53 [dm-devel] [PATCH] dm-raid: set discard_granularity non-zero if possible Stephan Bärwolf
@ 2021-01-04 19:38 ` Mike Snitzer
  0 siblings, 0 replies; 2+ messages in thread
From: Mike Snitzer @ 2021-01-04 19:38 UTC (permalink / raw)
  To: Stephan Bärwolf; +Cc: dm-devel

On Wed, Dec 16 2020 at  2:53pm -0500,
Stephan Bärwolf <stephan@matrixstorm.com> wrote:

> Hi
> 
> I hope this address is the right place for this patch.
> It is supposed to fix the triggering of the block/blk-lib.c:51 WARN_ON_ONCE(..) when using LVM2 raid1 with SSD PVs.
> Since commit b35fd7422c2f8e04496f5a770bd4e1a205414b3f, and without this patch, there are tons of printks logging "Error: discard_granularity is 0." to kmsg.
> Also, no discard/TRIM happens anymore...
> 
> This is a rough patch for the WARNING issue
> 
> "block/blk-lib.c:51 __blkdev_issue_discard+0x1f6/0x250"
> [...] "Error: discard_granularity is 0." [...]
> introduced in commit b35fd7422c2f8e04496f5a770bd4e1a205414b3f
> ("block: check queue's limits.discard_granularity in __blkdev_issue_discard()")
> 
> in conjunction with LVM2 raid1 volumes on discardable (SSD) backing.
> It seems that until now, LVM raid1 reported "discard_granularity" as 0,
> as well as "max_discard_sectors" as 0 (see "lsblk --discard").
> 
> The idea here is to fix the issue by calculating "max_discard_sectors"
> as the minimum over all involved block devices. (We use the meta-data
> devices for this.)
> Calculating "discard_granularity" would, in general, require the lcm
> (least common multiple) of the discard granularities of all involved
> block devices, rounded up to the next power of 2.
> 
> However, since all "discard_granularity" values are powers of 2, this
> algorithm simplifies to determining the maximum and filtering out the
> "0" cases.
> 
> Signed-off-by: Stephan Baerwolf <stephan@matrixstorm.com>
> ---
> drivers/md/dm-raid.c | 32 ++++++++++++++++++++++++++++++--
> 1 file changed, 30 insertions(+), 2 deletions(-)
> 
> 
> 

> diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
> index 8d2b835d7a10..4c769fd93ced 100644
> --- a/drivers/md/dm-raid.c
> +++ b/drivers/md/dm-raid.c
> @@ -3734,8 +3734,36 @@ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
>  	 * RAID0/4/5/6 don't and process large discard bios properly.
>  	 */
>  	if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
> -		limits->discard_granularity = chunk_size_bytes;
> -		limits->max_discard_sectors = rs->md.chunk_sectors;

The above should be: if (rs_is_raid0(rs) || rs_is_raid10(rs)) {

And this was previously fixed with commit e0910c8e4f87bb9 but later
reverted due to various late MD discard reverts at the end of the 5.10
release.

So all said, I think the proper fix (without all sorts of
open-coding to get limits to properly stack) is to change
raid_io_hints()'s rs_is_raid1() call to rs_is_raid0().

I'll get a fix queued up.

Mike



