linux-raid.vger.kernel.org archive mirror
* [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request
@ 2021-02-04  7:50 Xiao Ni
  2021-02-04  7:50 ` [PATCH V2 1/5] md: add md_submit_discard_bio() for submitting discard bio Xiao Ni
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Xiao Ni @ 2021-02-04  7:50 UTC (permalink / raw)
  To: songliubraving
  Cc: linux-raid, matthew.ruffell, colyli, guoqing.jiang, ncroxon

Hi all

Currently, running mkfs on a raid10 array built from SSD/NVMe disks takes a
long time. This patch set resolves that problem.

An earlier version of this patch set was reverted because of a data corruption
problem. This version fixes that problem. The root cause of the corruption was
an incorrect calculation of the start address on the near-copy disks.

We now handle discard requests for raid10 in a similar way to raid0. Because
the discard region is very large, we can calculate the start/end address for
each disk and then submit the discard request to each disk. Unlike raid0,
however, raid10 has copies. For the near layout, if the discard request is not
aligned with the chunk size, we calculate a start_disk_offset. Previously,
start_disk_offset was only applied to the first disk, but it should be applied
to the near-copy disks as well.

[  789.709501] discard bio start : 70968, size : 191176
[  789.709507] first stripe index 69, start disk index 0, start disk offset 70968
[  789.709509] last stripe index 256, end disk index 0, end disk offset 262144
[  789.709511] disk 0, dev start : 70968, dev end : 262144
[  789.709515] disk 1, dev start : 70656, dev end : 262144

For example, this test case has 2 near copies. The start_disk_offset for the
first disk is 70968, and the same offset should be used for the second disk.
Instead, the reverted version used the start address of the chunk (70656), so
it discarded a larger region than requested. This version simply splits off
the unaligned part at stripe-size granularity.
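
As an illustration only (not the kernel code), the address math for this
example can be reproduced in a small user-space program. The chunk size of
1024 sectors and the 2-disk near-2 layout are assumptions inferred from the
log above; the variable names merely mirror those used in the patch:

#include <stdio.h>

int main(void)
{
	/* values from the log above; chunk size and layout are assumed */
	unsigned long long bio_start = 70968;
	int chunk_shift = 10;				/* 1024-sector chunks */
	unsigned long long chunk_mask = (1ULL << chunk_shift) - 1;
	int raid_disks = 2, near_copies = 2;		/* near-2 over 2 disks */

	unsigned long long chunk = (bio_start >> chunk_shift) * near_copies;
	unsigned long long first_stripe_index = chunk / raid_disks;
	int start_disk_index = chunk % raid_disks;
	unsigned long long start_disk_offset =
		(bio_start & chunk_mask) + (first_stripe_index << chunk_shift);

	/* prints: stripe 69, disk 0, offset 70968 -- matching the log */
	printf("first stripe %llu, start disk %d, start offset %llu\n",
	       first_stripe_index, start_disk_index, start_disk_offset);

	/*
	 * Both near copies hold the same data, so the second disk must also
	 * start discarding at 70968.  The reverted version started it at the
	 * chunk boundary instead, discarding 312 extra live sectors:
	 */
	printf("chunk start wrongly used for the copy: %llu\n",
	       first_stripe_index << chunk_shift);	/* 70656 */
	return 0;
}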

This version also fixes another problem: the calculation of stripe_size was
wrong in the reverted version.

V2: Fix problems pointed out by Christoph Hellwig.

Xiao Ni (5):
  md: add md_submit_discard_bio() for submitting discard bio
  md/raid10: extend r10bio devs to raid disks
  md/raid10: pull the code that wait for blocked dev into one function
  md/raid10: improve raid10 discard request
  md/raid10: improve discard request for far layout

 drivers/md/md.c     |  20 +++
 drivers/md/md.h     |   2 +
 drivers/md/raid0.c  |  14 +-
 drivers/md/raid10.c | 434 +++++++++++++++++++++++++++++++++++++++++++++-------
 drivers/md/raid10.h |   1 +
 5 files changed, 402 insertions(+), 69 deletions(-)

-- 
2.7.5


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH V2 1/5] md: add md_submit_discard_bio() for submitting discard bio
  2021-02-04  7:50 [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request Xiao Ni
@ 2021-02-04  7:50 ` Xiao Ni
  2021-02-04  7:50 ` [PATCH V2 2/5] md/raid10: extend r10bio devs to raid disks Xiao Ni
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Xiao Ni @ 2021-02-04  7:50 UTC (permalink / raw)
  To: songliubraving
  Cc: linux-raid, matthew.ruffell, colyli, guoqing.jiang, ncroxon

Move this logic from raid0.c to md.c so that it can also be used in raid10.c.

Reviewed-by: Coly Li <colyli@suse.de>
Reviewed-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Xiao Ni <xni@redhat.com>
---
 drivers/md/md.c    | 20 ++++++++++++++++++++
 drivers/md/md.h    |  2 ++
 drivers/md/raid0.c | 14 ++------------
 3 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 7d1bb24..f18cd0d 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8590,6 +8590,26 @@ void md_write_end(struct mddev *mddev)
 
 EXPORT_SYMBOL(md_write_end);
 
+/* This is used by raid0 and raid10 */
+void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
+			struct bio *bio, sector_t start, sector_t size)
+{
+	struct bio *discard_bio = NULL;
+
+	if (__blkdev_issue_discard(rdev->bdev, start, size, GFP_NOIO, 0,
+			&discard_bio) || !discard_bio)
+		return;
+
+	bio_chain(discard_bio, bio);
+	bio_clone_blkg_association(discard_bio, bio);
+	if (mddev->gendisk)
+		trace_block_bio_remap(discard_bio,
+				disk_devt(mddev->gendisk),
+				bio->bi_iter.bi_sector);
+	submit_bio_noacct(discard_bio);
+}
+EXPORT_SYMBOL_GPL(md_submit_discard_bio);
+
 /* md_allow_write(mddev)
  * Calling this ensures that the array is marked 'active' so that writes
  * may proceed without blocking.  It is important to call this before
diff --git a/drivers/md/md.h b/drivers/md/md.h
index f13290c..501fdcc57 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -713,6 +713,8 @@ extern void md_write_end(struct mddev *mddev);
 extern void md_done_sync(struct mddev *mddev, int blocks, int ok);
 extern void md_error(struct mddev *mddev, struct md_rdev *rdev);
 extern void md_finish_reshape(struct mddev *mddev);
+void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
+			struct bio *bio, sector_t start, sector_t size);
 
 extern bool __must_check md_flush_request(struct mddev *mddev, struct bio *bio);
 extern void md_super_write(struct mddev *mddev, struct md_rdev *rdev,
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 67f157f..e5d7411 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -477,7 +477,6 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
 
 	for (disk = 0; disk < zone->nb_dev; disk++) {
 		sector_t dev_start, dev_end;
-		struct bio *discard_bio = NULL;
 		struct md_rdev *rdev;
 
 		if (disk < start_disk_index)
@@ -500,18 +499,9 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
 
 		rdev = conf->devlist[(zone - conf->strip_zone) *
 			conf->strip_zone[0].nb_dev + disk];
-		if (__blkdev_issue_discard(rdev->bdev,
+		md_submit_discard_bio(mddev, rdev, bio,
 			dev_start + zone->dev_start + rdev->data_offset,
-			dev_end - dev_start, GFP_NOIO, 0, &discard_bio) ||
-		    !discard_bio)
-			continue;
-		bio_chain(discard_bio, bio);
-		bio_clone_blkg_association(discard_bio, bio);
-		if (mddev->gendisk)
-			trace_block_bio_remap(discard_bio,
-				disk_devt(mddev->gendisk),
-				bio->bi_iter.bi_sector);
-		submit_bio_noacct(discard_bio);
+			dev_end - dev_start);
 	}
 	bio_endio(bio);
 }
-- 
2.7.5


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V2 2/5] md/raid10: extend r10bio devs to raid disks
  2021-02-04  7:50 [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request Xiao Ni
  2021-02-04  7:50 ` [PATCH V2 1/5] md: add md_submit_discard_bio() for submitting discard bio Xiao Ni
@ 2021-02-04  7:50 ` Xiao Ni
  2021-02-04  7:50 ` [PATCH V2 3/5] md/raid10: pull the code that wait for blocked dev into one function Xiao Ni
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Xiao Ni @ 2021-02-04  7:50 UTC (permalink / raw)
  To: songliubraving
  Cc: linux-raid, matthew.ruffell, colyli, guoqing.jiang, ncroxon

Currently r10bio->devs[conf->copies] is allocated. A discard bio needs to be
submitted to all member disks and it needs to use an r10bio, so extend the
allocation to r10bio->devs[geo.raid_disks].

Reviewed-by: Coly Li <colyli@suse.de>
Signed-off-by: Xiao Ni <xni@redhat.com>
---
 drivers/md/raid10.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index be8f14a..0a734de 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -91,7 +91,7 @@ static inline struct r10bio *get_resync_r10bio(struct bio *bio)
 static void * r10bio_pool_alloc(gfp_t gfp_flags, void *data)
 {
 	struct r10conf *conf = data;
-	int size = offsetof(struct r10bio, devs[conf->copies]);
+	int size = offsetof(struct r10bio, devs[conf->geo.raid_disks]);
 
 	/* allocate a r10bio with room for raid_disks entries in the
 	 * bios array */
@@ -238,7 +238,7 @@ static void put_all_bios(struct r10conf *conf, struct r10bio *r10_bio)
 {
 	int i;
 
-	for (i = 0; i < conf->copies; i++) {
+	for (i = 0; i < conf->geo.raid_disks; i++) {
 		struct bio **bio = & r10_bio->devs[i].bio;
 		if (!BIO_SPECIAL(*bio))
 			bio_put(*bio);
@@ -327,7 +327,7 @@ static int find_bio_disk(struct r10conf *conf, struct r10bio *r10_bio,
 	int slot;
 	int repl = 0;
 
-	for (slot = 0; slot < conf->copies; slot++) {
+	for (slot = 0; slot < conf->geo.raid_disks; slot++) {
 		if (r10_bio->devs[slot].bio == bio)
 			break;
 		if (r10_bio->devs[slot].repl_bio == bio) {
@@ -336,7 +336,6 @@ static int find_bio_disk(struct r10conf *conf, struct r10bio *r10_bio,
 		}
 	}
 
-	BUG_ON(slot == conf->copies);
 	update_head_pos(slot, r10_bio);
 
 	if (slotp)
@@ -1492,7 +1491,8 @@ static void __make_request(struct mddev *mddev, struct bio *bio, int sectors)
 	r10_bio->sector = bio->bi_iter.bi_sector;
 	r10_bio->state = 0;
 	r10_bio->read_slot = -1;
-	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * conf->copies);
+	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) *
+			conf->geo.raid_disks);
 
 	if (bio_data_dir(bio) == READ)
 		raid10_read_request(mddev, bio, r10_bio);
-- 
2.7.5


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V2 3/5] md/raid10: pull the code that wait for blocked dev into one function
  2021-02-04  7:50 [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request Xiao Ni
  2021-02-04  7:50 ` [PATCH V2 1/5] md: add md_submit_discard_bio() for submitting discard bio Xiao Ni
  2021-02-04  7:50 ` [PATCH V2 2/5] md/raid10: extend r10bio devs to raid disks Xiao Ni
@ 2021-02-04  7:50 ` Xiao Ni
  2021-02-04  7:50 ` [PATCH V2 4/5] md/raid10: improve raid10 discard request Xiao Ni
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Xiao Ni @ 2021-02-04  7:50 UTC (permalink / raw)
  To: songliubraving
  Cc: linux-raid, matthew.ruffell, colyli, guoqing.jiang, ncroxon

The following patch will reuse this logic, so pull the duplicated code into
one function.

Signed-off-by: Xiao Ni <xni@redhat.com>
---
 drivers/md/raid10.c | 120 ++++++++++++++++++++++++++++++----------------------
 1 file changed, 69 insertions(+), 51 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 0a734de..ea97ce3 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1273,12 +1273,77 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
 	}
 }
 
+static void wait_blocked_dev(struct mddev *mddev, struct r10bio *r10_bio)
+{
+	int i;
+	struct r10conf *conf = mddev->private;
+	struct md_rdev *blocked_rdev;
+
+retry_wait:
+	blocked_rdev = NULL;
+	rcu_read_lock();
+	for (i = 0; i < conf->copies; i++) {
+		struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
+		struct md_rdev *rrdev = rcu_dereference(
+			conf->mirrors[i].replacement);
+		if (rdev == rrdev)
+			rrdev = NULL;
+		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
+			atomic_inc(&rdev->nr_pending);
+			blocked_rdev = rdev;
+			break;
+		}
+		if (rrdev && unlikely(test_bit(Blocked, &rrdev->flags))) {
+			atomic_inc(&rrdev->nr_pending);
+			blocked_rdev = rrdev;
+			break;
+		}
+
+		if (rdev && test_bit(WriteErrorSeen, &rdev->flags)) {
+			sector_t first_bad;
+			sector_t dev_sector = r10_bio->devs[i].addr;
+			int bad_sectors;
+			int is_bad;
+
+			/*
+			 * Discard request doesn't care the write result
+			 * so it doesn't need to wait blocked disk here.
+			 */
+			if (!r10_bio->sectors)
+				continue;
+
+			is_bad = is_badblock(rdev, dev_sector, r10_bio->sectors,
+					     &first_bad, &bad_sectors);
+			if (is_bad < 0) {
+				/*
+				 * Mustn't write here until the bad block
+				 * is acknowledged
+				 */
+				atomic_inc(&rdev->nr_pending);
+				set_bit(BlockedBadBlocks, &rdev->flags);
+				blocked_rdev = rdev;
+				break;
+			}
+		}
+	}
+	rcu_read_unlock();
+
+	if (unlikely(blocked_rdev)) {
+		/* Have to wait for this device to get unblocked, then retry */
+		allow_barrier(conf);
+		raid10_log(conf->mddev, "%s wait rdev %d blocked",
+				__func__, blocked_rdev->raid_disk);
+		md_wait_for_blocked_rdev(blocked_rdev, mddev);
+		wait_barrier(conf);
+		goto retry_wait;
+	}
+}
+
 static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 				 struct r10bio *r10_bio)
 {
 	struct r10conf *conf = mddev->private;
 	int i;
-	struct md_rdev *blocked_rdev;
 	sector_t sectors;
 	int max_sectors;
 
@@ -1336,8 +1401,9 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 
 	r10_bio->read_slot = -1; /* make sure repl_bio gets freed */
 	raid10_find_phys(conf, r10_bio);
-retry_write:
-	blocked_rdev = NULL;
+
+	wait_blocked_dev(mddev, r10_bio);
+
 	rcu_read_lock();
 	max_sectors = r10_bio->sectors;
 
@@ -1348,16 +1414,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 			conf->mirrors[d].replacement);
 		if (rdev == rrdev)
 			rrdev = NULL;
-		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
-			atomic_inc(&rdev->nr_pending);
-			blocked_rdev = rdev;
-			break;
-		}
-		if (rrdev && unlikely(test_bit(Blocked, &rrdev->flags))) {
-			atomic_inc(&rrdev->nr_pending);
-			blocked_rdev = rrdev;
-			break;
-		}
 		if (rdev && (test_bit(Faulty, &rdev->flags)))
 			rdev = NULL;
 		if (rrdev && (test_bit(Faulty, &rrdev->flags)))
@@ -1378,15 +1434,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 
 			is_bad = is_badblock(rdev, dev_sector, max_sectors,
 					     &first_bad, &bad_sectors);
-			if (is_bad < 0) {
-				/* Mustn't write here until the bad block
-				 * is acknowledged
-				 */
-				atomic_inc(&rdev->nr_pending);
-				set_bit(BlockedBadBlocks, &rdev->flags);
-				blocked_rdev = rdev;
-				break;
-			}
 			if (is_bad && first_bad <= dev_sector) {
 				/* Cannot write here at all */
 				bad_sectors -= (dev_sector - first_bad);
@@ -1422,35 +1469,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 	}
 	rcu_read_unlock();
 
-	if (unlikely(blocked_rdev)) {
-		/* Have to wait for this device to get unblocked, then retry */
-		int j;
-		int d;
-
-		for (j = 0; j < i; j++) {
-			if (r10_bio->devs[j].bio) {
-				d = r10_bio->devs[j].devnum;
-				rdev_dec_pending(conf->mirrors[d].rdev, mddev);
-			}
-			if (r10_bio->devs[j].repl_bio) {
-				struct md_rdev *rdev;
-				d = r10_bio->devs[j].devnum;
-				rdev = conf->mirrors[d].replacement;
-				if (!rdev) {
-					/* Race with remove_disk */
-					smp_mb();
-					rdev = conf->mirrors[d].rdev;
-				}
-				rdev_dec_pending(rdev, mddev);
-			}
-		}
-		allow_barrier(conf);
-		raid10_log(conf->mddev, "wait rdev %d blocked", blocked_rdev->raid_disk);
-		md_wait_for_blocked_rdev(blocked_rdev, mddev);
-		wait_barrier(conf);
-		goto retry_write;
-	}
-
 	if (max_sectors < r10_bio->sectors)
 		r10_bio->sectors = max_sectors;
 
-- 
2.7.5


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V2 4/5] md/raid10: improve raid10 discard request
  2021-02-04  7:50 [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request Xiao Ni
                   ` (2 preceding siblings ...)
  2021-02-04  7:50 ` [PATCH V2 3/5] md/raid10: pull the code that wait for blocked dev into one function Xiao Ni
@ 2021-02-04  7:50 ` Xiao Ni
  2021-02-04  7:50 ` [PATCH V2 5/5] md/raid10: improve discard request for far layout Xiao Ni
  2021-02-15  4:05 ` [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request Matthew Ruffell
  5 siblings, 0 replies; 15+ messages in thread
From: Xiao Ni @ 2021-02-04  7:50 UTC (permalink / raw)
  To: songliubraving
  Cc: linux-raid, matthew.ruffell, colyli, guoqing.jiang, ncroxon

Currently the discard request is split by chunk size, so it takes a long time
to finish mkfs on disks which support the discard function. This patch
improves the handling of raid10 discard requests. It uses a similar approach
to commit 29efc390b (md/md0: optimize raid0 discard handling).

But it is a little more complex than raid0, because raid10 has different
layouts. If raid10 uses the offset layout and the discard request is smaller
than the stripe size, there are holes when we submit the discard bio to the
underlying disks.

For example: five disks (disk1 - disk5)
D01 D02 D03 D04 D05
D05 D01 D02 D03 D04
D06 D07 D08 D09 D10
D10 D06 D07 D08 D09
The discard bio only wants to discard from D03 to D10. For disk3, there is
a hole between D03 and D08; for disk4, there is a hole between D04 and D09.
D03 is one chunk, and raid10_write_request can handle one chunk perfectly, so
the part that is not aligned with the stripe size is still handled by
raid10_write_request.

If a reshape is running when the discard bio arrives and the discard bio spans
the reshape position, raid10_write_request is responsible for handling the
discard bio.
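
As a rough user-space sketch of the fast-path policy described above (this is
not the kernel implementation; the real logic is raid10_handle_discard in the
diff below): requests that arrive while a reshape is running, or that are
shorter than two stripes, fall back to the existing chunk-by-chunk path,
otherwise the unaligned head and tail are split off for raid10_write_request
and the aligned middle is discarded per disk.

#include <stdio.h>

/*
 * Sketch only: report how a discard request would be carved up.
 * stripe_size is stripe_data_disks << chunk_shift in sectors, as in the
 * patch; nothing is actually submitted here.
 */
static void carve_discard(unsigned long long start, unsigned long long sectors,
			  unsigned long long stripe_size, int reshape_running)
{
	unsigned long long end = start + sectors;
	unsigned long long head = 0, tail = 0;

	if (reshape_running || sectors < stripe_size * 2) {
		printf("fall back: handled chunk by chunk via raid10_write_request\n");
		return;
	}
	if (start % stripe_size)			/* unaligned head */
		head = stripe_size - start % stripe_size;
	tail = end % stripe_size;			/* unaligned tail */

	printf("head %llu / tail %llu sectors -> split off, normal write path\n",
	       head, tail);
	printf("aligned middle %llu..%llu -> per-disk discard\n",
	       start + head, end - tail);
}

int main(void)
{
	/* example values: near-2 over 2 disks with 1024-sector chunks */
	carve_discard(70968, 191176, 1024, 0);
	return 0;
}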

I did a test with this patch set.
Without patch:
time mkfs.xfs /dev/md0
real    4m39.775s
user    0m0.000s
sys     0m0.298s

With patch:
time mkfs.xfs /dev/md0
real    0m0.105s
user    0m0.000s
sys     0m0.007s

nvme3n1           259:1    0   477G  0 disk
└─nvme3n1p1       259:10   0    50G  0 part
nvme4n1           259:2    0   477G  0 disk
└─nvme4n1p1       259:11   0    50G  0 part
nvme5n1           259:6    0   477G  0 disk
└─nvme5n1p1       259:12   0    50G  0 part
nvme2n1           259:9    0   477G  0 disk
└─nvme2n1p1       259:15   0    50G  0 part
nvme0n1           259:13   0   477G  0 disk
└─nvme0n1p1       259:14   0    50G  0 part

Reviewed-by: Coly Li <colyli@suse.de>
Reviewed-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Xiao Ni <xni@redhat.com>
---
 drivers/md/raid10.c | 263 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 262 insertions(+), 1 deletion(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index ea97ce3..73d1b250 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1518,6 +1518,263 @@ static void __make_request(struct mddev *mddev, struct bio *bio, int sectors)
 		raid10_write_request(mddev, bio, r10_bio);
 }
 
+static void raid10_end_discard_request(struct bio *bio)
+{
+	struct r10bio *r10_bio = bio->bi_private;
+	struct r10conf *conf = r10_bio->mddev->private;
+	struct md_rdev *rdev = NULL;
+	int dev;
+	int slot, repl;
+
+	/*
+	 * We don't care the return value of discard bio
+	 */
+	if (!test_bit(R10BIO_Uptodate, &r10_bio->state))
+		set_bit(R10BIO_Uptodate, &r10_bio->state);
+
+	dev = find_bio_disk(conf, r10_bio, bio, &slot, &repl);
+	if (repl)
+		rdev = conf->mirrors[dev].replacement;
+	if (!rdev) {
+		/*
+		 * raid10_remove_disk uses smp_mb to make sure rdev is set to
+		 * replacement before setting replacement to NULL. It can read
+		 * rdev first without barrier protect even replacment is NULL
+		 */
+		smp_rmb();
+		rdev = conf->mirrors[dev].rdev;
+	}
+
+	if (atomic_dec_and_test(&r10_bio->remaining)) {
+		md_write_end(r10_bio->mddev);
+		raid_end_bio_io(r10_bio);
+	}
+
+	rdev_dec_pending(rdev, conf->mddev);
+}
+
+/*
+ * There are some limitations to handle discard bio
+ * 1st, the discard size is bigger than stripe_size*2.
+ * 2st, if the discard bio spans reshape progress, we use the old way to
+ * handle discard bio
+ */
+static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
+{
+	struct r10conf *conf = mddev->private;
+	struct geom *geo = &conf->geo;
+	struct r10bio *r10_bio;
+	struct bio *split;
+	int disk;
+	sector_t chunk;
+	unsigned int stripe_size;
+	unsigned int stripe_data_disks;
+	sector_t split_size;
+	sector_t bio_start, bio_end;
+	sector_t first_stripe_index, last_stripe_index;
+	sector_t start_disk_offset;
+	unsigned int start_disk_index;
+	sector_t end_disk_offset;
+	unsigned int end_disk_index;
+	unsigned int remainder;
+
+	if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
+		return -EAGAIN;
+
+	wait_barrier(conf);
+
+	/*
+	 * Check reshape again to avoid reshape happens after checking
+	 * MD_RECOVERY_RESHAPE and before wait_barrier
+	 */
+	if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
+		goto out;
+
+	if (geo->near_copies)
+		stripe_data_disks = geo->raid_disks / geo->near_copies +
+					geo->raid_disks % geo->near_copies;
+	else
+		stripe_data_disks = geo->raid_disks;
+
+	stripe_size = stripe_data_disks << geo->chunk_shift;
+
+	bio_start = bio->bi_iter.bi_sector;
+	bio_end = bio_end_sector(bio);
+
+	/*
+	 * Maybe one discard bio is smaller than strip size or across one
+	 * stripe and discard region is larger than one stripe size. For far
+	 * offset layout, if the discard region is not aligned with stripe
+	 * size, there is hole when we submit discard bio to member disk.
+	 * For simplicity, we only handle discard bio which discard region
+	 * is bigger than stripe_size * 2
+	 */
+	if (bio_sectors(bio) < stripe_size*2)
+		goto out;
+
+	/*
+	 * Keep bio aligned with strip size.
+	 */
+	div_u64_rem(bio_start, stripe_size, &remainder);
+	if (remainder) {
+		split_size = stripe_size - remainder;
+		split = bio_split(bio, split_size, GFP_NOIO, &conf->bio_split);
+		bio_chain(split, bio);
+		allow_barrier(conf);
+		/* Resend the fist split part */
+		submit_bio_noacct(split);
+		wait_barrier(conf);
+	}
+	div_u64_rem(bio_end, stripe_size, &remainder);
+	if (remainder) {
+		split_size = bio_sectors(bio) - remainder;
+		split = bio_split(bio, split_size, GFP_NOIO, &conf->bio_split);
+		bio_chain(split, bio);
+		allow_barrier(conf);
+		/* Resend the second split part */
+		submit_bio_noacct(bio);
+		bio = split;
+		wait_barrier(conf);
+	}
+
+	r10_bio = mempool_alloc(&conf->r10bio_pool, GFP_NOIO);
+	r10_bio->mddev = mddev;
+	r10_bio->state = 0;
+	r10_bio->sectors = 0;
+	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * geo->raid_disks);
+
+	wait_blocked_dev(mddev, r10_bio);
+
+	r10_bio->master_bio = bio;
+
+	bio_start = bio->bi_iter.bi_sector;
+	bio_end = bio_end_sector(bio);
+
+	/*
+	 * Raid10 uses chunk as the unit to store data. It's similar like raid0.
+	 * One stripe contains the chunks from all member disk (one chunk from
+	 * one disk at the same HBA address). For layout detail, see 'man md 4'
+	 */
+	chunk = bio_start >> geo->chunk_shift;
+	chunk *= geo->near_copies;
+	first_stripe_index = chunk;
+	start_disk_index = sector_div(first_stripe_index, geo->raid_disks);
+	if (geo->far_offset)
+		first_stripe_index *= geo->far_copies;
+	start_disk_offset = (bio_start & geo->chunk_mask) +
+				(first_stripe_index << geo->chunk_shift);
+
+	chunk = bio_end >> geo->chunk_shift;
+	chunk *= geo->near_copies;
+	last_stripe_index = chunk;
+	end_disk_index = sector_div(last_stripe_index, geo->raid_disks);
+	if (geo->far_offset)
+		last_stripe_index *= geo->far_copies;
+	end_disk_offset = (bio_end & geo->chunk_mask) +
+				(last_stripe_index << geo->chunk_shift);
+
+	rcu_read_lock();
+	for (disk = 0; disk < geo->raid_disks; disk++) {
+		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
+		struct md_rdev *rrdev = rcu_dereference(
+			conf->mirrors[disk].replacement);
+
+		r10_bio->devs[disk].bio = NULL;
+		r10_bio->devs[disk].repl_bio = NULL;
+
+		if (rdev && (test_bit(Faulty, &rdev->flags)))
+			rdev = NULL;
+		if (rrdev && (test_bit(Faulty, &rrdev->flags)))
+			rrdev = NULL;
+		if (!rdev && !rrdev)
+			continue;
+
+		if (rdev) {
+			r10_bio->devs[disk].bio = bio;
+			atomic_inc(&rdev->nr_pending);
+		}
+		if (rrdev) {
+			r10_bio->devs[disk].repl_bio = bio;
+			atomic_inc(&rrdev->nr_pending);
+		}
+	}
+	rcu_read_unlock();
+
+	atomic_set(&r10_bio->remaining, 1);
+	for (disk = 0; disk < geo->raid_disks; disk++) {
+		sector_t dev_start, dev_end;
+		struct bio *mbio, *rbio = NULL;
+		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
+		struct md_rdev *rrdev = rcu_dereference(
+			conf->mirrors[disk].replacement);
+
+		/*
+		 * Now start to calculate the start and end address for each disk.
+		 * The space between dev_start and dev_end is the discard region.
+		 *
+		 * For dev_start, it needs to consider three conditions:
+		 * 1st, the disk is before start_disk, you can imagine the disk in
+		 * the next stripe. So the dev_start is the start address of next
+		 * stripe.
+		 * 2st, the disk is after start_disk, it means the disk is at the
+		 * same stripe of first disk
+		 * 3st, the first disk itself, we can use start_disk_offset directly
+		 */
+		if (disk < start_disk_index)
+			dev_start = (first_stripe_index + 1) * mddev->chunk_sectors;
+		else if (disk > start_disk_index)
+			dev_start = first_stripe_index * mddev->chunk_sectors;
+		else
+			dev_start = start_disk_offset;
+
+		if (disk < end_disk_index)
+			dev_end = (last_stripe_index + 1) * mddev->chunk_sectors;
+		else if (disk > end_disk_index)
+			dev_end = last_stripe_index * mddev->chunk_sectors;
+		else
+			dev_end = end_disk_offset;
+
+		/*
+		 * It only handles discard bio which size is >= stripe size, so
+		 * dev_end > dev_start all the time
+		 */
+		if (r10_bio->devs[disk].bio) {
+			mbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set);
+			mbio->bi_end_io = raid10_end_discard_request;
+			mbio->bi_private = r10_bio;
+			r10_bio->devs[disk].bio = mbio;
+			r10_bio->devs[disk].devnum = disk;
+			atomic_inc(&r10_bio->remaining);
+			md_submit_discard_bio(mddev, rdev, mbio,
+					dev_start + choose_data_offset(r10_bio, rdev),
+					dev_end - dev_start);
+			bio_endio(mbio);
+		}
+		if (r10_bio->devs[disk].repl_bio) {
+			rbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set);
+			rbio->bi_end_io = raid10_end_discard_request;
+			rbio->bi_private = r10_bio;
+			r10_bio->devs[disk].repl_bio = rbio;
+			r10_bio->devs[disk].devnum = disk;
+			atomic_inc(&r10_bio->remaining);
+			md_submit_discard_bio(mddev, rrdev, rbio,
+					dev_start + choose_data_offset(r10_bio, rrdev),
+					dev_end - dev_start);
+			bio_endio(rbio);
+		}
+	}
+
+	if (atomic_dec_and_test(&r10_bio->remaining)) {
+		md_write_end(r10_bio->mddev);
+		raid_end_bio_io(r10_bio);
+	}
+
+	return 0;
+out:
+	allow_barrier(conf);
+	return -EAGAIN;
+}
+
 static bool raid10_make_request(struct mddev *mddev, struct bio *bio)
 {
 	struct r10conf *conf = mddev->private;
@@ -1532,6 +1789,10 @@ static bool raid10_make_request(struct mddev *mddev, struct bio *bio)
 	if (!md_write_start(mddev, bio))
 		return false;
 
+	if (unlikely(bio_op(bio) == REQ_OP_DISCARD))
+		if (!raid10_handle_discard(mddev, bio))
+			return true;
+
 	/*
 	 * If this request crosses a chunk boundary, we need to split
 	 * it.
@@ -3771,7 +4032,7 @@ static int raid10_run(struct mddev *mddev)
 
 	if (mddev->queue) {
 		blk_queue_max_discard_sectors(mddev->queue,
-					      mddev->chunk_sectors);
+					      UINT_MAX);
 		blk_queue_max_write_same_sectors(mddev->queue, 0);
 		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
 		blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
-- 
2.7.5


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V2 5/5] md/raid10: improve discard request for far layout
  2021-02-04  7:50 [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request Xiao Ni
                   ` (3 preceding siblings ...)
  2021-02-04  7:50 ` [PATCH V2 4/5] md/raid10: improve raid10 discard request Xiao Ni
@ 2021-02-04  7:50 ` Xiao Ni
  2021-02-15  4:05 ` [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request Matthew Ruffell
  5 siblings, 0 replies; 15+ messages in thread
From: Xiao Ni @ 2021-02-04  7:50 UTC (permalink / raw)
  To: songliubraving
  Cc: linux-raid, matthew.ruffell, colyli, guoqing.jiang, ncroxon

For the far layout, the discard region is not contiguous on the disks, so it
needs far_copies r10bios to cover all regions, and it needs a way to know
whether all of those r10bios have finished. Similar to raid10_sync_request,
only the first r10bio's master_bio records the discard bio; the master_bio of
the other r10bios records the first r10bio. The first r10bio can only finish
after all the other r10bios have finished, and then the discard bio is
returned.
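
A simplified, synchronous user-space model of this completion chaining may
help; the structure and names below are illustrative only, not the kernel
r10bio/bio types, and the geometry used in main() is an assumption:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative model only: not the kernel r10bio/bio structures. */
struct model_r10bio {
	int remaining;			/* outstanding per-disk discards + refs */
	bool is_first;			/* this one carries the discard bio */
	struct model_r10bio *first;	/* later copies point back at the first */
};

static void end_discard(struct model_r10bio *r10bio)
{
	while (--r10bio->remaining == 0) {
		if (!r10bio->is_first) {
			struct model_r10bio *first = r10bio->first;

			free(r10bio);
			r10bio = first;	/* drop the ref the first held for us */
		} else {
			printf("all copies finished: complete the discard bio\n");
			free(r10bio);
			break;
		}
	}
}

int main(void)
{
	int far_copies = 2, disks = 4;	/* example geometry, assumed */
	struct model_r10bio *first = NULL;

	for (int copy = 0; copy < far_copies; copy++) {
		struct model_r10bio *r10bio = calloc(1, sizeof(*r10bio));

		r10bio->remaining = 1;		/* submitter's reference */
		if (!first) {
			r10bio->is_first = true;
			first = r10bio;
		} else {
			r10bio->first = first;
		}
		for (int d = 0; d < disks; d++) {
			r10bio->remaining++;	/* one discard per member disk */
			end_discard(r10bio);	/* pretend it completed at once */
		}
		if (copy + 1 < far_copies)
			first->remaining++;	/* first must outlive the next copy */
		end_discard(r10bio);		/* drop the submitter's reference */
	}
	return 0;
}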

Signed-off-by: Xiao Ni <xni@redhat.com>
---
 drivers/md/raid10.c | 79 ++++++++++++++++++++++++++++++++++++++++-------------
 drivers/md/raid10.h |  1 +
 2 files changed, 61 insertions(+), 19 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 73d1b250..f78212d 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1518,6 +1518,28 @@ static void __make_request(struct mddev *mddev, struct bio *bio, int sectors)
 		raid10_write_request(mddev, bio, r10_bio);
 }
 
+static void raid_end_discard_bio(struct r10bio *r10bio)
+{
+	struct r10conf *conf = r10bio->mddev->private;
+	struct r10bio *first_r10bio;
+
+	while (atomic_dec_and_test(&r10bio->remaining)) {
+
+		allow_barrier(conf);
+
+		if (!test_bit(R10BIO_Discard, &r10bio->state)) {
+			first_r10bio = (struct r10bio *)r10bio->master_bio;
+			free_r10bio(r10bio);
+			r10bio = first_r10bio;
+		} else {
+			md_write_end(r10bio->mddev);
+			bio_endio(r10bio->master_bio);
+			free_r10bio(r10bio);
+			break;
+		}
+	}
+}
+
 static void raid10_end_discard_request(struct bio *bio)
 {
 	struct r10bio *r10_bio = bio->bi_private;
@@ -1545,11 +1567,7 @@ static void raid10_end_discard_request(struct bio *bio)
 		rdev = conf->mirrors[dev].rdev;
 	}
 
-	if (atomic_dec_and_test(&r10_bio->remaining)) {
-		md_write_end(r10_bio->mddev);
-		raid_end_bio_io(r10_bio);
-	}
-
+	raid_end_discard_bio(r10_bio);
 	rdev_dec_pending(rdev, conf->mddev);
 }
 
@@ -1563,7 +1581,9 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 {
 	struct r10conf *conf = mddev->private;
 	struct geom *geo = &conf->geo;
-	struct r10bio *r10_bio;
+	int far_copies = geo->far_copies;
+	bool first_copy = true;
+	struct r10bio *r10_bio, *first_r10bio;
 	struct bio *split;
 	int disk;
 	sector_t chunk;
@@ -1637,16 +1657,6 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 		wait_barrier(conf);
 	}
 
-	r10_bio = mempool_alloc(&conf->r10bio_pool, GFP_NOIO);
-	r10_bio->mddev = mddev;
-	r10_bio->state = 0;
-	r10_bio->sectors = 0;
-	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * geo->raid_disks);
-
-	wait_blocked_dev(mddev, r10_bio);
-
-	r10_bio->master_bio = bio;
-
 	bio_start = bio->bi_iter.bi_sector;
 	bio_end = bio_end_sector(bio);
 
@@ -1673,6 +1683,29 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 	end_disk_offset = (bio_end & geo->chunk_mask) +
 				(last_stripe_index << geo->chunk_shift);
 
+retry_discard:
+	r10_bio = mempool_alloc(&conf->r10bio_pool, GFP_NOIO);
+	r10_bio->mddev = mddev;
+	r10_bio->state = 0;
+	r10_bio->sectors = 0;
+	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * geo->raid_disks);
+	wait_blocked_dev(mddev, r10_bio);
+
+	/*
+	 * For far layout it needs more than one r10bio to cover all regions.
+	 * Inspired by raid10_sync_request, we can use the first r10bio->master_bio
+	 * to record the discard bio. Other r10bio->master_bio record the first
+	 * r10bio. The first r10bio only release after all other r10bios finish.
+	 * The discard bio returns only first r10bio finishes
+	 */
+	if (first_copy) {
+		r10_bio->master_bio = bio;
+		set_bit(R10BIO_Discard, &r10_bio->state);
+		first_copy = false;
+		first_r10bio = r10_bio;
+	} else
+		r10_bio->master_bio = (struct bio *)first_r10bio;
+
 	rcu_read_lock();
 	for (disk = 0; disk < geo->raid_disks; disk++) {
 		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
@@ -1764,11 +1797,19 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 		}
 	}
 
-	if (atomic_dec_and_test(&r10_bio->remaining)) {
-		md_write_end(r10_bio->mddev);
-		raid_end_bio_io(r10_bio);
+	if (!geo->far_offset && --far_copies) {
+		first_stripe_index += geo->stride >> geo->chunk_shift;
+		start_disk_offset += geo->stride;
+		last_stripe_index += geo->stride >> geo->chunk_shift;
+		end_disk_offset += geo->stride;
+		atomic_inc(&first_r10bio->remaining);
+		raid_end_discard_bio(r10_bio);
+		wait_barrier(conf);
+		goto retry_discard;
 	}
 
+	raid_end_discard_bio(r10_bio);
+
 	return 0;
 out:
 	allow_barrier(conf);
diff --git a/drivers/md/raid10.h b/drivers/md/raid10.h
index 79cd2b7..1461fd5 100644
--- a/drivers/md/raid10.h
+++ b/drivers/md/raid10.h
@@ -179,5 +179,6 @@ enum r10bio_state {
 	R10BIO_Previous,
 /* failfast devices did receive failfast requests. */
 	R10BIO_FailFast,
+	R10BIO_Discard,
 };
 #endif
-- 
2.7.5


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request
  2021-02-04  7:50 [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request Xiao Ni
                   ` (4 preceding siblings ...)
  2021-02-04  7:50 ` [PATCH V2 5/5] md/raid10: improve discard request for far layout Xiao Ni
@ 2021-02-15  4:05 ` Matthew Ruffell
  2021-02-20  8:12   ` Xiao Ni
  5 siblings, 1 reply; 15+ messages in thread
From: Matthew Ruffell @ 2021-02-15  4:05 UTC (permalink / raw)
  To: Xiao Ni, songliubraving; +Cc: linux-raid, colyli, guoqing.jiang, ncroxon

Hi Xiao,

Thanks for posting the patchset. I have been testing them over the past week,
and they are looking good.

I backported [0] the patchset to the Ubuntu 4.15, 5.4 and 5.8 kernels, and I have
been testing them on public clouds.

[0] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1896578/comments/13

For performance, formatting a Raid10 array on NVMe disks drops from 8.5 minutes 
to about 6 seconds [1], on AWS i3.8xlarge with 4x 1.7TB disks, due to the 
speedup in block discard.

[1] https://paste.ubuntu.com/p/NNGqP3xdsc/

I have also tested the data corruption reproducer from my original problem 
report [2], and I have found that throughout each of the steps of formatting the 
array, doing a consistency check, writing data, doing a consistency check, 
issuing a fstrim, doing a consistency check, the /sys/block/md0/md/mismatch_cnt 
was always 0, and all deep fsck checks came back clean for individual disks [3].

[2] https://www.spinics.net/lists/kernel/msg3765302.html
[3] https://paste.ubuntu.com/p/5DK57TzdFH/

So I think your patches do solve the data corruption problem. Great job.

To try and get some more eyes on the patches, I have provided my test kernels to
5 other users who are hitting the Raid10 block discard performance problem, and
I have asked them to test on spare test servers, and to provide feedback on
performance and data safety.

I will let you know their feedback as it comes in.

As for getting this merged, I actually agree with Song, the 5.12 merge window
is happening right now, and it is a bit too soon for large changes like this. 
I think we should wait for the 5.13 merge window. That way we can do some more 
testing, get feedback from some users, and make sure we don't cause any more 
data corruption regressions.

I will write back soon with some user feedback and more test results.

Thanks,
Matthew

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request
  2021-02-15  4:05 ` [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request Matthew Ruffell
@ 2021-02-20  8:12   ` Xiao Ni
  2021-02-24  8:41     ` Song Liu
  0 siblings, 1 reply; 15+ messages in thread
From: Xiao Ni @ 2021-02-20  8:12 UTC (permalink / raw)
  To: Matthew Ruffell, songliubraving
  Cc: linux-raid, colyli, guoqing.jiang, ncroxon

Hi Matthew

Thanks very much for those tests. And as you said, it's better to wait for
more test results.
By the way, do you know the date of the 5.13 merge window?

Regards
Xiao

On 02/15/2021 12:05 PM, Matthew Ruffell wrote:
> Hi Xiao,
>
> Thanks for posting the patchset. I have been testing them over the past week,
> and they are looking good.
>
> I backported [0] the patchset to the Ubuntu 4.15, 5.4 and 5.8 kernels, and I have
> been testing them on public clouds.
>
> [0] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1896578/comments/13
>
> For performance, formatting a Raid10 array on NVMe disks drops from 8.5 minutes
> to about 6 seconds [1], on AWS i3.8xlarge with 4x 1.7TB disks, due to the
> speedup in block discard.
>
> [1] https://paste.ubuntu.com/p/NNGqP3xdsc/
>
> I have also tested the data corruption reproducer from my original problem
> report [2], and I have found that throughout each of the steps of formatting the
> array, doing a consistency check, writing data, doing a consistency check,
> issuing a fstrim, doing a consistency check, the /sys/block/md0/md/mismatch_cnt
> was always 0, and all deep fsck checks came back clean for individual disks [3].
>
> [2] https://www.spinics.net/lists/kernel/msg3765302.html
> [3] https://paste.ubuntu.com/p/5DK57TzdFH/
>
> So I think your patches do solve the data corruption problem. Great job.
>
> To try and get some more eyes on the patches, I have provided my test kernels to
> 5 other users who are hitting the Raid10 block discard performance problem, and
> I have asked them to test on spare test servers, and to provide feedback on
> performance and data safety.
>
> I will let you know their feedback as it comes in.
>
> As for getting this merged, I actually agree with Song, the 5.12 merge window
> is happening right now, and it is a bit too soon for large changes like this.
> I think we should wait for the 5.13 merge window. That way we can do some more
> testing, get feedback from some users, and make sure we don't cause any more
> data corruption regressions.
>
> I will write back soon with some user feedback and more test results.
>
> Thanks,
> Matthew
>


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request
  2021-02-20  8:12   ` Xiao Ni
@ 2021-02-24  8:41     ` Song Liu
  0 siblings, 0 replies; 15+ messages in thread
From: Song Liu @ 2021-02-24  8:41 UTC (permalink / raw)
  To: Xiao Ni
  Cc: Matthew Ruffell, Song Liu, linux-raid, Coly Li, Guoqing Jiang,
	Nigel Croxon

On Sat, Feb 20, 2021 at 12:21 AM Xiao Ni <xni@redhat.com> wrote:
>
> Hi Matthew
>
> Thanks very much for those test. And as you said, it's better to wait
> more test results.
> By the way, do you know the date of 5.13 merge window?

The 5.13 merge window will be in April (about 2 months from now).

I applied the set to md-next.

Thanks,
Song

>
> Regards
> Xiao
>
> On 02/15/2021 12:05 PM, Matthew Ruffell wrote:
> > Hi Xiao,
> >
> > Thanks for posting the patchset. I have been testing them over the past week,
> > and they are looking good.
> >
> > I backported [0] the patchset to the Ubuntu 4.15, 5.4 and 5.8 kernels, and I have
> > been testing them on public clouds.
> >
> > [0] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1896578/comments/13
> >
> > For performance, formatting a Raid10 array on NVMe disks drops from 8.5 minutes
> > to about 6 seconds [1], on AWS i3.8xlarge with 4x 1.7TB disks, due to the
> > speedup in block discard.
> >
> > [1] https://paste.ubuntu.com/p/NNGqP3xdsc/
> >
> > I have also tested the data corruption reproducer from my original problem
> > report [2], and I have found that throughout each of the steps of formatting the
> > array, doing a consistency check, writing data, doing a consistency check,
> > issuing a fstrim, doing a consistency check, the /sys/block/md0/md/mismatch_cnt
> > was always 0, and all deep fsck checks came back clean for individual disks [3].
> >
> > [2] https://www.spinics.net/lists/kernel/msg3765302.html
> > [3] https://paste.ubuntu.com/p/5DK57TzdFH/
> >
> > So I think your patches do solve the data corruption problem. Great job.
> >
> > To try and get some more eyes on the patches, I have provided my test kernels to
> > 5 other users who are hitting the Raid10 block discard performance problem, and
> > I have asked them to test on spare test servers, and to provide feedback on
> > performance and data safety.
> >
> > I will let you know their feedback as it comes in.
> >
> > As for getting this merged, I actually agree with Song, the 5.12 merge window
> > is happening right now, and it is a bit too soon for large changes like this.
> > I think we should wait for the 5.13 merge window. That way we can do some more
> > testing, get feedback from some users, and make sure we don't cause any more
> > data corruption regressions.
> >
> > I will write back soon with some user feedback and more test results.
> >
> > Thanks,
> > Matthew
> >
>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request
@ 2021-03-11  3:55 Adrian Huang12
  0 siblings, 0 replies; 15+ messages in thread
From: Adrian Huang12 @ 2021-03-11  3:55 UTC (permalink / raw)
  To: Xiao Ni, songliubraving
  Cc: linux-raid, matthew.ruffell, colyli, guoqing.jiang, ncroxon, hch

> -----Original Message-----
> From: Xiao Ni <xni@redhat.com>
> Sent: Thursday, February 4, 2021 1:57 PM
> To: songliubraving@fb.com
> Cc: linux-raid@vger.kernel.org; matthew.ruffell@canonical.com;
> colyli@suse.de; guoqing.jiang@cloud.ionos.com; ncroxon@redhat.com;
> hch@infradead.org
> Subject: [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request
> 
> Xiao Ni (5):
>   md: add md_submit_discard_bio() for submitting discard bio
>   md/raid10: extend r10bio devs to raid disks
>   md/raid10: pull the code that wait for blocked dev into one function
>   md/raid10: improve raid10 discard request
>   md/raid10: improve discard request for far layout

Hi Xiao Ni,

Thanks for this series. I also reproduced this issue when creating a RAID10 disk via
Intel VROC.

The xfs formatting process did not finish on 5.4.0-66 or 5.12.0-rc2 (I waited
for one hour), and there were lots of IO timeouts in dmesg.

With this series (on top of 5.12.0-rc2), the xfs formatting process took only
1 second, and I did not see any IO timeouts in dmesg.

The test details are shown at [0].

So, feel free to add my tested-by. 

[0] https://gist.githubusercontent.com/AdrianHuang/56daafe1b4dbd8b5744d02c5a473e5cd/raw/82f33862698be2567af48b7662f08ccd8e8d27fd/raid10-issue-test-detail.log

-- Adrian

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request
  2021-02-04  8:39     ` Xiao Ni
@ 2021-02-04 17:29       ` Song Liu
  0 siblings, 0 replies; 15+ messages in thread
From: Song Liu @ 2021-02-04 17:29 UTC (permalink / raw)
  To: Xiao Ni
  Cc: Song Liu, linux-raid, Matthew Ruffell, Coly Li, Guoqing Jiang,
	Nigel Croxon, hch

On Thu, Feb 4, 2021 at 12:39 AM Xiao Ni <xni@redhat.com> wrote:
>
>
>
> On 02/04/2021 04:12 PM, Song Liu wrote:
> > On Wed, Feb 3, 2021 at 11:42 PM Xiao Ni <xni@redhat.com> wrote:
> >> Hi Song
> >>
> >> Please ignore the v2 version. There is a place that needs to be fixed.
> >> I'll re-send the v2 version.
> > What did you change in the new v2? Removing "extern" in the header?
> > For small changes like this, I can just update it while applying the patches.
> > If we do need resend (for bigger changes), it's better to call it v3.
>
> Yes, it only removes "extern" in patch1.
> >
> > I was testing the first v2 in the past hour or so, it looks good in the test.
> > I will take a closer look tomorrow. On the other hand, we are getting close
> > to the 5.12 merge window, so it is a little too late for bigger
> > changes like this.
> > Therefore, I would prefer to wait until 5.13. If you need it in 5.12 for some
> > reason, please let me know.
> Is md-next a branch that is used before merging? If so, could you merge the
> patches to md-next if your tests pass? There is a bug that needs to be fixed
> in RHEL, and we can backport the patches once you merge them to md-next.

Yes, I can apply them to md-next. But I rebase it from time to time, so the
commit hashes will change.

Thanks,
Song

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request
  2021-02-04  8:12   ` Song Liu
@ 2021-02-04  8:39     ` Xiao Ni
  2021-02-04 17:29       ` Song Liu
  0 siblings, 1 reply; 15+ messages in thread
From: Xiao Ni @ 2021-02-04  8:39 UTC (permalink / raw)
  To: Song Liu
  Cc: Song Liu, linux-raid, Matthew Ruffell, Coly Li, Guoqing Jiang,
	Nigel Croxon, hch



On 02/04/2021 04:12 PM, Song Liu wrote:
> On Wed, Feb 3, 2021 at 11:42 PM Xiao Ni <xni@redhat.com> wrote:
>> Hi Song
>>
>> Please ignore the v2 version. There is a place that needs to be fixed.
>> I'll re-send the v2 version.
> What did you change in the new v2? Removing "extern" in the header?
> For small changes like this, I can just update it while applying the patches.
> If we do need resend (for bigger changes), it's better to call it v3.

Yes, it only removes "extern" in patch1.
>
> I was testing the first v2 in the past hour or so, it looks good in the test.
> I will take a closer look tomorrow. On the other hand, we are getting close
> to the 5.12 merge window, so it is a little too late for bigger
> changes like this.
> Therefore, I would prefer to wait until 5.13. If you need it in 5.12 for some
> reason, please let me know.
Is md-next a branch that is used before merging? If so, could you merge the
patches to md-next if your tests pass? There is a bug that needs to be fixed
in RHEL, and we can backport the patches once you merge them to md-next.

Regards
Xiao


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request
  2021-02-04  7:38 ` Xiao Ni
@ 2021-02-04  8:12   ` Song Liu
  2021-02-04  8:39     ` Xiao Ni
  0 siblings, 1 reply; 15+ messages in thread
From: Song Liu @ 2021-02-04  8:12 UTC (permalink / raw)
  To: Xiao Ni
  Cc: Song Liu, linux-raid, Matthew Ruffell, Coly Li, Guoqing Jiang,
	Nigel Croxon, hch

On Wed, Feb 3, 2021 at 11:42 PM Xiao Ni <xni@redhat.com> wrote:
>
> Hi Song
>
> Please ignore the v2 version. There is a place that needs to be fixed.
> I'll re-send the v2 version.

What did you change in the new v2? Removing "extern" in the header?
For small changes like this, I can just update it while applying the patches.
If we do need resend (for bigger changes), it's better to call it v3.

I was testing the first v2 in the past hour or so, it looks good in the test.
I will take a closer look tomorrow. On the other hand, we are getting close
to the 5.12 merge window, so it is a little too late for bigger
changes like this.
Therefore, I would prefer to wait until 5.13. If you need it in 5.12 for some
reason, please let me know.

Thanks,
Song

>
> Regards
> Xiao
>
> On 02/04/2021 01:57 PM, Xiao Ni wrote:
> > Hi all
> >
> > Currently, running mkfs on a raid10 array built from SSD/NVMe disks takes a
> > long time. This patch set resolves that problem.
> >
> > An earlier version of this patch set was reverted because of a data
> > corruption problem. This version fixes that problem. The root cause of the
> > corruption was an incorrect calculation of the start address on the
> > near-copy disks.
> >
> > We now handle discard requests for raid10 in a similar way to raid0. Because
> > the discard region is very large, we can calculate the start/end address for
> > each disk and then submit the discard request to each disk. Unlike raid0,
> > however, raid10 has copies. For the near layout, if the discard request is
> > not aligned with the chunk size, we calculate a start_disk_offset.
> > Previously, start_disk_offset was only applied to the first disk, but it
> > should be applied to the near-copy disks as well.
> >
> > [  789.709501] discard bio start : 70968, size : 191176
> > [  789.709507] first stripe index 69, start disk index 0, start disk offset 70968
> > [  789.709509] last stripe index 256, end disk index 0, end disk offset 262144
> > [  789.709511] disk 0, dev start : 70968, dev end : 262144
> > [  789.709515] disk 1, dev start : 70656, dev end : 262144
> >
> > For example, this test case has 2 near copies. The start_disk_offset for the
> > first disk is 70968, and the same offset should be used for the second disk.
> > Instead, the reverted version used the start address of the chunk (70656),
> > so it discarded a larger region than requested. This version simply splits
> > off the unaligned part at stripe-size granularity.
> >
> > This version also fixes another problem: the calculation of stripe_size was
> > wrong in the reverted version.
> >
> > V2: Fix problems pointed out by Christoph Hellwig.
> >
> > Xiao Ni (5):
> >    md: add md_submit_discard_bio() for submitting discard bio
> >    md/raid10: extend r10bio devs to raid disks
> >    md/raid10: pull the code that wait for blocked dev into one function
> >    md/raid10: improve raid10 discard request
> >    md/raid10: improve discard request for far layout
> >
> >   drivers/md/md.c     |  20 +++
> >   drivers/md/md.h     |   2 +
> >   drivers/md/raid0.c  |  14 +-
> >   drivers/md/raid10.c | 434 +++++++++++++++++++++++++++++++++++++++++++++-------
> >   drivers/md/raid10.h |   1 +
> >   5 files changed, 402 insertions(+), 69 deletions(-)
> >
>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request
  2021-02-04  5:57 Xiao Ni
@ 2021-02-04  7:38 ` Xiao Ni
  2021-02-04  8:12   ` Song Liu
  0 siblings, 1 reply; 15+ messages in thread
From: Xiao Ni @ 2021-02-04  7:38 UTC (permalink / raw)
  To: songliubraving
  Cc: linux-raid, matthew.ruffell, colyli, guoqing.jiang, ncroxon, hch

Hi Song

Please ignore the v2 version. There is a place that needs to be fixed. I'll
re-send the v2 version.

Regards
Xiao

On 02/04/2021 01:57 PM, Xiao Ni wrote:
> Hi all
>
> Currently, running mkfs on a raid10 array built from SSD/NVMe disks takes a
> long time. This patch set resolves that problem.
>
> An earlier version of this patch set was reverted because of a data corruption
> problem. This version fixes that problem. The root cause of the corruption was
> an incorrect calculation of the start address on the near-copy disks.
>
> We now handle discard requests for raid10 in a similar way to raid0. Because
> the discard region is very large, we can calculate the start/end address for
> each disk and then submit the discard request to each disk. Unlike raid0,
> however, raid10 has copies. For the near layout, if the discard request is not
> aligned with the chunk size, we calculate a start_disk_offset. Previously,
> start_disk_offset was only applied to the first disk, but it should be applied
> to the near-copy disks as well.
>
> [  789.709501] discard bio start : 70968, size : 191176
> [  789.709507] first stripe index 69, start disk index 0, start disk offset 70968
> [  789.709509] last stripe index 256, end disk index 0, end disk offset 262144
> [  789.709511] disk 0, dev start : 70968, dev end : 262144
> [  789.709515] disk 1, dev start : 70656, dev end : 262144
>
> For example, this test case has 2 near copies. The start_disk_offset for the
> first disk is 70968, and the same offset should be used for the second disk.
> Instead, the reverted version used the start address of the chunk (70656), so
> it discarded a larger region than requested. This version simply splits off
> the unaligned part at stripe-size granularity.
>
> This version also fixes another problem: the calculation of stripe_size was
> wrong in the reverted version.
>
> V2: Fix problems pointed out by Christoph Hellwig.
>
> Xiao Ni (5):
>    md: add md_submit_discard_bio() for submitting discard bio
>    md/raid10: extend r10bio devs to raid disks
>    md/raid10: pull the code that wait for blocked dev into one function
>    md/raid10: improve raid10 discard request
>    md/raid10: improve discard request for far layout
>
>   drivers/md/md.c     |  20 +++
>   drivers/md/md.h     |   2 +
>   drivers/md/raid0.c  |  14 +-
>   drivers/md/raid10.c | 434 +++++++++++++++++++++++++++++++++++++++++++++-------
>   drivers/md/raid10.h |   1 +
>   5 files changed, 402 insertions(+), 69 deletions(-)
>


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request
@ 2021-02-04  5:57 Xiao Ni
  2021-02-04  7:38 ` Xiao Ni
  0 siblings, 1 reply; 15+ messages in thread
From: Xiao Ni @ 2021-02-04  5:57 UTC (permalink / raw)
  To: songliubraving
  Cc: linux-raid, matthew.ruffell, colyli, guoqing.jiang, ncroxon, hch

Hi all

Currently, running mkfs on a raid10 array built from SSD/NVMe disks takes a
long time. This patch set resolves that problem.

An earlier version of this patch set was reverted because of a data corruption
problem. This version fixes that problem. The root cause of the corruption was
an incorrect calculation of the start address on the near-copy disks.

We now handle discard requests for raid10 in a similar way to raid0. Because
the discard region is very large, we can calculate the start/end address for
each disk and then submit the discard request to each disk. Unlike raid0,
however, raid10 has copies. For the near layout, if the discard request is not
aligned with the chunk size, we calculate a start_disk_offset. Previously,
start_disk_offset was only applied to the first disk, but it should be applied
to the near-copy disks as well.

[  789.709501] discard bio start : 70968, size : 191176
[  789.709507] first stripe index 69, start disk index 0, start disk offset 70968
[  789.709509] last stripe index 256, end disk index 0, end disk offset 262144
[  789.709511] disk 0, dev start : 70968, dev end : 262144
[  789.709515] disk 1, dev start : 70656, dev end : 262144

For example, this test case has 2 near copies. The start_disk_offset for the
first disk is 70968, and the same offset should be used for the second disk.
Instead, the reverted version used the start address of the chunk (70656), so
it discarded a larger region than requested. This version simply splits off
the unaligned part at stripe-size granularity.

This version also fixes another problem: the calculation of stripe_size was
wrong in the reverted version.

V2: Fix problems pointed out by Christoph Hellwig.

Xiao Ni (5):
  md: add md_submit_discard_bio() for submitting discard bio
  md/raid10: extend r10bio devs to raid disks
  md/raid10: pull the code that wait for blocked dev into one function
  md/raid10: improve raid10 discard request
  md/raid10: improve discard request for far layout

 drivers/md/md.c     |  20 +++
 drivers/md/md.h     |   2 +
 drivers/md/raid0.c  |  14 +-
 drivers/md/raid10.c | 434 +++++++++++++++++++++++++++++++++++++++++++++-------
 drivers/md/raid10.h |   1 +
 5 files changed, 402 insertions(+), 69 deletions(-)

-- 
2.7.5


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2021-03-11  3:56 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-04  7:50 [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request Xiao Ni
2021-02-04  7:50 ` [PATCH V2 1/5] md: add md_submit_discard_bio() for submitting discard bio Xiao Ni
2021-02-04  7:50 ` [PATCH V2 2/5] md/raid10: extend r10bio devs to raid disks Xiao Ni
2021-02-04  7:50 ` [PATCH V2 3/5] md/raid10: pull the code that wait for blocked dev into one function Xiao Ni
2021-02-04  7:50 ` [PATCH V2 4/5] md/raid10: improve raid10 discard request Xiao Ni
2021-02-04  7:50 ` [PATCH V2 5/5] md/raid10: improve discard request for far layout Xiao Ni
2021-02-15  4:05 ` [PATCH V2 0/5] md/raid10: Improve handling raid10 discard request Matthew Ruffell
2021-02-20  8:12   ` Xiao Ni
2021-02-24  8:41     ` Song Liu
  -- strict thread matches above, loose matches on Subject: below --
2021-03-11  3:55 Adrian Huang12
2021-02-04  5:57 Xiao Ni
2021-02-04  7:38 ` Xiao Ni
2021-02-04  8:12   ` Song Liu
2021-02-04  8:39     ` Xiao Ni
2021-02-04 17:29       ` Song Liu
