* [PATCH 00/19] mdadm/clustermd_tests: update the testing part
@ 2018-02-02  6:10 Zhilong Liu
  2018-02-02  6:10 ` [PATCH 01/19] mdadm/test: improve filtering r10 from raid1 in raidtype Zhilong Liu
                   ` (18 more replies)
  0 siblings, 19 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

Hi Jes,

This patch set mainly focuses on improving the test framework and adding
some new test cases for cluster-md.

Improving:
1. Filter r10 (raid10) out of the raid1 raidtype, because the cluster-md
test cases use 'r10' as the short form of raid10 in their names (see the
sketch after this list).
2. Add disk metadata (mdadm -E) output to save_log() when a working array
is found.
3. Add do_clean() to do_test() so that each case only captures its own
test logs.
4. Add a note to the --zero-superblock section of the man page: care is
needed when calling --zero-superblock on clustered raids.
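
For (1), a minimal sketch of the filtering problem, run from the mdadm
source tree (the file names are cases added by this series):

  ls clustermd_tests | grep "[0-9][0-9]r1\|raid1" | grep -vi raid10
  # still lists 01r10_Grow_bitmap-switch and friends, because 'r10' does
  # not match the old 'raid10' exclusion; extending the exclusion fixes it:
  ls clustermd_tests | grep "[0-9][0-9]r1\|raid1" | grep -vi "r10\|raid10"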

Cluster-md new cases:
1. Cover switching the bitmap among 'clustered', 'none' and 'internal'
modes on clustered raid1/10 (a sketch of the core operation follows this
list).
2. Cover switching resync/recovery between nodes on clustered raid1/10.
3. Cover manage mode (add/add-spare/re-add) on clustered raid1/10.
4. Add grow mode (add) tests for raid1; growing is currently not supported
by clustered raid10.
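
For (1), the core operation those cases exercise looks roughly like this
(a sketch only; $md0/$dev0/$dev1 are the placeholders used by the tests):

  mdadm --grow $md0 --bitmap=none        # drop the clustered bitmap
  mdadm --grow $md0 --bitmap=internal    # plain internal bitmap
  mdadm --grow $md0 --bitmap=none
  mdadm --grow $md0 --bitmap=clustered   # back to a clustered bitmap
  mdadm -X $dev0 $dev1                   # check bitmap state on the members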


Thanks,
Zhilong

Zhilong Liu (19):
  mdadm/test: improve filtering r10 from raid1 in raidtype
  mdadm/test: add disk metadata infos in save_log
  mdadm/test: add do_clean to ensure each case only catch its own
    testlog
  mdadm/clustermd_tests: add nobitmap in check
  mdadm/clustermd_tests: delete meaningless commands in check
  manpage: add prompt in --zero-superblock against clustered raid
  clustermd_tests: add test case to test switching bitmap against
    cluster-raid1
  clustermd_tests: add test case to test switching bitmap against
    cluster-raid10
  clustermd_tests: add test case to test grow_add against cluster-raid1
  clustermd_tests: add test case to test manage_add against
    cluster-raid1
  clustermd_tests: add test case to test manage_add against
    cluster-raid10
  clustermd_tests: add test case to test manage_add-spare against
    cluster-raid1
  clustermd_tests: add test case to test manage_add-spare against
    cluster-raid10
  clustermd_tests: add test case to test manage_re-add against
    cluster-raid1
  clustermd_tests: add test case to test manage_re-add against
    cluster-raid10
  clustermd_tests: add test case to test switch-resync against
    cluster-raid1
  clustermd_tests: add test case to test switch-resync against
    cluster-raid10
  clustermd_tests: add test case to test switch-recovery against
    cluster-raid1
  clustermd_tests: add test case to test switch-recovery against
    cluster-raid10

 clustermd_tests/01r10_Grow_bitmap-switch | 51 ++++++++++++++++++++++++
 clustermd_tests/01r1_Grow_add            | 68 ++++++++++++++++++++++++++++++++
 clustermd_tests/01r1_Grow_bitmap-switch  | 51 ++++++++++++++++++++++++
 clustermd_tests/02r10_Manage_add         | 33 ++++++++++++++++
 clustermd_tests/02r10_Manage_add-spare   | 30 ++++++++++++++
 clustermd_tests/02r10_Manage_re-add      | 18 +++++++++
 clustermd_tests/02r1_Manage_add          | 33 ++++++++++++++++
 clustermd_tests/02r1_Manage_add-spare    | 30 ++++++++++++++
 clustermd_tests/02r1_Manage_re-add       | 18 +++++++++
 clustermd_tests/03r10_switch-recovery    | 21 ++++++++++
 clustermd_tests/03r10_switch-resync      | 18 +++++++++
 clustermd_tests/03r1_switch-recovery     | 21 ++++++++++
 clustermd_tests/03r1_switch-resync       | 18 +++++++++
 clustermd_tests/func.sh                  | 20 ++++++++--
 mdadm.8.in                               |  4 ++
 test                                     |  6 +--
 tests/func.sh                            | 10 ++++-
 17 files changed, 441 insertions(+), 9 deletions(-)
 create mode 100644 clustermd_tests/01r10_Grow_bitmap-switch
 create mode 100644 clustermd_tests/01r1_Grow_add
 create mode 100644 clustermd_tests/01r1_Grow_bitmap-switch
 create mode 100644 clustermd_tests/02r10_Manage_add
 create mode 100644 clustermd_tests/02r10_Manage_add-spare
 create mode 100644 clustermd_tests/02r10_Manage_re-add
 create mode 100644 clustermd_tests/02r1_Manage_add
 create mode 100644 clustermd_tests/02r1_Manage_add-spare
 create mode 100644 clustermd_tests/02r1_Manage_re-add
 create mode 100644 clustermd_tests/03r10_switch-recovery
 create mode 100644 clustermd_tests/03r10_switch-resync
 create mode 100644 clustermd_tests/03r1_switch-recovery
 create mode 100644 clustermd_tests/03r1_switch-resync

-- 
2.6.6



* [PATCH 01/19] mdadm/test: improve filtering r10 from raid1 in raidtype
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 02/19] mdadm/test: add disk metadata infos in save_log Zhilong Liu
                   ` (17 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 test | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/test b/test
index 4ddef38..024cb63 100755
--- a/test
+++ b/test
@@ -183,7 +183,7 @@ parse_args() {
 				TESTLIST=($(ls $testdir | grep "linear"))
 				;;
 			raid1 )
-				TESTLIST=($(ls $testdir | grep "[0-9][0-9]r1\|raid1" | grep -vi raid10))
+				TESTLIST=($(ls $testdir | grep "[0-9][0-9]r1\|raid1" | grep -vi "r10\|raid10"))
 				;;
 			raid456 )
 				TESTLIST=($(ls $testdir | grep "[0-9][0-9]r[4-6]\|raid[4-6]"))
-- 
2.6.6



* [PATCH 02/19] mdadm/test: add disk metadata infos in save_log
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
  2018-02-02  6:10 ` [PATCH 01/19] mdadm/test: improve filtering r10 from raid1 in raidtype Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 03/19] mdadm/test: add do_clean to ensure each case only catch its own testlog Zhilong Liu
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/func.sh | 2 ++
 tests/func.sh           | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/clustermd_tests/func.sh b/clustermd_tests/func.sh
index 2387424..8ac5921 100644
--- a/clustermd_tests/func.sh
+++ b/clustermd_tests/func.sh
@@ -178,6 +178,8 @@ save_log()
 			then
 				echo "##$ip: mdadm -X ${md_disks[@]}" >> $logdir/$logfile
 				ssh $ip "mdadm -X ${md_disks[@]}" >> $logdir/$logfile
+				echo "##$ip: mdadm -E ${md_disks[@]}" >> $logdir/$logfile
+				ssh $ip "mdadm -E ${md_disks[@]}" >> $logdir/$logfile
 			fi
 		else
 			echo "##$ip: no array assembled!" >> $logdir/$logfile
diff --git a/tests/func.sh b/tests/func.sh
index 40c6026..6bfdafe 100644
--- a/tests/func.sh
+++ b/tests/func.sh
@@ -55,6 +55,8 @@ save_log() {
 			then
 				echo "## $HOSTNAME: mdadm -X ${md_disks[@]}" >> $logdir/$logfile
 				$mdadm -X ${md_disks[@]} >> $logdir/$logfile
+				echo "## $HOSTNAME: mdadm -E ${md_disks[@]}" >> $logdir/$logfile
+				$mdadm -E ${md_disks[@]} >> $logdir/$logfile
 			fi
 		else
 			echo "## $HOSTNAME: no array assembled!" >> $logdir/$logfile
-- 
2.6.6



* [PATCH 03/19] mdadm/test: add do_clean to ensure each case only catch its own testlog
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
  2018-02-02  6:10 ` [PATCH 01/19] mdadm/test: improve filtering r10 from raid1 in raidtype Zhilong Liu
  2018-02-02  6:10 ` [PATCH 02/19] mdadm/test: add disk metadata infos in save_log Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 04/19] mdadm/clustermd_tests: add nobitmap in check Zhilong Liu
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/func.sh | 9 +++++++--
 test                    | 4 +---
 tests/func.sh           | 8 +++++++-
 3 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/clustermd_tests/func.sh b/clustermd_tests/func.sh
index 8ac5921..5c0b168 100644
--- a/clustermd_tests/func.sh
+++ b/clustermd_tests/func.sh
@@ -196,9 +196,8 @@ do_setup()
 	ulimit -c unlimited
 }
 
-cleanup()
+do_clean()
 {
-	check_ssh
 	for ip in $NODE1 $NODE2
 	do
 		ssh $ip "mdadm -Ssq; dmesg -c > /dev/null"
@@ -206,6 +205,12 @@ cleanup()
 	mdadm --zero ${devlist[@]} &> /dev/null
 }
 
+cleanup()
+{
+	check_ssh
+	do_clean
+}
+
 # check: $1/cluster_node $2/feature $3/optional
 check()
 {
diff --git a/test b/test
index 024cb63..111a2e7 100755
--- a/test
+++ b/test
@@ -82,11 +82,9 @@ do_test() {
 	if [ -f "$_script" ]
 	then
 		rm -f $targetdir/stderr
-		# stop all arrays, just incase some script left an array active.
-		$mdadm -Ssq 2> /dev/null
-		mdadm --zero $devlist 2> /dev/null
 		# this might have been reset: restore the default.
 		echo 2000 > /proc/sys/dev/raid/speed_limit_max
+		do_clean
 		# source script in a subshell, so it has access to our
 		# namespace, but cannot change it.
 		echo -ne "$_script... "
diff --git a/tests/func.sh b/tests/func.sh
index 6bfdafe..8cfee0c 100644
--- a/tests/func.sh
+++ b/tests/func.sh
@@ -88,6 +88,13 @@ cleanup() {
 	esac
 }
 
+do_clean()
+{
+	mdadm -Ss > /dev/null
+	mdadm --zero $devlist 2> /dev/null
+	dmesg -c > /dev/null
+}
+
 check_env() {
 	user=$(id -un)
 	[ "X$user" != "Xroot" ] && {
@@ -141,7 +148,6 @@ do_setup() {
 
 	check_env
 	[ -d $logdir ] || mkdir -p $logdir
-	dmesg -c > /dev/null
 
 	devlist=
 	if [ "$DEVTYPE" == "loop" ]
-- 
2.6.6



* [PATCH 04/19] mdadm/clustermd_tests: add nobitmap in check
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (2 preceding siblings ...)
  2018-02-02  6:10 ` [PATCH 03/19] mdadm/test: add do_clean to ensure each case only catch its own testlog Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 05/19] mdadm/clustermd_tests: delete meaningless commands " Zhilong Liu
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/func.sh | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/clustermd_tests/func.sh b/clustermd_tests/func.sh
index 5c0b168..329f610 100644
--- a/clustermd_tests/func.sh
+++ b/clustermd_tests/func.sh
@@ -284,6 +284,13 @@ check()
 					die "$ip: no '$2' found in /proc/mdstat."
 			done
 		;;
+		nobitmap )
+			for ip in ${NODES[@]}
+			do
+				ssh $ip "grep -sq 'bitmap' /proc/mdstat" &&
+					die "$ip: 'bitmap' found in /proc/mdstat."
+			done
+		;;
 		chunk )
 			for ip in ${NODES[@]}
 			do
-- 
2.6.6



* [PATCH 05/19] mdadm/clustermd_tests: delete meaningless commands in check
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (3 preceding siblings ...)
  2018-02-02  6:10 ` [PATCH 04/19] mdadm/clustermd_tests: add nobitmap in check Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 06/19] manpage: add prompt in --zero-superblock against clustered raid Zhilong Liu
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/func.sh | 2 --
 1 file changed, 2 deletions(-)

diff --git a/clustermd_tests/func.sh b/clustermd_tests/func.sh
index 329f610..c2be0e5 100644
--- a/clustermd_tests/func.sh
+++ b/clustermd_tests/func.sh
@@ -278,8 +278,6 @@ check()
 		bitmap )
 			for ip in ${NODES[@]}
 			do
-				echo $ip
-				ssh $ip cat /proc/mdstat
 				ssh $ip "grep -sq '$2' /proc/mdstat" ||
 					die "$ip: no '$2' found in /proc/mdstat."
 			done
-- 
2.6.6



* [PATCH 06/19] manpage: add prompt in --zero-superblock against clustered raid
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (4 preceding siblings ...)
  2018-02-02  6:10 ` [PATCH 05/19] mdadm/clustermd_tests: delete meaningless commands " Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 07/19] clustermd_tests: add test case to test switching bitmap against cluster-raid1 Zhilong Liu
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

A clustered raid can be damaged if --zero-superblock is called
incorrectly, so add a note to the --zero-superblock section of the
man page. For example: cluster node1 has assembled the cluster-md
array, but --zero-superblock is called on another cluster node.
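
A sketch of the sequence the note warns about (device names are only
illustrative):

  # node1:
  mdadm -A /dev/md0 /dev/sda /dev/sdb        # cluster-md array assembled here
  # node2, while node1 still has the array assembled:
  mdadm --zero-superblock /dev/sda /dev/sdb  # wipes the superblocks and
                                             # damages the running array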

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 mdadm.8.in | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mdadm.8.in b/mdadm.8.in
index f0fd1fc..e6befeb 100644
--- a/mdadm.8.in
+++ b/mdadm.8.in
@@ -1691,6 +1691,10 @@ overwritten with zeros.  With
 the block where the superblock would be is overwritten even if it
 doesn't appear to be valid.
 
+.B Note:
+Be careful when calling \-\-zero\-superblock on a clustered raid: make sure
+the array is not in use or assembled on another cluster node before running it.
+
 .TP
 .B \-\-kill\-subarray=
 If the device is a container and the argument to \-\-kill\-subarray
-- 
2.6.6



* [PATCH 07/19] clustermd_tests: add test case to test switching bitmap against cluster-raid1
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (5 preceding siblings ...)
  2018-02-02  6:10 ` [PATCH 06/19] manpage: add prompt in --zero-superblock against clustered raid Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 08/19] clustermd_tests: add test case to test switching bitmap against cluster-raid10 Zhilong Liu
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

01r1_Grow_bitmap-switch:
It tests switching the bitmap among three modes: clustered, none
and internal. This case covers clustered raid1.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/01r1_Grow_bitmap-switch | 51 +++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)
 create mode 100644 clustermd_tests/01r1_Grow_bitmap-switch

diff --git a/clustermd_tests/01r1_Grow_bitmap-switch b/clustermd_tests/01r1_Grow_bitmap-switch
new file mode 100644
index 0000000..3b363d9
--- /dev/null
+++ b/clustermd_tests/01r1_Grow_bitmap-switch
@@ -0,0 +1,51 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l1 -b clustered -n2 $dev0 $dev1 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check all nosync
+check all raid1
+check all bitmap
+check all state UU
+
+# switch 'clustered' bitmap to 'none', and then 'none' to 'internal'
+stop_md $NODE2 $md0
+mdadm --grow $md0 --bitmap=none
+[ $? -eq '0' ] ||
+	die "$NODE1: change bitmap 'clustered' to 'none' failed."
+mdadm -X $dev0 $dev1 &> /dev/null
+[ $? -eq '0' ] &&
+	die "$NODE1: bitmap still exists in member_disks."
+check all nobitmap
+mdadm --grow $md0 --bitmap=internal
+[ $? -eq '0' ] ||
+	die "$NODE1: change bitmap 'none' to 'internal' failed."
+sleep 2
+mdadm -X $dev0 $dev1 &> /dev/null
+[ $? -eq '0' ] ||
+	die "$NODE1: create 'internal' bitmap failed."
+check $NODE1 bitmap
+
+# switch 'internal' bitmap to 'none', and then 'none' to 'clustered'
+mdadm --grow $md0 --bitmap=none
+[ $? -eq '0' ] ||
+	die "$NODE1: change bitmap 'internal' to 'none' failed."
+mdadm -X $dev0 $dev1 &> /dev/null
+[ $? -eq '0' ] &&
+	die "$NODE1: bitmap still exists in member_disks."
+check $NODE1 nobitmap
+mdadm --grow $md0 --bitmap=clustered
+[ $? -eq '0' ] ||
+	die "$NODE1: change bitmap 'none' to 'clustered' failed."
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+sleep 2
+for ip in $NODES
+do
+	ssh $ip "mdadm -X $dev0 $dev1 | grep -q 'Cluster name'" ||
+		die "$ip: create 'clustered' bitmap failed."
+done
+check all bitmap
+check all state UU
+check all dmesg
+stop_md all $md0
+
+exit 0
-- 
2.6.6



* [PATCH 08/19] clustermd_tests: add test case to test switching bitmap against cluster-raid10
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (6 preceding siblings ...)
  2018-02-02  6:10 ` [PATCH 07/19] clustermd_tests: add test case to test switching bitmap against cluster-raid1 Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 09/19] clustermd_tests: add test case to test grow_add against cluster-raid1 Zhilong Liu
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

01r10_Grow_bitmap-switch:
It tests switching the bitmap among three modes: clustered, none
and internal. This case covers clustered raid10.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/01r10_Grow_bitmap-switch | 51 ++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)
 create mode 100644 clustermd_tests/01r10_Grow_bitmap-switch

diff --git a/clustermd_tests/01r10_Grow_bitmap-switch b/clustermd_tests/01r10_Grow_bitmap-switch
new file mode 100644
index 0000000..1794719
--- /dev/null
+++ b/clustermd_tests/01r10_Grow_bitmap-switch
@@ -0,0 +1,51 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l10 -b clustered --layout n2 -n2 $dev0 $dev1 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check all nosync
+check all raid10
+check all bitmap
+check all state UU
+
+# switch 'clustered' bitmap to 'none', and then 'none' to 'internal'
+stop_md $NODE2 $md0
+mdadm --grow $md0 --bitmap=none
+[ $? -eq '0' ] ||
+	die "$NODE1: change bitmap 'clustered' to 'none' failed."
+mdadm -X $dev0 $dev1 &> /dev/null
+[ $? -eq '0' ] &&
+	die "$NODE1: bitmap still exists in member_disks."
+check all nobitmap
+mdadm --grow $md0 --bitmap=internal
+[ $? -eq '0' ] ||
+	die "$NODE1: change bitmap 'none' to 'internal' failed."
+sleep 1
+mdadm -X $dev0 $dev1 &> /dev/null
+[ $? -eq '0' ] ||
+	die "$NODE1: create 'internal' bitmap failed."
+check $NODE1 bitmap
+
+# switch 'internal' bitmap to 'none', and then 'none' to 'clustered'
+mdadm --grow $md0 --bitmap=none
+[ $? -eq '0' ] ||
+	die "$NODE1: change bitmap 'internal' to 'none' failed."
+mdadm -X $dev0 $dev1 &> /dev/null
+[ $? -eq '0' ] &&
+	die "$NODE1: bitmap still exists in member_disks."
+check $NODE1 nobitmap
+mdadm --grow $md0 --bitmap=clustered
+[ $? -eq '0' ] ||
+	die "$NODE1: change bitmap 'none' to 'clustered' failed."
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+sleep 1
+for ip in $NODES
+do
+	ssh $ip "mdadm -X $dev0 $dev1 | grep -q 'Cluster name'" ||
+		die "$ip: create 'clustered' bitmap failed."
+done
+check all bitmap
+check all state UU
+check all dmesg
+stop_md all $md0
+
+exit 0
-- 
2.6.6



* [PATCH 09/19] clustermd_tests: add test case to test grow_add against cluster-raid1
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (7 preceding siblings ...)
  2018-02-02  6:10 ` [PATCH 08/19] clustermd_tests: add test case to test switching bitmap against cluster-raid10 Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 10/19] clustermd_tests: add test case to test manage_add " Zhilong Liu
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

01r1_Grow_add: It covers 3 ways of growing the array.
1. 2 active disks in the md array; grow and add a new disk to the array.
2. 2 active and 1 spare disk in the md array; grow and add a new disk
   to the array.
3. 2 active and 1 spare disk in the md array; grow the device number
   so that the spare disk becomes an active disk in the array.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/01r1_Grow_add | 68 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)
 create mode 100644 clustermd_tests/01r1_Grow_add

diff --git a/clustermd_tests/01r1_Grow_add b/clustermd_tests/01r1_Grow_add
new file mode 100644
index 0000000..5706114
--- /dev/null
+++ b/clustermd_tests/01r1_Grow_add
@@ -0,0 +1,68 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l1 -b clustered -n2 $dev0 $dev1 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check all nosync
+check all raid1
+check all bitmap
+check all state UU
+check all dmesg
+mdadm --grow $md0 --raid-devices=3 --add $dev2
+sleep 0.3
+grep recovery /proc/mdstat
+if [ $? -eq '0' ]
+then
+	check $NODE1 wait
+else
+	check $NODE2 recovery
+	check $NODE2 wait
+fi
+check all state UUU
+check all dmesg
+stop_md all $md0
+
+mdadm -CR $md0 -l1 -b clustered -n2 -x1 $dev0 $dev1 $dev2 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1 $dev2
+check all nosync
+check all raid1
+check all bitmap
+check all spares 1
+check all state UU
+check all dmesg
+mdadm --grow $md0 --raid-devices=3 --add $dev3
+sleep 0.3
+grep recovery /proc/mdstat
+if [ $? -eq '0' ]
+then
+	check $NODE1 wait
+else
+	check $NODE2 recovery
+	check $NODE2 wait
+fi
+check all state UUU
+check all dmesg
+stop_md all $md0
+
+mdadm -CR $md0 -l1 -b clustered -n2 -x1 $dev0 $dev1 $dev2 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1 $dev2
+check all nosync
+check all raid1
+check all bitmap
+check all spares 1
+check all state UU
+check all dmesg
+mdadm --grow $md0 --raid-devices=3
+sleep 0.3
+grep recovery /proc/mdstat
+if [ $? -eq '0' ]
+then
+	check $NODE1 wait
+else
+	check $NODE2 recovery
+	check $NODE2 wait
+fi
+check all state UUU
+check all dmesg
+stop_md all $md0
+
+exit 0
-- 
2.6.6



* [PATCH 10/19] clustermd_tests: add test case to test manage_add against cluster-raid1
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (8 preceding siblings ...)
  2018-02-02  6:10 ` [PATCH 09/19] clustermd_tests: add test case to test grow_add against cluster-raid1 Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 11/19] clustermd_tests: add test case to test manage_add against cluster-raid10 Zhilong Liu
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

02r1_Manage_add: it covers 2 scenarios for manage_add.
1. 2 active disks in the md array; set 1 disk 'fail' and 'remove' it
   from the array, then add 1 clean disk to the array.
2. 2 active disks in the array; add 1 new disk to the array directly,
   in which case 'add' is equivalent to 'add-spare'.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/02r1_Manage_add | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 clustermd_tests/02r1_Manage_add

diff --git a/clustermd_tests/02r1_Manage_add b/clustermd_tests/02r1_Manage_add
new file mode 100644
index 0000000..ab2751c
--- /dev/null
+++ b/clustermd_tests/02r1_Manage_add
@@ -0,0 +1,33 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l1 -b clustered -n2 $dev0 $dev1 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check all nosync
+check all raid1
+check all bitmap
+check all state UU
+check all dmesg
+mdadm --manage $md0 --fail $dev0 --remove $dev0
+mdadm --zero $dev2
+mdadm --manage $md0 --add $dev2
+sleep 0.3
+check $NODE1 recovery
+check $NODE1 wait
+check all state UU
+check all dmesg
+stop_md all $md0
+
+mdadm -CR $md0 -l1 -b clustered -n2 $dev0 $dev1 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check all nosync
+check all raid1
+check all bitmap
+check all state UU
+check all dmesg
+mdadm --manage $md0 --add $dev2
+check all spares 1
+check all state UU
+check all dmesg
+stop_md all $md0
+
+exit 0
-- 
2.6.6



* [PATCH 11/19] clustermd_tests: add test case to test manage_add against cluster-raid10
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (9 preceding siblings ...)
  2018-02-02  6:10 ` [PATCH 10/19] clustermd_tests: add test case to test manage_add " Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 12/19] clustermd_tests: add test case to test manage_add-spare against cluster-raid1 Zhilong Liu
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

02r10_Manage_add: it covers 2 scenarios for manage_add.
1. 2 active disks in the md array; set 1 disk 'fail' and 'remove' it
   from the array, then add 1 clean disk to the array.
2. 2 active disks in the array; add 1 new disk to the array directly,
   in which case 'add' is equivalent to 'add-spare'.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/02r10_Manage_add | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 clustermd_tests/02r10_Manage_add

diff --git a/clustermd_tests/02r10_Manage_add b/clustermd_tests/02r10_Manage_add
new file mode 100644
index 0000000..8e878ab
--- /dev/null
+++ b/clustermd_tests/02r10_Manage_add
@@ -0,0 +1,33 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l10 -b clustered --layout n2 -n2 $dev0 $dev1 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check all nosync
+check all raid10
+check all bitmap
+check all state UU
+check all dmesg
+mdadm --manage $md0 --fail $dev0 --remove $dev0
+mdadm --zero $dev2
+mdadm --manage $md0 --add $dev2
+sleep 0.3
+check $NODE1 recovery
+check $NODE1 wait
+check all state UU
+check all dmesg
+stop_md all $md0
+
+mdadm -CR $md0 -l10 -b clustered --layout n2 -n2 $dev0 $dev1 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check all nosync
+check all raid10
+check all bitmap
+check all state UU
+check all dmesg
+mdadm --manage $md0 --add $dev2
+check all spares 1
+check all state UU
+check all dmesg
+stop_md all $md0
+
+exit 0
-- 
2.6.6



* [PATCH 12/19] clustermd_tests: add test case to test manage_add-spare against cluster-raid1
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (10 preceding siblings ...)
  2018-02-02  6:10 ` [PATCH 11/19] clustermd_tests: add test case to test manage_add against cluster-raid10 Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 13/19] clustermd_tests: add test case to test manage_add-spare against cluster-raid10 Zhilong Liu
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

02r1_Manage_add-spare: it covers 2 scenarios for manage_add-spare.
1. 2 active disks in the md array; use add-spare to add a spare disk.
2. 2 active disks and 1 spare in the array; add-spare 1 new disk to
   the array, then check the spare count.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/02r1_Manage_add-spare | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)
 create mode 100644 clustermd_tests/02r1_Manage_add-spare

diff --git a/clustermd_tests/02r1_Manage_add-spare b/clustermd_tests/02r1_Manage_add-spare
new file mode 100644
index 0000000..eab8111
--- /dev/null
+++ b/clustermd_tests/02r1_Manage_add-spare
@@ -0,0 +1,30 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l1 -b clustered -n2 $dev0 $dev1 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check all nosync
+check all raid1
+check all bitmap
+check all state UU
+check all dmesg
+mdadm --manage $md0 --add-spare $dev2
+check all spares 1
+check all state UU
+check all dmesg
+stop_md all $md0
+
+mdadm -CR $md0 -l1 -b clustered -n2 -x1 $dev0 $dev1 $dev2 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1 $dev2
+check all nosync
+check all raid1
+check all bitmap
+check all spares 1
+check all state UU
+check all dmesg
+mdadm --manage $md0 --add-spare $dev3
+check all spares 2
+check all state UU
+check all dmesg
+stop_md all $md0
+
+exit 0
-- 
2.6.6



* [PATCH 13/19] clustermd_tests: add test case to test manage_add-spare against cluster-raid10
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (11 preceding siblings ...)
  2018-02-02  6:10 ` [PATCH 12/19] clustermd_tests: add test case to test manage_add-spare against cluster-raid1 Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 14/19] clustermd_tests: add test case to test manage_re-add against cluster-raid1 Zhilong Liu
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

02r10_Manage_add-spare: it covers 2 scenarios for manage_add-spare.
1. 2 active disks in the md array; use add-spare to add a spare disk.
2. 2 active disks and 1 spare in the array; add-spare 1 new disk to
   the array, then check the spare count.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/02r10_Manage_add-spare | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)
 create mode 100644 clustermd_tests/02r10_Manage_add-spare

diff --git a/clustermd_tests/02r10_Manage_add-spare b/clustermd_tests/02r10_Manage_add-spare
new file mode 100644
index 0000000..9924aa8
--- /dev/null
+++ b/clustermd_tests/02r10_Manage_add-spare
@@ -0,0 +1,30 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l10 -b clustered --layout n2 -n2 $dev0 $dev1 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check all nosync
+check all raid10
+check all bitmap
+check all state UU
+check all dmesg
+mdadm --manage $md0 --add-spare $dev2
+check all spares 1
+check all state UU
+check all dmesg
+stop_md all $md0
+
+mdadm -CR $md0 -l10 -b clustered --layout n2 -n2 -x1 $dev0 $dev1 $dev2 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1 $dev2
+check all nosync
+check all raid10
+check all bitmap
+check all spares 1
+check all state UU
+check all dmesg
+mdadm --manage $md0 --add-spare $dev3
+check all spares 2
+check all state UU
+check all dmesg
+stop_md all $md0
+
+exit 0
-- 
2.6.6



* [PATCH 14/19] clustermd_tests: add test case to test manage_re-add against cluster-raid1
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (12 preceding siblings ...)
  2018-02-02  6:10 ` [PATCH 13/19] clustermd_tests: add test case to test manage_add-spare against cluster-raid10 Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:10 ` [PATCH 15/19] clustermd_tests: add test case to test manage_re-add against cluster-raid10 Zhilong Liu
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

02r1_Manage_re-add:
2 active disks in the array; set 1 disk 'fail' and 'remove' it from the
array, then re-add the disk back to the array, which triggers recovery.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/02r1_Manage_re-add | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
 create mode 100644 clustermd_tests/02r1_Manage_re-add

diff --git a/clustermd_tests/02r1_Manage_re-add b/clustermd_tests/02r1_Manage_re-add
new file mode 100644
index 0000000..dd9c416
--- /dev/null
+++ b/clustermd_tests/02r1_Manage_re-add
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l1 -b clustered -n2 $dev0 $dev1 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check all nosync
+check all raid1
+check all bitmap
+check all state UU
+check all dmesg
+mdadm --manage $md0 --fail $dev0 --remove $dev0
+mdadm --manage $md0 --re-add $dev0
+check $NODE1 recovery
+check all wait
+check all state UU
+check all dmesg
+stop_md all $md0
+
+exit 0
-- 
2.6.6



* [PATCH 15/19] clustermd_tests: add test case to test manage_re-add against cluster-raid10
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (13 preceding siblings ...)
  2018-02-02  6:10 ` [PATCH 14/19] clustermd_tests: add test case to test manage_re-add against cluster-raid1 Zhilong Liu
@ 2018-02-02  6:10 ` Zhilong Liu
  2018-02-02  6:11 ` [PATCH 16/19] clustermd_tests: add test case to test switch-resync against cluster-raid1 Zhilong Liu
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:10 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

02r10_Manage_re-add:
2 active disks in the array; set 1 disk 'fail' and 'remove' it from the
array, then re-add the disk back to the array, which triggers recovery.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/02r10_Manage_re-add | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
 create mode 100644 clustermd_tests/02r10_Manage_re-add

diff --git a/clustermd_tests/02r10_Manage_re-add b/clustermd_tests/02r10_Manage_re-add
new file mode 100644
index 0000000..2288a00
--- /dev/null
+++ b/clustermd_tests/02r10_Manage_re-add
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l10 -b clustered --layout n2 -n2 $dev0 $dev1 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check all nosync
+check all raid10
+check all bitmap
+check all state UU
+check all dmesg
+mdadm --manage $md0 --fail $dev0 --remove $dev0
+mdadm --manage $md0 --re-add $dev0
+check $NODE1 recovery
+check all wait
+check all state UU
+check all dmesg
+stop_md all $md0
+
+exit 0
-- 
2.6.6



* [PATCH 16/19] clustermd_tests: add test case to test switch-resync against cluster-raid1
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (14 preceding siblings ...)
  2018-02-02  6:10 ` [PATCH 15/19] clustermd_tests: add test case to test manage_re-add against cluster-raid10 Zhilong Liu
@ 2018-02-02  6:11 ` Zhilong Liu
  2018-02-02  6:11 ` [PATCH 17/19] clustermd_tests: add test case to test switch-resync against cluster-raid10 Zhilong Liu
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:11 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

03r1_switch-resync:
Create a new array; 1 node does the resync while the other node stays
PENDING. Stop the array on the resyncing node; the other node takes it
over and completes the resync.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/03r1_switch-resync | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
 create mode 100644 clustermd_tests/03r1_switch-resync

diff --git a/clustermd_tests/03r1_switch-resync b/clustermd_tests/03r1_switch-resync
new file mode 100644
index 0000000..d99e1c5
--- /dev/null
+++ b/clustermd_tests/03r1_switch-resync
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l1 -b clustered -n2 $dev0 $dev1
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check $NODE1 resync
+check $NODE2 PENDING
+stop_md $NODE1 $md0
+check $NODE2 resync
+check $NODE2 wait
+mdadm -A $md0 $dev0 $dev1
+check all raid1
+check all bitmap
+check all nosync
+check all state UU
+check all dmesg
+stop_md all $md0
+
+exit 0
-- 
2.6.6



* [PATCH 17/19] clustermd_tests: add test case to test switch-resync against cluster-raid10
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (15 preceding siblings ...)
  2018-02-02  6:11 ` [PATCH 16/19] clustermd_tests: add test case to test switch-resync against cluster-raid1 Zhilong Liu
@ 2018-02-02  6:11 ` Zhilong Liu
  2018-02-02  6:11 ` [PATCH 18/19] clustermd_tests: add test case to test switch-recovery against cluster-raid1 Zhilong Liu
  2018-02-02  6:11 ` [PATCH 19/19] clustermd_tests: add test case to test switch-recovery against cluster-raid10 Zhilong Liu
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:11 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

03r10_switch-resync:
Create a new array; 1 node does the resync while the other node stays
PENDING. Stop the array on the resyncing node; the other node takes it
over and completes the resync.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/03r10_switch-resync | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
 create mode 100644 clustermd_tests/03r10_switch-resync

diff --git a/clustermd_tests/03r10_switch-resync b/clustermd_tests/03r10_switch-resync
new file mode 100644
index 0000000..127c569
--- /dev/null
+++ b/clustermd_tests/03r10_switch-resync
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l10 -b clustered --layout n2 -n2 $dev0 $dev1
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check $NODE1 resync
+check $NODE2 PENDING
+stop_md $NODE1 $md0
+check $NODE2 resync
+check $NODE2 wait
+mdadm -A $md0 $dev0 $dev1
+check all raid10
+check all bitmap
+check all nosync
+check all state UU
+check all dmesg
+stop_md all $md0
+
+exit 0
-- 
2.6.6



* [PATCH 18/19] clustermd_tests: add test case to test switch-recovery against cluster-raid1
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (16 preceding siblings ...)
  2018-02-02  6:11 ` [PATCH 17/19] clustermd_tests: add test case to test switch-resync against cluster-raid10 Zhilong Liu
@ 2018-02-02  6:11 ` Zhilong Liu
  2018-02-02  6:11 ` [PATCH 19/19] clustermd_tests: add test case to test switch-recovery against cluster-raid10 Zhilong Liu
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:11 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

03r1_switch-recovery:
Create a new array with 2 active and 1 spare disk, then set 1 active
disk 'fail'; this triggers recovery and the spare disk replaces the
failed disk. Stop the array on the node doing the recovery; the other
node takes it over and completes the recovery.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/03r1_switch-recovery | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
 create mode 100644 clustermd_tests/03r1_switch-recovery

diff --git a/clustermd_tests/03r1_switch-recovery b/clustermd_tests/03r1_switch-recovery
new file mode 100644
index 0000000..a1a7cbe
--- /dev/null
+++ b/clustermd_tests/03r1_switch-recovery
@@ -0,0 +1,21 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l1 -b clustered -n2 -x1 $dev0 $dev1 $dev2 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1 $dev2
+check all nosync
+check all raid1
+check all bitmap
+check all spares 1
+check all state UU
+check all dmesg
+mdadm --manage $md0 --fail $dev0
+sleep 0.3
+check $NODE1 recovery
+stop_md $NODE1 $md0
+check $NODE2 recovery
+check $NODE2 wait
+check $NODE2 state UU
+check all dmesg
+stop_md $NODE2 $md0
+
+exit 0
-- 
2.6.6



* [PATCH 19/19] clustermd_tests: add test case to test switch-recovery against cluster-raid10
  2018-02-02  6:10 [PATCH 00/19] mdadm/clustermd_tests: update the testing part Zhilong Liu
                   ` (17 preceding siblings ...)
  2018-02-02  6:11 ` [PATCH 18/19] clustermd_tests: add test case to test switch-recovery against cluster-raid1 Zhilong Liu
@ 2018-02-02  6:11 ` Zhilong Liu
  18 siblings, 0 replies; 20+ messages in thread
From: Zhilong Liu @ 2018-02-02  6:11 UTC (permalink / raw)
  To: Jes.Sorensen; +Cc: linux-raid, gqjiang, Zhilong Liu

03r10_switch-recovery:
Create a new array with 2 active and 1 spare disk, then set 1 active
disk 'fail'; this triggers recovery and the spare disk replaces the
failed disk. Stop the array on the node doing the recovery; the other
node takes it over and completes the recovery.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
---
 clustermd_tests/03r10_switch-recovery | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
 create mode 100644 clustermd_tests/03r10_switch-recovery

diff --git a/clustermd_tests/03r10_switch-recovery b/clustermd_tests/03r10_switch-recovery
new file mode 100644
index 0000000..867388d
--- /dev/null
+++ b/clustermd_tests/03r10_switch-recovery
@@ -0,0 +1,21 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l10 -b clustered --layout n2 -n2 -x1 $dev0 $dev1 $dev2 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1 $dev2
+check all nosync
+check all raid10
+check all bitmap
+check all spares 1
+check all state UU
+check all dmesg
+mdadm --manage $md0 --fail $dev0
+sleep 0.2
+check $NODE1 recovery
+stop_md $NODE1 $md0
+check $NODE2 recovery
+check $NODE2 wait
+check $NODE2 state UU
+check all dmesg
+stop_md $NODE2 $md0
+
+exit 0
-- 
2.6.6


