* [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex
@ 2021-06-03  9:59 Leo Yan
  2021-06-03  9:59 ` [PATCH v1 01/17] tests: Enable the testing for IDM locking scheme Leo Yan
                   ` (16 more replies)
  0 siblings, 17 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

This patch series enables In-Drive-Mutex (IDM) testing in LVM2.

Patches 01 ~ 04 are fundamental changes for supporting IDM testing.

  - Patch 01 provides the scripts for environment setup and teardown,
    and introduces the testing option for IDM;
  - Patch 02 extends the backing device option from a single device
    name to multiple device names separated by commas.  This gives
    more flexibility for testing IDM drives;
  - Patch 03 introduces the SCSI commands to explicitly clean up the
    drive firmware's IDM context;
  - Patch 04 adds a checker for the lvmlockd log, which provides a
    helper for verifying the failure handling.

Patches 05 ~ 07 are stress testing patches; they cover single-thread
and multi-thread test cases, which put pressure on lvmlockd and the
lock manager by issuing multiple requests simultaneously.

Patches 08 ~ 13 are failure handling test cases.  Patch 08 introduces
a utility for injecting failures into the IDM lock manager, so that
drive failures can be emulated in the lock manager; patch 09 verifies
the handling when the failure happens in lvmlockd; patches 10~12 test
the failure handling by deliberately deleting part or all of the
drives, then checking whether the IDM lock manager and lvmlockd handle
these failures as expected; patch 13 checks the failure handling when
the IDM lock manager is malfunctioning.

Patches 14 ~ 17 introduce multi-host test cases.  Every test provides
a pair of scripts, one with the suffix "hosta.sh" and the other with
the suffix "hostb.sh".  The designed sequence is to first launch the
"hosta.sh" script on one host and then run the "hostb.sh" script on
another host; with the cooperation of these two scripts, the locking
scheme can be verified across multiple hosts.
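
For example (taken from patch 14), the VG test pair is run as:

  On the host A:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hosta.sh

  On the host B:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hostb.sh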

After applying this patch series, the IDM locking scheme was verified
on CentOS 7 (centos-release-7-9.2009.1.el7.centos.x86_64) with the
following commands:

  # export IDM_DRIVES=/dev/sdb2,/dev/sdd2,/dev/sde2,/dev/sdg2,\
         	     /dev/sdb3,/dev/sdd3,/dev/sde3,/dev/sdg3,\
         	     /dev/sdb4,/dev/sdd4,/dev/sde4,/dev/sdg4,\
         	     /dev/sdb5,/dev/sdd5,/dev/sde5,/dev/sdg5
  # make check_lvmlockd_idm LVM_TEST_BACKING_DEVICE=$IDM_DRIVES

The environment variable "IDM_DRIVES" is set to the partitions used
for PVs; the drives containing these partitions support IDM locking.

The test results are recorded in the file [1]:
- The results show no regression introduced in the LVM core paths;
- There are 15 failing cases for the IDM locking scheme; the main
  reasons for these failures are:
  1. A drive path cannot be found, so the IDM locking algorithm cannot
     achieve a majority;
  2. The disk size is not sufficient for the test case;
  3. The drive device name cannot be found after deleting the PV's
     device mapper;
  4. A larger log line limit needs to be specified, otherwise the test
     is interrupted by the logging system.

[1] https://github.com/Seagate/propeller/blob/master/doc/lvm_test.md


Leo Yan (17):
  tests: Enable the testing for IDM locking scheme
  tests: Support multiple backing devices
  tests: Cleanup idm context when prepare devices
  tests: Add checking for lvmlockd log
  tests: stress: Add single thread stress testing
  tests: stress: Add multi-threads stress testing for VG/LV
  tests: stress: Add multi-threads stress testing for PV/VG/LV
  tests: Support idm failure injection
  tests: Add testing for lvmlockd failure
  tests: idm: Add testing for the fabric failure
  tests: idm: Add testing for the fabric failure and timeout
  tests: idm: Add testing for the fabric's half brain failure
  tests: idm: Add testing for IDM lock manager failure
  tests: multi-hosts: Add VG testing
  tests: multi-hosts: Add LV testing
  tests: multi-hosts: Test lease timeout with LV exclusive mode
  tests: multi-hosts: Test lease timeout with LV shareable mode

 test/Makefile.in                              |  15 +++
 test/lib/aux.sh                               |  73 +++++++++++-
 test/lib/check.sh                             |   5 +
 test/lib/flavour-udev-lvmlockd-idm.sh         |   5 +
 test/lib/idm_inject_failure.c                 |  55 +++++++++
 test/lib/inittest.sh                          |   8 +-
 test/shell/aa-lvmlockd-idm-prepare.sh         |  20 ++++
 test/shell/idm_fabric_failure.sh              |  58 +++++++++
 test/shell/idm_fabric_failure_half_brain.sh   |  78 ++++++++++++
 test/shell/idm_fabric_failure_timeout.sh      |  74 ++++++++++++
 test/shell/idm_ilm_failure.sh                 |  80 +++++++++++++
 test/shell/lvmlockd-lv-types.sh               |   6 +
 test/shell/lvmlockd_failure.sh                |  37 ++++++
 test/shell/multi_hosts_lv_ex_timeout_hosta.sh |  87 ++++++++++++++
 test/shell/multi_hosts_lv_ex_timeout_hostb.sh |  56 +++++++++
 test/shell/multi_hosts_lv_hosta.sh            |  78 ++++++++++++
 test/shell/multi_hosts_lv_hostb.sh            |  61 ++++++++++
 test/shell/multi_hosts_lv_sh_timeout_hosta.sh |  87 ++++++++++++++
 test/shell/multi_hosts_lv_sh_timeout_hostb.sh |  56 +++++++++
 test/shell/multi_hosts_vg_hosta.sh            |  45 +++++++
 test/shell/multi_hosts_vg_hostb.sh            |  52 ++++++++
 test/shell/stress_multi_threads_1.sh          | 111 ++++++++++++++++++
 test/shell/stress_multi_threads_2.sh          |  93 +++++++++++++++
 test/shell/stress_single_thread.sh            |  59 ++++++++++
 test/shell/zz-lvmlockd-idm-remove.sh          |  29 +++++
 25 files changed, 1323 insertions(+), 5 deletions(-)
 create mode 100644 test/lib/flavour-udev-lvmlockd-idm.sh
 create mode 100644 test/lib/idm_inject_failure.c
 create mode 100644 test/shell/aa-lvmlockd-idm-prepare.sh
 create mode 100644 test/shell/idm_fabric_failure.sh
 create mode 100644 test/shell/idm_fabric_failure_half_brain.sh
 create mode 100644 test/shell/idm_fabric_failure_timeout.sh
 create mode 100644 test/shell/idm_ilm_failure.sh
 create mode 100644 test/shell/lvmlockd_failure.sh
 create mode 100644 test/shell/multi_hosts_lv_ex_timeout_hosta.sh
 create mode 100644 test/shell/multi_hosts_lv_ex_timeout_hostb.sh
 create mode 100644 test/shell/multi_hosts_lv_hosta.sh
 create mode 100644 test/shell/multi_hosts_lv_hostb.sh
 create mode 100644 test/shell/multi_hosts_lv_sh_timeout_hosta.sh
 create mode 100644 test/shell/multi_hosts_lv_sh_timeout_hostb.sh
 create mode 100644 test/shell/multi_hosts_vg_hosta.sh
 create mode 100644 test/shell/multi_hosts_vg_hostb.sh
 create mode 100644 test/shell/stress_multi_threads_1.sh
 create mode 100644 test/shell/stress_multi_threads_2.sh
 create mode 100644 test/shell/stress_single_thread.sh
 create mode 100644 test/shell/zz-lvmlockd-idm-remove.sh

-- 
2.25.1




* [PATCH v1 01/17] tests: Enable the testing for IDM locking scheme
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 02/17] tests: Support multiple backing devices Leo Yan
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

This patch introduces the testing option LVM_TEST_LOCK_TYPE_IDM.  When
this option is specified, the Seagate IDM lock manager is launched as
the backend for testing.  Also add the prepare and remove shell
scripts for IDM.
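
With the flavour and make target added below, the IDM tests can be run
with, for example (the backing drive list here is illustrative):

  # make check_lvmlockd_idm LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3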

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/Makefile.in                      |  9 +++++++++
 test/lib/aux.sh                       | 25 +++++++++++++++++++++++
 test/lib/flavour-udev-lvmlockd-idm.sh |  5 +++++
 test/lib/inittest.sh                  |  3 ++-
 test/shell/aa-lvmlockd-idm-prepare.sh | 20 ++++++++++++++++++
 test/shell/lvmlockd-lv-types.sh       |  6 ++++++
 test/shell/zz-lvmlockd-idm-remove.sh  | 29 +++++++++++++++++++++++++++
 7 files changed, 96 insertions(+), 1 deletion(-)
 create mode 100644 test/lib/flavour-udev-lvmlockd-idm.sh
 create mode 100644 test/shell/aa-lvmlockd-idm-prepare.sh
 create mode 100644 test/shell/zz-lvmlockd-idm-remove.sh

diff --git a/test/Makefile.in b/test/Makefile.in
index e4cd3aac5..662974be6 100644
--- a/test/Makefile.in
+++ b/test/Makefile.in
@@ -85,6 +85,7 @@ help:
 	@echo "  check_all_lvmpolld     Run all tests with lvmpolld daemon."
 	@echo "  check_lvmlockd_sanlock Run tests with lvmlockd and sanlock."
 	@echo "  check_lvmlockd_dlm     Run tests with lvmlockd and dlm."
+	@echo "  check_lvmlockd_idm	Run tests with lvmlockd and idm."
 	@echo "  check_lvmlockd_test    Run tests with lvmlockd --test."
 	@echo "  run-unit-test          Run only unit tests (root not needed)."
 	@echo "  clean			Clean dir."
@@ -168,6 +169,13 @@ check_lvmlockd_dlm: .tests-stamp
 		--flavours udev-lvmlockd-dlm --only shell/aa-lvmlockd-dlm-prepare.sh,$(T),shell/zz-lvmlockd-dlm-remove.sh --skip $(S)
 endif
 
+ifeq ("@BUILD_LVMLOCKD@", "yes")
+check_lvmlockd_idm: .tests-stamp
+	VERBOSE=$(VERBOSE) ./lib/runner \
+		--testdir . --outdir $(LVM_TEST_RESULTS) \
+		--flavours udev-lvmlockd-idm --only shell/aa-lvmlockd-idm-prepare.sh,$(T),shell/zz-lvmlockd-idm-remove.sh --skip $(S)
+endif
+
 ifeq ("@BUILD_LVMLOCKD@", "yes")
 check_lvmlockd_test: .tests-stamp
 	VERBOSE=$(VERBOSE) ./lib/runner \
@@ -189,6 +197,7 @@ LIB_FLAVOURS = \
  flavour-udev-lvmpolld\
  flavour-udev-lvmlockd-sanlock\
  flavour-udev-lvmlockd-dlm\
+ flavour-udev-lvmlockd-idm\
  flavour-udev-lvmlockd-test\
  flavour-udev-vanilla
 
diff --git a/test/lib/aux.sh b/test/lib/aux.sh
index 1a1f11a1d..97c7ac68b 100644
--- a/test/lib/aux.sh
+++ b/test/lib/aux.sh
@@ -119,6 +119,20 @@ prepare_sanlock() {
 	fi
 }
 
+prepare_idm() {
+	if pgrep seagate_ilm; then
+		echo "Cannot run while existing seagate_ilm process exists"
+		exit 1
+	fi
+
+	seagate_ilm -D 0 -l 0 -L 7 -E 7 -S 7
+
+	if ! pgrep seagate_ilm; then
+		echo "Failed to start seagate_ilm"
+		exit 1
+	fi
+}
+
 prepare_lvmlockd() {
 	if pgrep lvmlockd ; then
 		echo "Cannot run while existing lvmlockd process exists"
@@ -135,6 +149,11 @@ prepare_lvmlockd() {
 		echo "starting lvmlockd for dlm"
 		lvmlockd
 
+	elif test -n "$LVM_TEST_LOCK_TYPE_IDM"; then
+		# make check_lvmlockd_idm
+		echo "starting lvmlockd for idm"
+		lvmlockd -g idm
+
 	elif test -n "$LVM_TEST_LVMLOCKD_TEST_DLM"; then
 		# make check_lvmlockd_test
 		echo "starting lvmlockd --test (dlm)"
@@ -144,6 +163,12 @@ prepare_lvmlockd() {
 		# FIXME: add option for this combination of --test and sanlock
 		echo "starting lvmlockd --test (sanlock)"
 		lvmlockd --test -g sanlock -o 2
+
+	elif test -n "$LVM_TEST_LVMLOCKD_TEST_IDM"; then
+		# make check_lvmlockd_test
+		echo "starting lvmlockd --test (idm)"
+		lvmlockd --test -g idm
+
 	else
 		echo "not starting lvmlockd"
 		exit 0
diff --git a/test/lib/flavour-udev-lvmlockd-idm.sh b/test/lib/flavour-udev-lvmlockd-idm.sh
new file mode 100644
index 000000000..e9f8908df
--- /dev/null
+++ b/test/lib/flavour-udev-lvmlockd-idm.sh
@@ -0,0 +1,5 @@
+export LVM_TEST_LOCKING=1
+export LVM_TEST_LVMPOLLD=1
+export LVM_TEST_LVMLOCKD=1
+export LVM_TEST_LOCK_TYPE_IDM=1
+export LVM_TEST_DEVDIR=/dev
diff --git a/test/lib/inittest.sh b/test/lib/inittest.sh
index 0fd651710..6b4bcb348 100644
--- a/test/lib/inittest.sh
+++ b/test/lib/inittest.sh
@@ -40,6 +40,7 @@ LVM_TEST_LVMPOLLD=${LVM_TEST_LVMPOLLD-}
 LVM_TEST_DEVICES_FILE=${LVM_TEST_DEVICES_FILE-}
 LVM_TEST_LOCK_TYPE_DLM=${LVM_TEST_LOCK_TYPE_DLM-}
 LVM_TEST_LOCK_TYPE_SANLOCK=${LVM_TEST_LOCK_TYPE_SANLOCK-}
+LVM_TEST_LOCK_TYPE_IDM=${LVM_TEST_LOCK_TYPE_IDM-}
 
 SKIP_WITHOUT_CLVMD=${SKIP_WITHOUT_CLVMD-}
 SKIP_WITH_CLVMD=${SKIP_WITH_CLVMD-}
@@ -64,7 +65,7 @@ unset CDPATH
 
 export LVM_TEST_BACKING_DEVICE LVM_TEST_DEVDIR LVM_TEST_NODEBUG
 export LVM_TEST_LVMLOCKD LVM_TEST_LVMLOCKD_TEST
-export LVM_TEST_LVMPOLLD LVM_TEST_LOCK_TYPE_DLM LVM_TEST_LOCK_TYPE_SANLOCK
+export LVM_TEST_LVMPOLLD LVM_TEST_LOCK_TYPE_DLM LVM_TEST_LOCK_TYPE_SANLOCK LVM_TEST_LOCK_TYPE_IDM
 export LVM_TEST_DEVICES_FILE
 # grab some common utilities
 . lib/utils
diff --git a/test/shell/aa-lvmlockd-idm-prepare.sh b/test/shell/aa-lvmlockd-idm-prepare.sh
new file mode 100644
index 000000000..8faff3bc2
--- /dev/null
+++ b/test/shell/aa-lvmlockd-idm-prepare.sh
@@ -0,0 +1,20 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2021 Seagate.  All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+test_description='Set up things to run tests with idm'
+
+. lib/inittest
+
+[ -z "$LVM_TEST_LOCK_TYPE_IDM" ] && skip;
+
+aux prepare_idm
+aux prepare_lvmlockd
diff --git a/test/shell/lvmlockd-lv-types.sh b/test/shell/lvmlockd-lv-types.sh
index 6138e5623..ee350b1c6 100644
--- a/test/shell/lvmlockd-lv-types.sh
+++ b/test/shell/lvmlockd-lv-types.sh
@@ -36,6 +36,12 @@ LOCKARGS2="dlm"
 LOCKARGS3="dlm"
 fi
 
+if test -n "$LVM_TEST_LOCK_TYPE_IDM" ; then
+LOCKARGS1="idm"
+LOCKARGS2="idm"
+LOCKARGS3="idm"
+fi
+
 aux prepare_devs 5
 
 vgcreate --shared $vg "$dev1" "$dev2" "$dev3" "$dev4" "$dev5"
diff --git a/test/shell/zz-lvmlockd-idm-remove.sh b/test/shell/zz-lvmlockd-idm-remove.sh
new file mode 100644
index 000000000..25943a579
--- /dev/null
+++ b/test/shell/zz-lvmlockd-idm-remove.sh
@@ -0,0 +1,29 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2021 Seagate. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+test_description='Remove the idm test setup'
+
+. lib/inittest
+
+[ -z "$LVM_TEST_LOCK_TYPE_IDM" ] && skip;
+
+# FIXME: collect debug logs (only if a test failed?)
+# lvmlockctl -d > lvmlockd-debug.txt
+# dlm_tool dump > dlm-debug.txt
+
+lvmlockctl --stop-lockspaces
+sleep 1
+killall lvmlockd
+sleep 1
+killall lvmlockd || true
+sleep 1
+killall seagate_ilm
-- 
2.25.1




* [PATCH v1 02/17] tests: Support multiple backing devices
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
  2021-06-03  9:59 ` [PATCH v1 01/17] tests: Enable the testing for IDM locking scheme Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 03/17] tests: Cleanup idm context when prepare devices Leo Yan
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

In the current implementation, the option "LVM_TEST_BACKING_DEVICE"
only supports specifying one backing device; this patch extends the
option to support multiple backing devices, using a comma as the
separator.  E.g. the command below specifies two backing devices:

  make check_lvmlockd_idm LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3

This allows the testing to work on multiple drives and verifies
whether the locking scheme works as expected in the multiple-drive
case.  For example, with the Seagate IDM locking scheme, if a VG uses
two PVs and every PV resides on its own drive, the locking operations
are sent to the two drives respectively; so the extension of
"LVM_TEST_BACKING_DEVICE" helps to verify different drive
configurations for locking.
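
As a minimal sketch, the comma-separated value is split into an array
in aux.sh with a plain IFS read (the device names in the comment are
illustrative):

  IFS=',' read -r -a BACKING_DEVICE_ARRAY <<< "$LVM_TEST_BACKING_DEVICE"
  # e.g. BACKING_DEVICE_ARRAY[0]=/dev/sdj3, BACKING_DEVICE_ARRAY[1]=/dev/sdk3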

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/lib/aux.sh | 31 ++++++++++++++++++++++++++++---
 1 file changed, 28 insertions(+), 3 deletions(-)

diff --git a/test/lib/aux.sh b/test/lib/aux.sh
index 97c7ac68b..a592dad81 100644
--- a/test/lib/aux.sh
+++ b/test/lib/aux.sh
@@ -901,11 +901,22 @@ prepare_backing_dev() {
 	local size=${1=32}
 	shift
 
+	if test -n "$LVM_TEST_BACKING_DEVICE"; then
+		IFS=',' read -r -a BACKING_DEVICE_ARRAY <<< "$LVM_TEST_BACKING_DEVICE"
+
+		for d in "${BACKING_DEVICE_ARRAY[@]}"; do
+			if test ! -b "$d"; then
+				echo "Device $d doesn't exist!"
+				return 1
+			fi
+		done
+	fi
+
 	if test -f BACKING_DEV; then
 		BACKING_DEV=$(< BACKING_DEV)
 		return 0
-	elif test -b "$LVM_TEST_BACKING_DEVICE"; then
-		BACKING_DEV=$LVM_TEST_BACKING_DEVICE
+	elif test -n "$LVM_TEST_BACKING_DEVICE"; then
+		BACKING_DEV=${BACKING_DEVICE_ARRAY[0]}
 		echo "$BACKING_DEV" > BACKING_DEV
 		return 0
 	elif test "${LVM_TEST_PREFER_BRD-1}" = "1" && \
@@ -953,7 +964,14 @@ prepare_devs() {
 		local dev="$DM_DEV_DIR/mapper/$name"
 		DEVICES[$count]=$dev
 		count=$((  count + 1 ))
-		echo 0 $size linear "$BACKING_DEV" $(( ( i - 1 ) * size + ( header_shift * 2048 ) )) > "$name.table"
+		# If the backing device number can meet the requirement for PV devices,
+		# then allocate a dedicated backing device for PV; otherwise, rollback
+		# to use single backing device for device-mapper.
+		if [ -n "$LVM_TEST_BACKING_DEVICE" ] && [ $n -le ${#BACKING_DEVICE_ARRAY[@]} ]; then
+			echo 0 $size linear "${BACKING_DEVICE_ARRAY[$(( count - 1 ))]}" $(( header_shift * 2048 )) > "$name.table"
+		else
+			echo 0 $size linear "$BACKING_DEV" $(( ( i - 1 ) * size + ( header_shift * 2048 ) )) > "$name.table"
+		fi
 		dmsetup create -u "TEST-$name" "$name" "$name.table" || touch CREATE_FAILED &
 		test -f CREATE_FAILED && break;
 	done
@@ -971,6 +989,13 @@ prepare_devs() {
 		return $?
 	fi
 
+	for d in "${BACKING_DEVICE_ARRAY[@]}"; do
+		cnt=$((`blockdev --getsize64 $d` / 1024 / 1024))
+		cnt=$(( cnt < 1000 ? cnt : 1000 ))
+		dd if=/dev/zero of="$d" bs=1MB count=$cnt
+		wipefs -a "$d" 2>/dev/null || true
+	done
+
 	# non-ephemeral devices need to be cleared between tests
 	test -f LOOP -o -f RAMDISK || for d in "${DEVICES[@]}"; do
 		# ensure disk header is always zeroed
-- 
2.25.1




* [PATCH v1 03/17] tests: Cleanup idm context when prepare devices
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
  2021-06-03  9:59 ` [PATCH v1 01/17] tests: Enable the testing for IDM locking scheme Leo Yan
  2021-06-03  9:59 ` [PATCH v1 02/17] tests: Support multiple backing devices Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 04/17] tests: Add checking for lvmlockd log Leo Yan
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

For testing the idm locking scheme, it is good to clean up the idm
context before running the test cases.  This gives a clean environment
for the testing.
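
As a usage sketch (assuming the sg3_utils tools sg_map26 and sg_raw
used by the helper are installed, and LVM_TEST_LOCK_TYPE_IDM is set),
a test could also clean a single backing drive explicitly:

  # issues the raw SCSI read/write commands shown in the diff below
  aux cleanup_idm_context "/dev/sdj3"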

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/lib/aux.sh | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/test/lib/aux.sh b/test/lib/aux.sh
index a592dad81..bb189f466 100644
--- a/test/lib/aux.sh
+++ b/test/lib/aux.sh
@@ -897,6 +897,20 @@ wipefs_a() {
 	udev_wait
 }
 
+cleanup_idm_context() {
+	local dev=$1
+
+	if [ -n "$LVM_TEST_LOCK_TYPE_IDM" ]; then
+		sg_dev=`sg_map26 ${dev}`
+		echo "Cleanup IDM context for drive ${dev} ($sg_dev)"
+		sg_raw -v -r 512 -o /tmp/idm_tmp_data.bin $sg_dev \
+			88 00 01 00 00 00 00 20 FF 01 00 00 00 01 00 00
+		sg_raw -v -s 512 -i /tmp/idm_tmp_data.bin $sg_dev \
+			8E 00 FF 00 00 00 00 00 00 00 00 00 00 01 00 00
+		rm /tmp/idm_tmp_data.bin
+	fi
+}
+
 prepare_backing_dev() {
 	local size=${1=32}
 	shift
@@ -989,12 +1003,15 @@ prepare_devs() {
 		return $?
 	fi
 
-	for d in "${BACKING_DEVICE_ARRAY[@]}"; do
-		cnt=$((`blockdev --getsize64 $d` / 1024 / 1024))
-		cnt=$(( cnt < 1000 ? cnt : 1000 ))
-		dd if=/dev/zero of="$d" bs=1MB count=$cnt
-		wipefs -a "$d" 2>/dev/null || true
-	done
+	if [ -n "$LVM_TEST_BACKING_DEVICE" ]; then
+		for d in "${BACKING_DEVICE_ARRAY[@]}"; do
+			cnt=$((`blockdev --getsize64 $d` / 1024 / 1024))
+			cnt=$(( cnt < 1000 ? cnt : 1000 ))
+			dd if=/dev/zero of="$d" bs=1MB count=$cnt
+			wipefs -a "$d" 2>/dev/null || true
+			cleanup_idm_context "$d"
+		done
+	fi
 
 	# non-ephemeral devices need to be cleared between tests
 	test -f LOOP -o -f RAMDISK || for d in "${DEVICES[@]}"; do
-- 
2.25.1




* [PATCH v1 04/17] tests: Add checking for lvmlockd log
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (2 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 03/17] tests: Cleanup idm context when prepare devices Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 05/17] tests: stress: Add single thread stress testing Leo Yan
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

Add a checker for the lvmlockd log; this can be used by test cases
that are interested in the interaction with lvmlockd.
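
For example, later tests in this series use the new checker to assert
whether lvmlockd has flagged a VG to be killed:

  check grep_lvmlockd_dump "S lvm_$vg kill_vg"
  not check grep_lvmlockd_dump "S lvm_$vg kill_vg"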

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/lib/check.sh | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/test/lib/check.sh b/test/lib/check.sh
index 8493bde83..1f261940a 100644
--- a/test/lib/check.sh
+++ b/test/lib/check.sh
@@ -456,6 +456,11 @@ grep_dmsetup() {
 	grep -q "${@:3}" out || die "Expected output \"" "${@:3}" "\" from dmsetup $1 not found!"
 }
 
+grep_lvmlockd_dump() {
+	lvmlockctl --dump | tee out
+	grep -q "${@:1}" out || die "Expected output \"" "${@:1}" "\" from lvmlockctl --dump not found!"
+}
+
 #set -x
 unset LVM_VALGRIND
 "$@"
-- 
2.25.1




* [PATCH v1 05/17] tests: stress: Add single thread stress testing
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (3 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 04/17] tests: Add checking for lvmlockd log Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 06/17] tests: stress: Add multi-threads stress testing for VG/LV Leo Yan
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

This patch adds a stress test, which loops creating an LV and then
activating and deactivating the LV in a single thread.
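
The test can be selected on its own with the T= variable used
elsewhere in this series, e.g. (the backing devices are site-specific):

  # make check_lvmlockd_idm LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3 \
	T=stress_single_thread.sh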

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/shell/stress_single_thread.sh | 59 ++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)
 create mode 100644 test/shell/stress_single_thread.sh

diff --git a/test/shell/stress_single_thread.sh b/test/shell/stress_single_thread.sh
new file mode 100644
index 000000000..e18d4900b
--- /dev/null
+++ b/test/shell/stress_single_thread.sh
@@ -0,0 +1,59 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2021 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+aux prepare_vg 3
+
+for i in {1..1000}
+do
+	# Create new logical volume and deactivate it
+	lvcreate -a n --zero n -l 1 -n foo $vg
+
+	# Set minor number
+	lvchange $vg/foo -My --major=255 --minor=123
+
+	# Activate logical volume
+	lvchange $vg/foo -a y
+
+	# Check device mapper
+	dmsetup info $vg-foo | tee info
+	grep -E "^Major, minor: *[0-9]+, 123" info
+
+	# Extend logical volume by 10 extents
+	lvextend -l+10 $vg/foo
+
+	# Deactivate logical volume
+	lvchange $vg/foo -a n
+
+	# Deactivate volume group
+	vgchange $vg -a n
+
+	# Activate volume group with shareable mode
+	vgchange $vg -a sy
+
+	# lvextend fails due to mismatched lock mode
+	not lvextend -l+10 $vg/foo
+
+	# Promote volume group to exclusive mode
+	vgchange $vg -a ey
+
+	lvreduce -f -l-4 $vg/foo
+
+	lvchange -an $vg/foo
+	lvremove $vg/foo
+done
+
+vgremove -ff $vg
-- 
2.25.1




* [PATCH v1 06/17] tests: stress: Add multi-threads stress testing for VG/LV
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (4 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 05/17] tests: stress: Add single thread stress testing Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 07/17] tests: stress: Add multi-threads stress testing for PV/VG/LV Leo Yan
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

This patch adds a stress test which launches two threads; each thread
creates an LV and then activates and deactivates the LV in a loop.
This exercises multi-threading in lvmlockd and its backend lock
manager.
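
The two "threads" are realized as background shell jobs which the test
waits on at the end (excerpted from the script below):

  test_vg_thread1 &
  WAITPID=$!

  test_vg_thread2 &
  WAITPID="$WAITPID "$!

  wait $WAITPID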

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/shell/stress_multi_threads_1.sh | 111 +++++++++++++++++++++++++++
 1 file changed, 111 insertions(+)
 create mode 100644 test/shell/stress_multi_threads_1.sh

diff --git a/test/shell/stress_multi_threads_1.sh b/test/shell/stress_multi_threads_1.sh
new file mode 100644
index 000000000..c96fa244b
--- /dev/null
+++ b/test/shell/stress_multi_threads_1.sh
@@ -0,0 +1,111 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2021 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+aux prepare_devs 6
+get_devs
+
+pvcreate -M2 "${DEVICES[@]}"
+
+vgcreate --shared -M2 "$vg1" "$dev1" "$dev2" "$dev3"
+vgcreate --shared -M2 "$vg2" "$dev4" "$dev5" "$dev6"
+
+test_vg_thread1()
+{
+	for i in {1..1000}
+	do
+		# Create new logical volume and deactivate it
+		lvcreate -a n --zero n -l 1 -n foo $vg1
+
+		# Set minor number
+		lvchange $vg1/foo -My --major=255 --minor=123
+
+		# Activate logical volume
+		lvchange $vg1/foo -a y
+
+		# Extend logical volume by 10 extents
+		lvextend -l+10 $vg1/foo
+
+		# Deactivate logical volume
+		lvchange $vg1/foo -a n
+
+		# Deactivate volume group
+		vgchange $vg1 -a n
+
+		# Activate volume group with shareable mode
+		vgchange $vg1 -a sy
+
+		# lvextend fails due to mismatched lock mode
+		not lvextend -l+10 $vg1/foo
+
+		# Promote volume group to exclusive mode
+		vgchange $vg1 -a ey
+
+		lvreduce -f -l-4 $vg1/foo
+
+		lvchange -an $vg1/foo
+		lvremove $vg1/foo
+	done
+}
+
+test_vg_thread2()
+{
+	for i in {1..1000}
+	do
+		# Create new logical volume and deactivate it
+		lvcreate -a n --zero n -l 1 -n foo $vg2
+
+		# Set minor number
+		lvchange $vg2/foo -My --major=255 --minor=124
+
+		# Activate logical volume
+		lvchange $vg2/foo -a y
+
+		# Extend logical volume by 10 extents
+		lvextend -l+10 $vg2/foo
+
+		# Deactivate logical volume
+		lvchange $vg2/foo -a n
+
+		# Deactivate volume group
+		vgchange $vg2 -a n
+
+		# Activate volume group with shareable mode
+		vgchange $vg2 -a sy
+
+		# lvextend fails due to mismatched lock mode
+		not lvextend -l+10 $vg2/foo
+
+		# Promote volume group to exclusive mode
+		vgchange $vg2 -a ey
+
+		lvreduce -f -l-4 $vg2/foo
+
+		lvchange -an $vg2/foo
+		lvremove $vg2/foo
+	done
+}
+
+test_vg_thread1 &
+WAITPID=$!
+
+test_vg_thread2 &
+WAITPID="$WAITPID "$!
+
+wait $WAITPID
+
+vgremove -ff $vg1
+vgremove -ff $vg2
-- 
2.25.1




* [PATCH v1 07/17] tests: stress: Add multi-threads stress testing for PV/VG/LV
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (5 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 06/17] tests: stress: Add multi-threads stress testing for VG/LV Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 08/17] tests: Support idm failure injection Leo Yan
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

This patch adds a stress test which launches three threads: one thread
creates/removes a PV, one thread creates/removes a VG, and the last
thread performs LV operations.
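
It is run in the same way as the other stress tests, e.g. (the backing
devices are site-specific):

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
	T=stress_multi_threads_2.sh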

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/shell/stress_multi_threads_2.sh | 93 ++++++++++++++++++++++++++++
 1 file changed, 93 insertions(+)
 create mode 100644 test/shell/stress_multi_threads_2.sh

diff --git a/test/shell/stress_multi_threads_2.sh b/test/shell/stress_multi_threads_2.sh
new file mode 100644
index 000000000..a035b5727
--- /dev/null
+++ b/test/shell/stress_multi_threads_2.sh
@@ -0,0 +1,93 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2021 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+aux prepare_devs 8
+get_devs
+
+pvcreate -M2 "$dev1" "$dev2" "$dev3" "$dev4" "$dev5" "$dev6"
+
+test_vg_thread1()
+{
+	for i in {1..1000}
+	do
+		vgcreate --shared -M2 "$vg1" "$dev1" "$dev2" "$dev3"
+		vgremove -ff $vg1
+	done
+}
+
+test_vg_thread2()
+{
+	vgcreate --shared -M2 "$vg2" "$dev4" "$dev5" "$dev6"
+
+	for i in {1..1000}
+	do
+		# Create new logical volume and deactivate it
+		lvcreate -a n --zero n -l 1 -n foo $vg2
+
+		# Set minor number
+		lvchange $vg2/foo -My --major=255 --minor=124
+
+		# Activate logical volume
+		lvchange $vg2/foo -a y
+
+		# Extend logical volume by 10 extents
+		lvextend -l+10 $vg2/foo
+
+		# Deactivate logical volume
+		lvchange $vg2/foo -a n
+
+		# Deactivate volume group
+		vgchange $vg2 -a n
+
+		# Activate volume group with shareable mode
+		vgchange $vg2 -a sy
+
+		# lvextend fails due to mismatched lock mode
+		not lvextend -l+10 $vg2/foo
+
+		# Promote volume group to exclusive mode
+		vgchange $vg2 -a ey
+
+		lvreduce -f -l-4 $vg2/foo
+
+		lvchange -an $vg2/foo
+		lvremove $vg2/foo
+	done
+
+	vgremove -ff $vg2
+}
+
+test_vg_thread3()
+{
+	for i in {1..1000}
+	do
+		pvcreate -M2 "$dev7" "$dev8"
+		pvremove "$dev7"
+		pvremove "$dev8"
+	done
+}
+
+test_vg_thread1 &
+WAITPID=$!
+
+test_vg_thread2 &
+WAITPID="$WAITPID "$!
+
+test_vg_thread3 &
+WAITPID="$WAITPID "$!
+
+wait $WAITPID
-- 
2.25.1




* [PATCH v1 08/17] tests: Support idm failure injection
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (6 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 07/17] tests: stress: Add multi-threads stress testing for PV/VG/LV Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 09/17] tests: Add testing for lvmlockd failure Leo Yan
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

When a drive failure occurs, the IDM lock manager and lvmlockd should
handle this case properly.  E.g. when the IDM lock manager detects a
lease renewal failure caused by I/O errors, it should invoke the kill
path which is predefined by lvmlockd, so that the kill path program
(like lvmlockctl) can send requests to lvmlockd to stop and drop the
lock for the relevant VG/LVs.

To verify the failure handling flow, this patch introduces an idm
failure injection program; it takes the drive failure "percentage" as
input so that different failure cases can be emulated.
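
For example, patch 13 drives the utility with different percentages
(the single argument is the failure percentage to inject):

  idm_inject_failure 100   # all requests to the drives fail
  idm_inject_failure 40    # requests partially fail
  idm_inject_failure 0     # restore normal drive access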

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/Makefile.in              |  5 ++++
 test/lib/idm_inject_failure.c | 55 +++++++++++++++++++++++++++++++++++
 2 files changed, 60 insertions(+)
 create mode 100644 test/lib/idm_inject_failure.c

diff --git a/test/Makefile.in b/test/Makefile.in
index 662974be6..573df77a7 100644
--- a/test/Makefile.in
+++ b/test/Makefile.in
@@ -171,6 +171,7 @@ endif
 
 ifeq ("@BUILD_LVMLOCKD@", "yes")
 check_lvmlockd_idm: .tests-stamp
+	$(INSTALL_PROGRAM) lib/idm_inject_failure $(EXECDIR)
 	VERBOSE=$(VERBOSE) ./lib/runner \
 		--testdir . --outdir $(LVM_TEST_RESULTS) \
 		--flavours udev-lvmlockd-idm --only shell/aa-lvmlockd-idm-prepare.sh,$(T),shell/zz-lvmlockd-idm-remove.sh --skip $(S)
@@ -269,6 +270,10 @@ lib/securetest: lib/dmsecuretest.o .lib-dir-stamp
 	@echo "    [CC] $@"
 	$(Q) $(CC) -g $(CFLAGS) $(LDFLAGS) $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS) -o $@ $< -L$(interfacebuilddir) -ldevmapper $(LIBS)
 
+lib/idm_inject_failure: lib/idm_inject_failure.o .lib-dir-stamp
+	@echo "    [CC] $@"
+	$(Q) $(CC) -g $(CFLAGS) $(LDFLAGS) $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS) -o $@ $< $(INTERNAL_LIBS) $(LIBS) -lseagate_ilm
+
 lib/runner.o: $(wildcard $(srcdir)/lib/*.h)
 
 CFLAGS_runner.o += $(EXTRA_EXEC_CFLAGS)
diff --git a/test/lib/idm_inject_failure.c b/test/lib/idm_inject_failure.c
new file mode 100644
index 000000000..4998b585a
--- /dev/null
+++ b/test/lib/idm_inject_failure.c
@@ -0,0 +1,55 @@
+/*
+ * Copyright (C) 2020-2021 Seagate Ltd.
+ *
+ * This copyrighted material is made available to anyone wishing to use,
+ * modify, copy, or redistribute it subject to the terms and conditions
+ * of the GNU Lesser General Public License v.2.1.
+ */
+
+#include <errno.h>
+#include <limits.h>
+#include <signal.h>
+#include <stddef.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/inotify.h>
+#include <uuid/uuid.h>
+#include <unistd.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+
+#include <ilm.h>
+
+int main(int argc, char *argv[])
+{
+	int percent = atoi(argv[1]);
+	int ret, s;
+
+	ret = ilm_connect(&s);
+	if (ret == 0) {
+		printf("ilm_connect: SUCCESS\n");
+	} else {
+		printf("ilm_connect: FAIL\n");
+		exit(-1);
+	}
+
+	ret = ilm_inject_fault(s, percent);
+	if (ret == 0) {
+		printf("ilm_inject_fault (%d): SUCCESS\n", percent);
+	} else {
+		printf("ilm_inject_fault (%d): FAIL\n", percent);
+		exit(-1);
+	}
+
+	ret = ilm_disconnect(s);
+	if (ret == 0) {
+		printf("ilm_disconnect: SUCCESS\n");
+	} else {
+		printf("ilm_disconnect: FAIL\n");
+		exit(-1);
+	}
+
+	return 0;
+}
-- 
2.25.1




* [PATCH v1 09/17] tests: Add testing for lvmlockd failure
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (7 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 08/17] tests: Support idm failure injection Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 10/17] tests: idm: Add testing for the fabric failure Leo Yan
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

After lvmlockd abnormally exits and the daemon is relaunched, if LVM
commands continue to run, lvmlockd and the backend lock manager (e.g.
the sanlock lock manager or the IDM lock manager) should be able to
continue serving the requests from LVM commands.

This patch adds a test to emulate an lvmlockd failure and to verify
the LVM commands after lvmlockd recovers.  Below is an example of
running the test case:

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3,/dev/sdp4 \
	LVM_TEST_FAILURE=1 T=lvmlockd_failure.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/lib/inittest.sh           |  3 ++-
 test/shell/lvmlockd_failure.sh | 37 ++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+), 1 deletion(-)
 create mode 100644 test/shell/lvmlockd_failure.sh

diff --git a/test/lib/inittest.sh b/test/lib/inittest.sh
index 6b4bcb348..98a916ca6 100644
--- a/test/lib/inittest.sh
+++ b/test/lib/inittest.sh
@@ -31,6 +31,7 @@ LVM_TEST_BACKING_DEVICE=${LVM_TEST_BACKING_DEVICE-}
 LVM_TEST_DEVDIR=${LVM_TEST_DEVDIR-}
 LVM_TEST_NODEBUG=${LVM_TEST_NODEBUG-}
 LVM_TEST_LVM1=${LVM_TEST_LVM1-}
+LVM_TEST_FAILURE=${LVM_TEST_FAILURE-}
 # TODO: LVM_TEST_SHARED
 SHARED=${SHARED-}
 
@@ -63,7 +64,7 @@ test -n "$SKIP_WITH_LVMLOCKD" && test -n "$LVM_TEST_LVMLOCKD" && initskip
 
 unset CDPATH
 
-export LVM_TEST_BACKING_DEVICE LVM_TEST_DEVDIR LVM_TEST_NODEBUG
+export LVM_TEST_BACKING_DEVICE LVM_TEST_DEVDIR LVM_TEST_NODEBUG LVM_TEST_FAILURE
 export LVM_TEST_LVMLOCKD LVM_TEST_LVMLOCKD_TEST
 export LVM_TEST_LVMPOLLD LVM_TEST_LOCK_TYPE_DLM LVM_TEST_LOCK_TYPE_SANLOCK LVM_TEST_LOCK_TYPE_IDM
 export LVM_TEST_DEVICES_FILE
diff --git a/test/shell/lvmlockd_failure.sh b/test/shell/lvmlockd_failure.sh
new file mode 100644
index 000000000..e0fccfb83
--- /dev/null
+++ b/test/shell/lvmlockd_failure.sh
@@ -0,0 +1,37 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2020~2021 Seagate, Inc.  All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v.2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+[ -z "$LVM_TEST_FAILURE" ] && skip;
+
+aux prepare_vg 3
+
+# Create new logical volume
+lvcreate -a ey --zero n -l 1 -n $lv1 $vg
+
+# Emulate lvmlockd abnormally exiting
+killall -9 lvmlockd
+
+systemctl start lvm2-lvmlockd
+
+vgchange --lock-start $vg
+
+lvchange -a n $vg/$lv1
+lvchange -a sy $vg/$lv1
+
+lvcreate -a ey --zero n -l 1 -n $lv2 $vg
+lvchange -a n $vg/$lv2
+
+vgremove -ff $vg
-- 
2.25.1




* [PATCH v1 10/17] tests: idm: Add testing for the fabric failure
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (8 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 09/17] tests: Add testing for lvmlockd failure Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 11/17] tests: idm: Add testing for the fabric failure and timeout Leo Yan
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

When a fabric failure occurs, the connection with the hosts is lost
instantly; after a while the fabric can recover so that the hosts can
continue to access the drives.

For this case, the lock manager should be reliable: it should handle
the failure dynamically and allow the user to continue to use the
VG/LV with the associated locking scheme.

This patch adds a test to emulate the fabric failure and to verify the
LVM commands for this case.  The test usage is:

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3,/dev/sdp4 \
	LVM_TEST_FAILURE=1 T=idm_fabric_failure.sh
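
The fabric failure is emulated through sysfs, as the test below does:
the SCSI devices backing the PVs are deleted and the SCSI hosts are
rescanned later so the drives come back, roughly (for one drive):

  echo 1 > /sys/block/$DRIVE1/device/delete          # drive disappears
  echo "- - -" > /sys/class/scsi_host/${HOST1}/scan  # drive comes back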

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/shell/idm_fabric_failure.sh | 58 ++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)
 create mode 100644 test/shell/idm_fabric_failure.sh

diff --git a/test/shell/idm_fabric_failure.sh b/test/shell/idm_fabric_failure.sh
new file mode 100644
index 000000000..e68d6ad07
--- /dev/null
+++ b/test/shell/idm_fabric_failure.sh
@@ -0,0 +1,58 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2020 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+[ -z "$LVM_TEST_FAILURE" ] && skip;
+
+aux prepare_devs 3
+aux extend_filter_LVMTEST
+
+vgcreate $SHARED $vg "$dev1" "$dev2" "$dev3"
+
+# Create new logical volume
+lvcreate -a ey --zero n -l 50%FREE -n $lv1 $vg
+
+DRIVE1=`dmsetup deps -o devname $dev1 | awk '{gsub(/[()]/,""); print $4;}' | sed 's/[0-9]*$//'`
+DRIVE2=`dmsetup deps -o devname $dev2 | awk '{gsub(/[()]/,""); print $4;}' | sed 's/[0-9]*$//'`
+DRIVE3=`dmsetup deps -o devname $dev3 | awk '{gsub(/[()]/,""); print $4;}' | sed 's/[0-9]*$//'`
+
+HOST1=`readlink /sys/block/$DRIVE1 | awk -F'/' '{print $6}'`
+HOST2=`readlink /sys/block/$DRIVE2 | awk -F'/' '{print $6}'`
+HOST3=`readlink /sys/block/$DRIVE3 | awk -F'/' '{print $6}'`
+
+# Emulate fabric failure
+echo 1 > /sys/block/$DRIVE1/device/delete
+[ -f /sys/block/$DRIVE2/device/delete ] && echo 1 > /sys/block/$DRIVE2/device/delete
+[ -f /sys/block/$DRIVE3/device/delete ] && echo 1 > /sys/block/$DRIVE3/device/delete
+
+# Wait for 10s and will not lead to timeout
+sleep 10
+
+# Rescan drives so can probe the deleted drives and join back them
+echo "- - -" > /sys/class/scsi_host/${HOST1}/scan
+echo "- - -" > /sys/class/scsi_host/${HOST2}/scan
+echo "- - -" > /sys/class/scsi_host/${HOST3}/scan
+
+not check grep_lvmlockd_dump "S lvm_$vg kill_vg"
+
+# The previous device-mapper devices are removed, but LVM can still
+# directly access the VGs from the specified physical drives, so
+# enable these drives in the filter.
+aux extend_filter_LVMTEST "a|/dev/$DRIVE1*|" "a|/dev/$DRIVE2*|" "a|/dev/$DRIVE3*|"
+aux lvmconf "devices/allow_changes_with_duplicate_pvs = 1"
+
+lvcreate -a n --zero n -l 10 -n $lv2 $vg
+
+vgremove -ff $vg
-- 
2.25.1




* [PATCH v1 11/17] tests: idm: Add testing for the fabric failure and timeout
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (9 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 10/17] tests: idm: Add testing for the fabric failure Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 12/17] tests: idm: Add testing for the fabric's half brain failure Leo Yan
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

If the fabric is broken instantly, the drives connected on the fabric
will disappear from the system.  In the worst case, the lease times
out and the drives cannot recover.  So a new test is added to emulate
this scenario: it uses a drive for the LVM operations, and the same
drive is also used for the locking scheme; if the drive and all its
associated paths (if the drive supports multiple paths) are
disconnected, the lock manager should stop the lockspace for the
VG/LVs.

Afterwards, if the drive recovers, the VG/LV residing on the drive
should be usable again.  The test command is as below:

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdp3 LVM_TEST_FAILURE=1 \
	T=idm_fabric_failure_timeout.sh
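
All paths of the drive are located by matching the WWN reported by
udev, so every associated SCSI device can be deleted, as the test
below does:

  drive_wwn=`udevadm info /dev/${DRIVE1} | awk -F= '/E: ID_WWN=/ {print $2}'`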

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/shell/idm_fabric_failure_timeout.sh | 74 ++++++++++++++++++++++++
 1 file changed, 74 insertions(+)
 create mode 100644 test/shell/idm_fabric_failure_timeout.sh

diff --git a/test/shell/idm_fabric_failure_timeout.sh b/test/shell/idm_fabric_failure_timeout.sh
new file mode 100644
index 000000000..cf71f7609
--- /dev/null
+++ b/test/shell/idm_fabric_failure_timeout.sh
@@ -0,0 +1,74 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2020 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+[ -z "$LVM_TEST_LOCK_TYPE_IDM" ] && skip;
+[ -z "$LVM_TEST_FAILURE" ] && skip;
+
+aux prepare_devs 1
+aux extend_filter_LVMTEST
+
+DRIVE1=`dmsetup deps -o devname $dev1 | awk '{gsub(/[()]/,""); print $4;}' | sed 's/[0-9]*$//'`
+
+# The previous device-mapper devices are removed, but LVM can still
+# directly access the VGs from the specified physical drives, so
+# enable these drives in the filter.
+aux extend_filter_LVMTEST "a|/dev/$DRIVE1*|"
+aux lvmconf "devices/allow_changes_with_duplicate_pvs = 1"
+
+vgcreate $SHARED $vg "$dev1"
+
+# Create new logical volume
+lvcreate -a ey --zero n -l 1 -n $lv1 $vg
+
+drive_list=($DRIVE1)
+
+# Find all drives with the same WWN and delete them from the system,
+# so that we can emulate a drive whose multiple paths have all been
+# disconnected from the system.
+drive_wwn=`udevadm info /dev/${DRIVE1} | awk -F= '/E: ID_WWN=/ {print $2}'`
+for dev in /dev/*; do
+	if [ -b "$dev" ] && [[ ! "$dev" =~ [0-9] ]]; then
+		wwn=`udevadm info "${dev}" | awk -F= '/E: ID_WWN=/ {print $2}'`
+		if [ "$wwn" = "$drive_wwn" ]; then
+			base_name="$(basename -- ${dev})"
+			drive_list+=("$base_name")
+			host_list+=(`readlink /sys/block/$base_name | awk -F'/' '{print $6}'`)
+		fi
+	fi
+done
+
+for d in "${drive_list[@]}"; do
+	[ -f /sys/block/$d/device/delete ] && echo 1 > /sys/block/$d/device/delete
+done
+
+# Fail to create new logical volume
+not lvcreate -a n --zero n -l 1 -n $lv2 $vg
+
+# Wait for lock time out caused by drive failure
+sleep 70
+
+check grep_lvmlockd_dump "S lvm_$vg kill_vg"
+lvmlockctl --drop $vg
+
+# Rescan drives so can probe the deleted drives and join back them
+for h in "${host_list[@]}"; do
+	[ -f /sys/class/scsi_host/${h}/scan ] && echo "- - -" > /sys/class/scsi_host/${h}/scan
+done
+
+# After the drive is reconnected, $vg should be visible again.
+vgchange --lock-start
+vgremove -ff $vg
-- 
2.25.1




* [PATCH v1 12/17] tests: idm: Add testing for the fabric's half brain failure
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (10 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 11/17] tests: idm: Add testing for the fabric failure and timeout Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 13/17] tests: idm: Add testing for IDM lock manager failure Leo Yan
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

If the fabric is broken instantly, part of the drives connected on the
fabric disappear from the system.  In this case, according to the
locking algorithm in idm, the lease is not lost, since half of the
drives are still alive and the lease can be renewed on these drives.
On the other hand, the VG lock requires acquiring a majority of the
drives; with half of the drives failed the majority cannot be
achieved, so the VG lock cannot be acquired and thus the VG metadata
cannot be changed.

This patch adds a half-brain failure test for idm; the test command is
as below:

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdp3,/dev/sdo3 LVM_TEST_FAILURE=1 \
	T=idm_fabric_failure_half_brain.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/shell/idm_fabric_failure_half_brain.sh | 78 +++++++++++++++++++++
 1 file changed, 78 insertions(+)
 create mode 100644 test/shell/idm_fabric_failure_half_brain.sh

diff --git a/test/shell/idm_fabric_failure_half_brain.sh b/test/shell/idm_fabric_failure_half_brain.sh
new file mode 100644
index 000000000..c692a12ad
--- /dev/null
+++ b/test/shell/idm_fabric_failure_half_brain.sh
@@ -0,0 +1,78 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2020 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+[ -z "$LVM_TEST_LOCK_TYPE_IDM" ] && skip;
+[ -z "$LVM_TEST_FAILURE" ] && skip;
+
+aux prepare_devs 2
+aux extend_filter_LVMTEST
+
+DRIVE1=`dmsetup deps -o devname $dev1 | awk '{gsub(/[()]/,""); print $4;}' | sed 's/[0-9]*$//'`
+DRIVE2=`dmsetup deps -o devname $dev2 | awk '{gsub(/[()]/,""); print $4;}' | sed 's/[0-9]*$//'`
+
+[ "$(basename -- $DRIVE1)" = "$(basename -- $DRIVE2)" ] && die "Need to pass two different drives!?"
+
+# The previous device-mapper devices are removed, but LVM can still
+# directly access the VGs from the specified physical drives, so
+# enable these drives in the filter.
+aux extend_filter_LVMTEST "a|/dev/$DRIVE1*|" "a|/dev/$DRIVE2*|"
+aux lvmconf "devices/allow_changes_with_duplicate_pvs = 1"
+
+vgcreate $SHARED $vg "$dev1" "$dev2"
+
+# Create new logical volume
+lvcreate -a ey --zero n -l 100%FREE -n $lv1 $vg
+
+drive_list=($DRIVE1)
+
+# Find all drives with the same WWN and delete them from the system,
+# so that we can emulate a drive whose multiple paths have all been
+# disconnected from the system.
+drive_wwn=`udevadm info /dev/${DRIVE1} | awk -F= '/E: ID_WWN=/ {print $2}'`
+for dev in /dev/*; do
+	if [ -b "$dev" ] && [[ ! "$dev" =~ [0-9] ]]; then
+		wwn=`udevadm info "${dev}" | awk -F= '/E: ID_WWN=/ {print $2}'`
+		if [ "$wwn" = "$drive_wwn" ]; then
+			base_name="$(basename -- ${dev})"
+			drive_list+=("$base_name")
+			host_list+=(`readlink /sys/block/$base_name | awk -F'/' '{print $6}'`)
+		fi
+	fi
+done
+
+for d in "${drive_list[@]}"; do
+	[ -f /sys/block/$d/device/delete ] && echo 1 > /sys/block/$d/device/delete
+done
+
+# Fail to create new logical volume
+not lvcreate -a n --zero n -l 1 -n $lv2 $vg
+
+# Wait for lock time out caused by drive failure
+sleep 70
+
+not check grep_lvmlockd_dump "S lvm_$vg kill_vg"
+
+# Rescan drives so can probe the deleted drives and join back them
+for h in "${host_list[@]}"; do
+	[ -f /sys/class/scsi_host/${h}/scan ] && echo "- - -" > /sys/class/scsi_host/${h}/scan
+done
+
+# After the drive is reconnected, $vg should be visible again.
+vgchange --lock-start
+lvremove -f $vg/$lv1
+lvcreate -a ey --zero n -l 1 -n $lv2 $vg
+vgremove -ff $vg
-- 
2.25.1




* [PATCH v1 13/17] tests: idm: Add testing for IDM lock manager failure
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (11 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 12/17] tests: idm: Add testing for the fabric's half brain failure Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 14/17] tests: multi-hosts: Add VG testing Leo Yan
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

If the IDM lock manager fails to access drives, either partially (e.g.
it fails to access one of three drives) or totally, it should handle
these cases properly.  When the drives fail partially, if the lock
manager can still renew the lease for the locking, it does not need to
take any action for the drive failure; otherwise, if it detects that
it cannot renew the locking majority, it needs to immediately kill the
VG through lvmlockd.

This patch adds a test for verifying the handling of an IDM lock
manager failure; the command can be used as below:

  # make check_lvmlockd_idm \
    LVM_TEST_BACKING_DEVICE=/dev/sdp3,/dev/sdl3,/dev/sdq3 \
    LVM_TEST_FAILURE=1 T=idm_ilm_failure.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/shell/idm_ilm_failure.sh | 80 +++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)
 create mode 100644 test/shell/idm_ilm_failure.sh

diff --git a/test/shell/idm_ilm_failure.sh b/test/shell/idm_ilm_failure.sh
new file mode 100644
index 000000000..58bed270e
--- /dev/null
+++ b/test/shell/idm_ilm_failure.sh
@@ -0,0 +1,80 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2020 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v.2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+[ -z "$LVM_TEST_LOCK_TYPE_IDM" ] && skip;
+[ -z "$LVM_TEST_FAILURE" ] && skip;
+
+aux prepare_devs 3
+aux extend_filter_LVMTEST
+
+DRIVE1=`dmsetup deps -o devname $dev1 | awk '{gsub(/[()]/,""); print $4;}' | sed 's/[0-9]*$//'`
+DRIVE2=`dmsetup deps -o devname $dev2 | awk '{gsub(/[()]/,""); print $4;}' | sed 's/[0-9]*$//'`
+DRIVE3=`dmsetup deps -o devname $dev3 | awk '{gsub(/[()]/,""); print $4;}' | sed 's/[0-9]*$//'`
+
+if [ "$DRIVE1" = "$DRIVE2" ] || [ "$DRIVE1" = "$DRIVE3" ] || [ "$DRIVE2" = "$DRIVE3" ]; then
+	die "Need to pass three different drives!?"
+fi
+
+# The previous device-mapper devices are removed, but LVM can still
+# directly access the VGs from the specified physical drives, so
+# enable these drives in the filter.
+aux extend_filter_LVMTEST "a|/dev/$DRIVE1*|" "a|/dev/$DRIVE2*|" "a|/dev/$DRIVE3*|"
+aux lvmconf "devices/allow_changes_with_duplicate_pvs = 1"
+
+vgcreate $SHARED $vg "$dev1" "$dev2" "$dev3"
+
+# Create new logical volume and activate it
+lvcreate -a y --zero n -l 1 -n $lv1 $vg
+
+# Inject failure 40% so cannot send partially request to drives
+idm_inject_failure 40
+
+# Wait for 40s, but the lock will not be time out
+sleep 40
+
+# Inject failure with 0% so can access drives
+idm_inject_failure 0
+
+# Deactivate logical volume due to locking failure
+lvchange $vg/$lv1 -a n
+
+# Inject failure 100% so cannot send request to drives
+idm_inject_failure 100
+
+# Wait for 70s but should have no any alive locks
+sleep 70
+
+# Inject failure with 0% so can access drives
+idm_inject_failure 0
+
+# Activate logic volume
+lvchange $vg/$lv1 -a y
+
+# Inject failure so cannot send request to drives
+idm_inject_failure 100
+
+# Wait for 70s but will not time out
+sleep 70
+
+# Inject failure with 0% so can access drives
+idm_inject_failure 0
+
+check grep_lvmlockd_dump "S lvm_$vg kill_vg"
+lvmlockctl --drop $vg
+
+vgchange --lock-start
+vgremove -f $vg
-- 
2.25.1




* [PATCH v1 14/17] tests: multi-hosts: Add VG testing
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (12 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 13/17] tests: idm: Add testing for IDM lock manager failure Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 15/17] tests: multi-hosts: Add LV testing Leo Yan
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

This patch adds VG testing on multiple hosts.  There are two scripts:
multi_hosts_vg_hosta.sh creates VGs on one host, and afterwards
multi_hosts_vg_hostb.sh acquires the global lock and the VG locks and
removes the VGs.  The testing flow verifies the locking operations
between two hosts with lvmlockd and the backend lock manager.

  On the host A:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hosta.sh

  On the host B:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hostb.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/Makefile.in                   |  1 +
 test/lib/inittest.sh               |  2 ++
 test/shell/multi_hosts_vg_hosta.sh | 45 ++++++++++++++++++++++++++
 test/shell/multi_hosts_vg_hostb.sh | 52 ++++++++++++++++++++++++++++++
 4 files changed, 100 insertions(+)
 create mode 100644 test/shell/multi_hosts_vg_hosta.sh
 create mode 100644 test/shell/multi_hosts_vg_hostb.sh

diff --git a/test/Makefile.in b/test/Makefile.in
index 573df77a7..cd134129b 100644
--- a/test/Makefile.in
+++ b/test/Makefile.in
@@ -93,6 +93,7 @@ help:
 	@echo -e "\nSupported variables:"
 	@echo "  LVM_TEST_AUX_TRACE	Set for verbose messages for aux scripts []."
 	@echo "  LVM_TEST_BACKING_DEVICE Set device used for testing (see also LVM_TEST_DIR)."
+	@echo "  LVM_TEST_MULTI_HOST	Set to enable multi-host testing."
 	@echo "  LVM_TEST_CAN_CLOBBER_DMESG Allow to clobber dmesg buffer without /dev/kmsg. (1)"
 	@echo "  LVM_TEST_DEVDIR	Set to '/dev' to run on real /dev."
 	@echo "  LVM_TEST_PREFER_BRD	Prefer using brd (ramdisk) over loop for testing [1]."
diff --git a/test/lib/inittest.sh b/test/lib/inittest.sh
index 98a916ca6..4ca8ac59e 100644
--- a/test/lib/inittest.sh
+++ b/test/lib/inittest.sh
@@ -32,6 +32,7 @@ LVM_TEST_DEVDIR=${LVM_TEST_DEVDIR-}
 LVM_TEST_NODEBUG=${LVM_TEST_NODEBUG-}
 LVM_TEST_LVM1=${LVM_TEST_LVM1-}
 LVM_TEST_FAILURE=${LVM_TEST_FAILURE-}
+LVM_TEST_MULTI_HOST=${LVM_TEST_MULTI_HOST-}
 # TODO: LVM_TEST_SHARED
 SHARED=${SHARED-}
 
@@ -65,6 +66,7 @@ test -n "$SKIP_WITH_LVMLOCKD" && test -n "$LVM_TEST_LVMLOCKD" && initskip
 unset CDPATH
 
 export LVM_TEST_BACKING_DEVICE LVM_TEST_DEVDIR LVM_TEST_NODEBUG LVM_TEST_FAILURE
+export LVM_TEST_MULTI_HOST
 export LVM_TEST_LVMLOCKD LVM_TEST_LVMLOCKD_TEST
 export LVM_TEST_LVMPOLLD LVM_TEST_LOCK_TYPE_DLM LVM_TEST_LOCK_TYPE_SANLOCK LVM_TEST_LOCK_TYPE_IDM
 export LVM_TEST_DEVICES_FILE
diff --git a/test/shell/multi_hosts_vg_hosta.sh b/test/shell/multi_hosts_vg_hosta.sh
new file mode 100644
index 000000000..15347490c
--- /dev/null
+++ b/test/shell/multi_hosts_vg_hosta.sh
@@ -0,0 +1,45 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2020 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+# This testing script is for multi-hosts testing, the paired scripts
+# are: multi_hosts_vg_hosta.sh / multi_hosts_vg_hostb.sh
+#
+# On the host A:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hosta.sh
+# On the host B:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hostb.sh
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+[ -z "$LVM_TEST_MULTI_HOST" ] && skip;
+
+IFS=',' read -r -a BLKDEVS <<< "$LVM_TEST_BACKING_DEVICE"
+
+for d in "${BLKDEVS[@]}"; do
+	aux extend_filter_LVMTEST "a|$d|"
+done
+
+aux lvmconf "devices/allow_changes_with_duplicate_pvs = 1"
+
+i=0
+for d in "${BLKDEVS[@]}"; do
+	echo $i
+	i=$((i+1))
+	vgcreate $SHARED TESTVG$i $d
+	vgchange -a n TESTVG$i
+done
diff --git a/test/shell/multi_hosts_vg_hostb.sh b/test/shell/multi_hosts_vg_hostb.sh
new file mode 100644
index 000000000..bab65b68b
--- /dev/null
+++ b/test/shell/multi_hosts_vg_hostb.sh
@@ -0,0 +1,52 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2020 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+# This testing script is for multi-hosts testing, the paired scripts
+# are: multi_hosts_vg_hosta.sh / multi_hosts_vg_hostb.sh
+#
+# On the host A:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hosta.sh
+# On the host B:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hostb.sh
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+[ -z "$LVM_TEST_MULTI_HOST" ] && skip;
+
+IFS=',' read -r -a BLKDEVS <<< "$LVM_TEST_BACKING_DEVICE"
+
+for d in "${BLKDEVS[@]}"; do
+	aux extend_filter_LVMTEST "a|$d|"
+done
+
+aux lvmconf "devices/allow_changes_with_duplicate_pvs = 1"
+
+vgchange --lock-start
+
+i=0
+for d in "${BLKDEVS[@]}"; do
+	i=$((i+1))
+	check vg_field TESTVG$i lv_count 0
+done
+
+i=0
+for d in "${BLKDEVS[@]}"; do
+	i=$((i+1))
+	vgchange -a ey TESTVG$i
+	vgremove -ff TESTVG$i
+done
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v1 15/17] tests: multi-hosts: Add LV testing
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (13 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 14/17] tests: multi-hosts: Add VG testing Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 16/17] tests: multi-hosts: Test lease timeout with LV exclusive mode Leo Yan
  2021-06-03  9:59 ` [PATCH v1 17/17] tests: multi-hosts: Test lease timeout with LV shareable mode Leo Yan
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

This patch adds LV testing on multiple hosts.  There are two scripts:
multi_hosts_lv_hosta.sh creates LVs on one host, and afterwards
multi_hosts_lv_hostb.sh, run on another host, acquires the global lock
and the VG locks and removes the VGs.  The testing flow verifies the
locking operations between two hosts with lvmlockd and the backend
locking manager.

  On the host A:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_hosta.sh

  On the host B:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_hostb.sh
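
Condensed to a single VG and LV, the flow the two scripts exercise looks
roughly like this (a sketch only; the scripts below iterate over all
backing devices and create 20 LVs per VG, and $SHARED comes from the
test harness):

  # host A: create a shared VG with an LV, then cycle its activation
  vgcreate $SHARED TESTVG1 /dev/sdj3
  lvcreate -a n --zero n -l 1 -n foo1 TESTVG1
  lvchange -a ey TESTVG1/foo1
  lvchange -a n TESTVG1/foo1

  # host B: start the lock space and cycle shared/exclusive activation
  vgchange --lock-start
  lvchange -a sy TESTVG1/foo1
  lvchange -a ey TESTVG1/foo1
  lvchange -a n TESTVG1/foo1
  vgremove -f TESTVG1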

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/shell/multi_hosts_lv_hosta.sh | 78 ++++++++++++++++++++++++++++++
 test/shell/multi_hosts_lv_hostb.sh | 61 +++++++++++++++++++++++
 2 files changed, 139 insertions(+)
 create mode 100644 test/shell/multi_hosts_lv_hosta.sh
 create mode 100644 test/shell/multi_hosts_lv_hostb.sh

diff --git a/test/shell/multi_hosts_lv_hosta.sh b/test/shell/multi_hosts_lv_hosta.sh
new file mode 100644
index 000000000..68404d251
--- /dev/null
+++ b/test/shell/multi_hosts_lv_hosta.sh
@@ -0,0 +1,78 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2020 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+# This testing script is for multi-hosts testing, the paired scripts
+# are: multi_hosts_lv_hosta.sh / multi_hosts_lv_hostb.sh
+#
+# On the host A:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_hosta.sh
+# On the host B:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_hostb.sh
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+[ -z "$LVM_TEST_MULTI_HOST" ] && skip;
+
+IFS=',' read -r -a BLKDEVS <<< "$LVM_TEST_BACKING_DEVICE"
+
+for d in "${BLKDEVS[@]}"; do
+	aux extend_filter_LVMTEST "a|$d|"
+done
+
+aux lvmconf "devices/allow_changes_with_duplicate_pvs = 1"
+
+BLKDEVS_NUM=${#BLKDEVS[@]}
+
+for d in "${BLKDEVS[@]}"; do
+	dd if=/dev/zero of="$d" bs=32k count=1
+	wipefs -a "$d" 2>/dev/null || true
+
+	sg_dev=`sg_map26 ${d}`
+	if [ -n "$LVM_TEST_LOCK_TYPE_IDM" ]; then
+		echo "Cleanup IDM context for drive ${d} ($sg_dev)"
+		sg_raw -v -r 512 -o /tmp/idm_tmp_data.bin $sg_dev \
+			88 00 01 00 00 00 00 20 FF 01 00 00 00 01 00 00
+		sg_raw -v -s 512 -i /tmp/idm_tmp_data.bin $sg_dev \
+			8E 00 FF 00 00 00 00 00 00 00 00 00 00 01 00 00
+		rm /tmp/idm_tmp_data.bin
+	fi
+done
+
+#aux prepare_pvs $BLKDEVS_NUM 6400
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	echo $i
+	d="dev$i"
+	vgcreate $SHARED TESTVG$i ${BLKDEVS[$(( i - 1 ))]}
+
+	for j in {1..20}; do
+		lvcreate -a n --zero n -l 1 -n foo$j TESTVG$i
+	done
+done
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	for j in {1..20}; do
+		lvchange -a ey TESTVG$i/foo$j
+	done
+done
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	for j in {1..20}; do
+		lvchange -a n TESTVG$i/foo$j
+	done
+done
diff --git a/test/shell/multi_hosts_lv_hostb.sh b/test/shell/multi_hosts_lv_hostb.sh
new file mode 100644
index 000000000..13efd1a6b
--- /dev/null
+++ b/test/shell/multi_hosts_lv_hostb.sh
@@ -0,0 +1,61 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2020 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+# This testing script is for multi-hosts testing, the paired scripts
+# are: multi_hosts_lv_hosta.sh / multi_hosts_lv_hostb.sh
+#
+# On the host A:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_hosta.sh
+# On the host B:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_hostb.sh
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+[ -z "$LVM_TEST_MULTI_HOST" ] && skip;
+
+IFS=',' read -r -a BLKDEVS <<< "$LVM_TEST_BACKING_DEVICE"
+
+for d in "${BLKDEVS[@]}"; do
+	aux extend_filter_LVMTEST "a|$d|"
+done
+
+aux lvmconf "devices/allow_changes_with_duplicate_pvs = 1"
+
+vgchange --lock-start
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	for j in {1..20}; do
+		lvchange -a sy TESTVG$i/foo$j
+	done
+done
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	for j in {1..20}; do
+		lvchange -a ey TESTVG$i/foo$j
+	done
+done
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	for j in {1..20}; do
+		lvchange -a n TESTVG$i/foo$j
+	done
+done
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	vgremove -f TESTVG$i
+done
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v1 16/17] tests: multi-hosts: Test lease timeout with LV exclusive mode
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (14 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 15/17] tests: multi-hosts: Add LV testing Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  2021-06-03  9:59 ` [PATCH v1 17/17] tests: multi-hosts: Test lease timeout with LV shareable mode Leo Yan
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

This patch tests the timeout handling after an LV has been activated in
exclusive mode.  It contains two scripts, one for host A and one for
host B.

The script on host A first creates VGs and LVs based on the passed
backing devices: every backing device backs a dedicated VG, and an LV is
created in each VG.  Afterwards, host A activates all the LVs and thus
acquires the lease for them.  The test then deliberately makes host A
fail.

After host A has failed, host B runs the paired testing script.  It
first fails to activate the LVs, since the locks are still leased to
host A; after the lease has expired (70s), host B can acquire the lease
for the LVs and operate on the LVs and VGs.

  On the host A:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_ex_timeout_hosta.sh

  On the host B:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_ex_timeout_hostb.sh
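
The key assertion made on host B is sketched below (illustrative only;
"not" is the test suite's negation helper, and the full script loops
over every VG created by host A and removes the VGs at the end):

  # while host A still holds the exclusive lease, activation must fail
  not lvchange -a ey TESTVG1/foo
  # after the lease has expired (70s), the same activation succeeds
  sleep 70
  lvchange -a ey TESTVG1/foo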

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/shell/multi_hosts_lv_ex_timeout_hosta.sh | 87 +++++++++++++++++++
 test/shell/multi_hosts_lv_ex_timeout_hostb.sh | 56 ++++++++++++
 2 files changed, 143 insertions(+)
 create mode 100644 test/shell/multi_hosts_lv_ex_timeout_hosta.sh
 create mode 100644 test/shell/multi_hosts_lv_ex_timeout_hostb.sh

diff --git a/test/shell/multi_hosts_lv_ex_timeout_hosta.sh b/test/shell/multi_hosts_lv_ex_timeout_hosta.sh
new file mode 100644
index 000000000..c8be91ee3
--- /dev/null
+++ b/test/shell/multi_hosts_lv_ex_timeout_hosta.sh
@@ -0,0 +1,87 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2021 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+# This testing script is for multi-hosts testing.
+#
+# On the host A:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_ex_timeout_hosta.sh
+# On the host B:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_ex_timeout_hostb.sh
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+[ -z "$LVM_TEST_MULTI_HOST" ] && skip;
+
+IFS=',' read -r -a BLKDEVS <<< "$LVM_TEST_BACKING_DEVICE"
+
+for d in "${BLKDEVS[@]}"; do
+	aux extend_filter_LVMTEST "a|$d|"
+done
+
+aux lvmconf "devices/allow_changes_with_duplicate_pvs = 1"
+
+for d in "${BLKDEVS[@]}"; do
+	dd if=/dev/zero of="$d" bs=32k count=1
+	wipefs -a "$d" 2>/dev/null || true
+
+	sg_dev=`sg_map26 ${d}`
+	if [ -n "$LVM_TEST_LOCK_TYPE_IDM" ]; then
+		echo "Cleanup IDM context for drive ${d} ($sg_dev)"
+		sg_raw -v -r 512 -o /tmp/idm_tmp_data.bin $sg_dev \
+			88 00 01 00 00 00 00 20 FF 01 00 00 00 01 00 00
+		sg_raw -v -s 512 -i /tmp/idm_tmp_data.bin $sg_dev \
+			8E 00 FF 00 00 00 00 00 00 00 00 00 00 01 00 00
+		rm /tmp/idm_tmp_data.bin
+	fi
+done
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	vgcreate $SHARED TESTVG$i ${BLKDEVS[$(( i - 1 ))]}
+	lvcreate -a n --zero n -l 1 -n foo TESTVG$i
+	lvchange -a ey TESTVG$i/foo
+done
+
+for d in "${BLKDEVS[@]}"; do
+	drive_wwn=`udevadm info $d | awk -F= '/E: ID_WWN=/ {print $2}'`
+	for dev in /dev/*; do
+		if [ -b "$dev" ] && [[ ! "$dev" =~ [0-9] ]]; then
+			wwn=`udevadm info "${dev}" | awk -F= '/E: ID_WWN=/ {print $2}'`
+			if [ "$wwn" = "$drive_wwn" ]; then
+				base_name="$(basename -- ${dev})"
+				drive_list+=("$base_name")
+				host_list+=(`readlink /sys/block/$base_name | awk -F'/' '{print $6}'`)
+			fi
+		fi
+	done
+done
+
+for d in "${drive_list[@]}"; do
+	[ -f /sys/block/$d/device/delete ] && echo 1 > /sys/block/$d/device/delete
+done
+
+sleep 100
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	check grep_lvmlockd_dump "S lvm_TESTVG$i kill_vg"
+	lvmlockctl --drop TESTVG$i
+done
+
+# Rescan the SCSI hosts so the deleted drives are probed and re-added
+for h in "${host_list[@]}"; do
+	[ -f /sys/class/scsi_host/${h}/scan ] && echo "- - -" > /sys/class/scsi_host/${h}/scan
+done
diff --git a/test/shell/multi_hosts_lv_ex_timeout_hostb.sh b/test/shell/multi_hosts_lv_ex_timeout_hostb.sh
new file mode 100644
index 000000000..f0273fa44
--- /dev/null
+++ b/test/shell/multi_hosts_lv_ex_timeout_hostb.sh
@@ -0,0 +1,56 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2021 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+# This testing script is for multi-hosts testing.
+#
+# On the host A:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_ex_timeout_hosta.sh
+# On the host B:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_ex_timeout_hostb.sh
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+[ -z "$LVM_TEST_MULTI_HOST" ] && skip;
+
+IFS=',' read -r -a BLKDEVS <<< "$LVM_TEST_BACKING_DEVICE"
+
+for d in "${BLKDEVS[@]}"; do
+	aux extend_filter_LVMTEST "a|$d|"
+done
+
+aux lvmconf "devices/allow_changes_with_duplicate_pvs = 1"
+
+vgchange --lock-start
+
+vgdisplay
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	not lvchange -a ey TESTVG$i/foo
+done
+
+# Sleep for 70 seconds so that the previous lease has expired
+sleep 70
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	lvchange -a ey TESTVG$i/foo
+	lvchange -a n TESTVG$i/foo
+done
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	vgremove -f TESTVG$i
+done
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v1 17/17] tests: multi-hosts: Test lease timeout with LV shareable mode
  2021-06-03  9:59 [PATCH v1 00/17] LVM2: Enable testing for In-Drive-Mutex Leo Yan
                   ` (15 preceding siblings ...)
  2021-06-03  9:59 ` [PATCH v1 16/17] tests: multi-hosts: Test lease timeout with LV exclusive mode Leo Yan
@ 2021-06-03  9:59 ` Leo Yan
  16 siblings, 0 replies; 18+ messages in thread
From: Leo Yan @ 2021-06-03  9:59 UTC (permalink / raw)
  To: lvm-devel

This patch tests the timeout handling after an LV has been activated in
shareable mode.  It follows the same logic as the LV exclusive mode
test, except that the locking is verified in shareable mode.

  On the host A:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_sh_timeout_hosta.sh

  On the host B:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_sh_timeout_hostb.sh
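
Compared with the exclusive mode test, the host B side differs in one
point, sketched below (illustrative only; the full script loops over
every VG): a shared activation is expected to succeed right away, while
an exclusive activation is only attempted after host A's lease has
expired.

  # a shared lock can coexist with the shared lease still held by host A
  lvchange -a sy TESTVG1/foo
  # exclusive activation is attempted after the old lease has timed out
  sleep 70
  lvchange -a ey TESTVG1/foo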

Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 test/shell/multi_hosts_lv_sh_timeout_hosta.sh | 87 +++++++++++++++++++
 test/shell/multi_hosts_lv_sh_timeout_hostb.sh | 56 ++++++++++++
 2 files changed, 143 insertions(+)
 create mode 100644 test/shell/multi_hosts_lv_sh_timeout_hosta.sh
 create mode 100644 test/shell/multi_hosts_lv_sh_timeout_hostb.sh

diff --git a/test/shell/multi_hosts_lv_sh_timeout_hosta.sh b/test/shell/multi_hosts_lv_sh_timeout_hosta.sh
new file mode 100644
index 000000000..6b24f9290
--- /dev/null
+++ b/test/shell/multi_hosts_lv_sh_timeout_hosta.sh
@@ -0,0 +1,87 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2021 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+# This testing script is for multi-hosts testing.
+#
+# On the host A:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_sh_timeout_hosta.sh
+# On the host B:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_sh_timeout_hostb.sh
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+[ -z "$LVM_TEST_MULTI_HOST" ] && skip;
+
+IFS=',' read -r -a BLKDEVS <<< "$LVM_TEST_BACKING_DEVICE"
+
+for d in "${BLKDEVS[@]}"; do
+	aux extend_filter_LVMTEST "a|$d|"
+done
+
+aux lvmconf "devices/allow_changes_with_duplicate_pvs = 1"
+
+for d in "${BLKDEVS[@]}"; do
+	dd if=/dev/zero of="$d" bs=32k count=1
+	wipefs -a "$d" 2>/dev/null || true
+
+	sg_dev=`sg_map26 ${d}`
+	if [ -n "$LVM_TEST_LOCK_TYPE_IDM" ]; then
+		echo "Cleanup IDM context for drive ${d} ($sg_dev)"
+		sg_raw -v -r 512 -o /tmp/idm_tmp_data.bin $sg_dev \
+			88 00 01 00 00 00 00 20 FF 01 00 00 00 01 00 00
+		sg_raw -v -s 512 -i /tmp/idm_tmp_data.bin $sg_dev \
+			8E 00 FF 00 00 00 00 00 00 00 00 00 00 01 00 00
+		rm /tmp/idm_tmp_data.bin
+	fi
+done
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	vgcreate $SHARED TESTVG$i ${BLKDEVS[$(( i - 1 ))]}
+	lvcreate -a n --zero n -l 1 -n foo TESTVG$i
+	lvchange -a sy TESTVG$i/foo
+done
+
+for d in "${BLKDEVS[@]}"; do
+	drive_wwn=`udevadm info $d | awk -F= '/E: ID_WWN=/ {print $2}'`
+	for dev in /dev/*; do
+		if [ -b "$dev" ] && [[ ! "$dev" =~ [0-9] ]]; then
+			wwn=`udevadm info "${dev}" | awk -F= '/E: ID_WWN=/ {print $2}'`
+			if [ "$wwn" = "$drive_wwn" ]; then
+				base_name="$(basename -- ${dev})"
+				drive_list+=("$base_name")
+				host_list+=(`readlink /sys/block/$base_name | awk -F'/' '{print $6}'`)
+			fi
+		fi
+	done
+done
+
+for d in "${drive_list[@]}"; do
+	[ -f /sys/block/$d/device/delete ] && echo 1 > /sys/block/$d/device/delete
+done
+
+sleep 100
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	check grep_lvmlockd_dump "S lvm_TESTVG$i kill_vg"
+	lvmlockctl --drop TESTVG$i
+done
+
+# Rescan the SCSI hosts so the deleted drives are probed and re-added
+for h in "${host_list[@]}"; do
+	[ -f /sys/class/scsi_host/${h}/scan ] && echo "- - -" > /sys/class/scsi_host/${h}/scan
+done
diff --git a/test/shell/multi_hosts_lv_sh_timeout_hostb.sh b/test/shell/multi_hosts_lv_sh_timeout_hostb.sh
new file mode 100644
index 000000000..7aea2235d
--- /dev/null
+++ b/test/shell/multi_hosts_lv_sh_timeout_hostb.sh
@@ -0,0 +1,56 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2021 Seagate, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+# This testing script is for multi-hosts testing.
+#
+# On the host A:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_sh_timeout_hosta.sh
+# On the host B:
+#   make check_lvmlockd_idm \
+#     LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
+#     LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_sh_timeout_hostb.sh
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+[ -z "$LVM_TEST_MULTI_HOST" ] && skip;
+
+IFS=',' read -r -a BLKDEVS <<< "$LVM_TEST_BACKING_DEVICE"
+
+for d in "${BLKDEVS[@]}"; do
+	aux extend_filter_LVMTEST "a|$d|"
+done
+
+aux lvmconf "devices/allow_changes_with_duplicate_pvs = 1"
+
+vgchange --lock-start
+
+vgdisplay
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	lvchange -a sy TESTVG$i/foo
+done
+
+# Sleep for 70 seconds so that the previous lease has expired
+sleep 70
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	lvchange -a ey TESTVG$i/foo
+	lvchange -a n TESTVG$i/foo
+done
+
+for i in $(seq 1 ${#BLKDEVS[@]}); do
+	vgremove -f TESTVG$i
+done
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 18+ messages in thread
