* [PATCH LVM2 v1 1/2] blkdeactive: Introduce option "forcevg" to forcibly deactivate VG
@ 2021-02-25 11:04 Leo Yan
  2021-02-25 11:04 ` [PATCH LVM2 v1 2/2] lvmlockctl: Automatically handle failure Leo Yan
  2021-02-25 12:01 ` [PATCH LVM2 v1 1/2] blkdeactive: Introduce option "forcevg" to forcibly deactivate VG Zdenek Kabelac
  0 siblings, 2 replies; 9+ messages in thread
From: Leo Yan @ 2021-02-25 11:04 UTC (permalink / raw)
  To: lvm-devel

From: Zhang Huan <zhanghuan@huayun.com>

This patch introduces a new option "forcevg" for LVM.  Its main purpose
is to flush in-flight I/O operations and replace the LVs' device-mapper
tables with the 'error' target; this is accomplished with the command
"dmsetup wipe_table".

To handle the failure as quickly as possible, it neither deactivates
holders nor tries the "lvchange" command, which speeds up wiping the
tables.

This option is intended to be used by the "lvmlockctl" command to
forcibly deactivate a volume group; this avoids a watchdog reset when
drive failures are detected.
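
For illustration, roughly what "dmsetup wipe_table" does to a single LV
(the device name "testvg-lv0" and the sample table line below are
stand-ins; the patch applies this to every top-level LV in the VG):

  dmsetup table testvg-lv0        # e.g. "0 2097152 linear 8:16 2048"
  dmsetup wipe_table testvg-lv0   # flush in-flight I/O, load an 'error' table, resume
  dmsetup table testvg-lv0        # now reads "0 2097152 error"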

Signed-off-by: Zhang Huan <zhanghuan@huayun.com>
[Refactored changes and commit log]
Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 man/blkdeactivate.8_main    | 17 +++++++++++
 scripts/blkdeactivate.sh.in | 71 +++++++++++++++++++++++++++++++++++++++------
 2 files changed, 79 insertions(+), 9 deletions(-)

diff --git a/man/blkdeactivate.8_main b/man/blkdeactivate.8_main
index f3c19a8..33f410c 100644
--- a/man/blkdeactivate.8_main
+++ b/man/blkdeactivate.8_main
@@ -51,6 +51,17 @@ Retry removal several times in case of failure.
 Deactivate the whole LVM Volume Group when processing a Logical Volume.
 Deactivating the Volume Group as a whole is quicker than deactivating
 each Logical Volume separately.
+.IP \fIforcevg\fP
+Forcibly deactivate the whole LVM Volume Group as soon as possible.
+The primary job of this option is to flush in-flight I/O operations
+and then replace each LV's device-mapper table with the 'error' target.
+This is accomplished with the command "dmsetup wipe_table". After that,
+no application on the host can access the devices anymore, so even if
+the LV's lock expires, there is no risk of data corruption and a
+watchdog reset is avoided.
+This option is typically used in sanlock's fence procedure to avoid a
+host reset. When this option is used, any other options
+(e.g. "-d retry|force", "-u") are ignored.
 .RE
 .TP
 .BR -m ", " --mpathoptions \ \fImpath_options\fP
@@ -108,6 +119,12 @@ Volume Group at once when processing an LVM Logical Volume.
 .B blkdeactivate -u -d retry -l wholevg
 .BR
 .P
+Forcibly deactivate the whole Volume Group.
+.BR
+#
+.B blkdeactivate -l forcevg testvg
+.BR
+.P
 Deactivate all supported block devices found in the system. If the deactivation
 of a device-mapper device fails, retry it and force removal.
 .BR
diff --git a/scripts/blkdeactivate.sh.in b/scripts/blkdeactivate.sh.in
index a4b8a8f..833e3f3 100644
--- a/scripts/blkdeactivate.sh.in
+++ b/scripts/blkdeactivate.sh.in
@@ -70,6 +70,8 @@ DO_UMOUNT=0
 
 # Deactivate each LV separately by default (not the whole VG).
 LVM_DO_WHOLE_VG=0
+# Forcibly deactivate the whole VG by wiping its DM tables.
+LVM_DO_FORCE_VG=0
 # Do not retry LV deactivation by default.
 LVM_CONFIG="activation{retry_deactivation=0}"
 
@@ -134,6 +136,7 @@ usage() {
 	echo "    LVM_OPTIONS:"
 	echo "      retry           retry removal several times in case of failure"
 	echo "      wholevg         deactivate the whole VG when processing an LV"
+	echo "      forcevg         force deactivate (wipe_table) the whole VG"
 	echo "    MDRAID_OPTIONS:"
 	echo "      wait            wait for resync, recovery or reshape to complete first"
 	echo "    MPATH_OPTIONS:"
@@ -282,6 +285,50 @@ deactivate_lvm () {
 	fi
 }
 
+is_top_level_lv() {
+	is_top_level_device && return 0
+	skip=1
+	while $LSBLK_READ; do
+		# The first line is the device itself; skip it
+		test "$skip" -eq 1 && skip=0 && continue
+
+		# Holder is not an LVM device or not in this VG: top-level LV
+		test "$devtype" != "lvm" && return 0
+		test "${name:0:${#DM_VG_NAME}+1}" != "$DM_VG_NAME-" && return 0
+		test "${name:0:${#DM_VG_NAME}+2}" = "$DM_VG_NAME--" && return 0
+		# Holder is in the same VG, so this is not a top-level LV
+		test "${name:0:${#DM_VG_NAME}+1}" = "$DM_VG_NAME-" && return 1
+	done <<< "$($LSBLK $DEV_DIR/$kname)"
+}
+
+deactivate_vg () {
+	local VG_NAME; local LV_NAME;
+	local DM_VG_NAME; local DM_LV_NAME;
+	local LVS;
+	local skip_disablequeue=0
+
+	VG_NAME=$name
+	DM_VG_NAME=${name//-/--}
+	test -z "${SKIP_VG_LIST["$DM_VG_NAME"]}" || return 1
+	test "$LVM_AVAILABLE" -eq 0 && {
+		add_device_to_skip_list
+		return 1
+	}
+
+	# The lock manager (e.g. sanlock) only gives a short time to handle the
+	# failure before resetting the host, so replace the DM tables with the
+	# 'error' target.  The reason for not using the "lvchange" command first
+	# is that it may hang for a long time on the failed device.
+	echo -n "  [LVM]: force deactivating Logical Volumes for $VG_NAME... "
+	"$DMSETUP" info -c -S "uuid=~LVM && vgname=$VG_NAME && lv_layer=\"\"" \
+		-o name --noheadings | xargs "$DMSETUP" wipe_table
+	if [ "$?" = "0" ]; then
+		echo "wipe table done"
+	else
+		echo "wipe table failed" && return 1
+	fi
+}
+
 deactivate_md () {
 	local xname
 	xname=$(printf "%s" "$name")
@@ -333,7 +380,9 @@ deactivate () {
 	# deactivate_holders first to recursively deactivate any existing    #
 	# holders it might have before deactivating the device it processes. #
 	######################################################################
-	if test "$devtype" = "lvm"; then
+	if test "$devtype" = "vg"; then
+		deactivate_vg
+	elif test "$devtype" = "lvm"; then
 		deactivate_lvm
 	elif test "${kname:0:3}" = "dm-"; then
 		deactivate_dm
@@ -395,14 +444,17 @@ deactivate_all() {
 		##################################
 
 		while test $# -ne 0; do
-			# Unmount all relevant mountpoints first
-			while $LSBLK_READ; do
-				device_umount
-			done <<< "$($LSBLK "$1" | $SORT_MNT)"
-
-			# Do deactivate
-			# Single dm device tree deactivation.
-			if test -b "$1"; then
+			# Force deactivate the whole vg
+			if test $LVM_DO_FORCE_VG -ne 0; then
+				$LSBLK_READ <<< "vg $1 $1"
+				deactivate || return 1
+			elif test -b "$1"; then
+				# Single dm device tree deactivation.
+				# Unmount all relevant mountpoints first
+				while $LSBLK_READ; do
+					device_umount
+				done <<< "$($LSBLK "$1" | $SORT_MNT)"
+
 				$LSBLK_READ <<< "$($LSBLK --nodeps "$1")"
 
 				# check if the device is not on the skip list already
@@ -444,6 +496,7 @@ get_lvmopts() {
 			"") ;;
 			"retry") LVM_CONFIG="activation{retry_deactivation=1}" ;;
 			"wholevg") LVM_DO_WHOLE_VG=1 ;;
+			"forcevg") LVM_DO_FORCE_VG=1 ;;
 			*) echo "$opt: unknown LVM option"
 		esac
 	done
-- 
1.8.3.1




* [PATCH LVM2 v1 2/2] lvmlockctl: Automatically handle failure
  2021-02-25 11:04 [PATCH LVM2 v1 1/2] blkdeactive: Introduce option "forcevg" to forcibly deactivate VG Leo Yan
@ 2021-02-25 11:04 ` Leo Yan
  2021-03-02  0:40   ` David Teigland
  2021-02-25 12:01 ` [PATCH LVM2 v1 1/2] blkdeactive: Introduce option "forcevg" to forcibly deactivate VG Zdenek Kabelac
  1 sibling, 1 reply; 9+ messages in thread
From: Leo Yan @ 2021-02-25 11:04 UTC (permalink / raw)
  To: lvm-devel

From: Zhang Huan <zhanghuan@huayun.com>

When the lock manager detects a drive failure, it invokes the
"lvmlockctl" command to handle the failure; in this case, lvmlockctl
automatically calls "blkdeactivate -l forcevg" to deactivate the VG
and then calls drop_vg() to clean up the lockspace.
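
In effect, the kill path then behaves roughly like this hand-run
sequence (the VG name "testvg" is a placeholder):

  blkdeactivate -l forcevg testvg   # wipe the VG's DM tables with the 'error' target
  lvmlockctl --drop testvg          # clear the stale lockspace from lvmlockd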

Signed-off-by: Zhang Huan <zhanghuan@huayun.com>
[Refactored the changes and commit log]
Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 configure                     |  3 +++
 configure.ac                  |  2 ++
 daemons/lvmlockd/lvmlockctl.c | 40 +++++++++++++++++++++++++++-------------
 include/configure.h.in        |  3 +++
 man/lvmlockctl.8_main         | 11 +++++------
 5 files changed, 40 insertions(+), 19 deletions(-)

diff --git a/configure b/configure
index 7d4f337..a6cd432 100755
--- a/configure
+++ b/configure
@@ -684,6 +684,7 @@ MANGLING
 LVM_RELEASE_DATE
 LVM_RELEASE
 LVM_PATH
+LVM_DIR
 LVM_PATCHLEVEL
 LVM_MINOR
 LVM_MAJOR
@@ -15298,9 +15299,11 @@ SYSCONFDIR="$(eval echo $(eval echo $sysconfdir))"
 
 SBINDIR="$(eval echo $(eval echo $sbindir))"
 LVM_PATH="$SBINDIR/lvm"
+LVM_DIR="$SBINDIR/"
 
 cat >>confdefs.h <<_ACEOF
 #define LVM_PATH "$LVM_PATH"
+#define LVM_DIR "$LVM_DIR"
 _ACEOF
 
 
diff --git a/configure.ac b/configure.ac
index 99b7c88..11ffdb5 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1820,7 +1820,9 @@ SYSCONFDIR="$(eval echo $(eval echo $sysconfdir))"
 
 SBINDIR="$(eval echo $(eval echo $sbindir))"
 LVM_PATH="$SBINDIR/lvm"
+LVM_DIR="$SBINDIR/"
 AC_DEFINE_UNQUOTED(LVM_PATH, ["$LVM_PATH"], [Path to lvm binary.])
+AC_DEFINE_UNQUOTED(LVM_DIR, ["$LVM_DIR"], [Path to lvm binary dir.])
 
 USRSBINDIR="$(eval echo $(eval echo $usrsbindir))"
 CLVMD_PATH="$USRSBINDIR/clvmd"
diff --git a/daemons/lvmlockd/lvmlockctl.c b/daemons/lvmlockd/lvmlockctl.c
index 436221d..35b409d 100644
--- a/daemons/lvmlockd/lvmlockctl.c
+++ b/daemons/lvmlockd/lvmlockctl.c
@@ -17,6 +17,7 @@
 #include <signal.h>
 #include <errno.h>
 #include <fcntl.h>
+#include <stdlib.h>
 #include <syslog.h>
 #include <sys/socket.h>
 #include <sys/un.h>
@@ -32,6 +33,7 @@ static int gl_enable = 0;
 static int gl_disable = 0;
 static int stop_lockspaces = 0;
 static char *arg_vg_name = NULL;
+static int do_drop(void);
 
 #define DUMP_SOCKET_NAME "lvmlockd-dump.sock"
 #define DUMP_BUF_SIZE (1024 * 1024)
@@ -53,6 +55,11 @@ do { \
 #define MAX_NAME 64
 #define MAX_ARGS 64
 
+#define BLKDEACTIVATE_CMD "blkdeactivate -l forcevg "
+/* The max string length for blkdeactivate command */
+#define MAX_BLKDEACTIVATE_CMD	(sizeof(LVM_DIR) + sizeof(BLKDEACTIVATE_CMD) + \
+				 MAX_NAME + 1)
+
 /*
  * lvmlockd dumps the client info before the lockspaces,
  * so we can look up client info when printing lockspace info.
@@ -506,11 +513,9 @@ static int do_kill(void)
 	daemon_reply reply;
 	int result;
 	int rv;
+	char deactivate_cmd[MAX_BLKDEACTIVATE_CMD+1] = { 0 };
 
 	syslog(LOG_EMERG, "Lost access to sanlock lease storage in VG %s.", arg_vg_name);
-	/* These two lines explain the manual alternative to the FIXME below. */
-	syslog(LOG_EMERG, "Immediately deactivate LVs in VG %s.", arg_vg_name);
-	syslog(LOG_EMERG, "Once VG is unused, run lvmlockctl --drop %s.", arg_vg_name);
 
 	/*
 	 * It may not be strictly necessary to notify lvmlockd of the kill, but
@@ -534,16 +539,25 @@ static int do_kill(void)
 
 	daemon_reply_destroy(reply);
 
-	/*
-	 * FIXME: here is where we should implement a strong form of
-	 * blkdeactivate, and if it completes successfully, automatically call
-	 * do_drop() afterward.  (The drop step may not always be necessary
-	 * if the lvm commands run while shutting things down release all the
-	 * leases.)
-	 *
-	 * run_strong_blkdeactivate();
-	 * do_drop();
-	 */
+	snprintf(deactivate_cmd, MAX_BLKDEACTIVATE_CMD, "%s%s%s",
+		 LVM_DIR, BLKDEACTIVATE_CMD, arg_vg_name);
+
+	syslog(LOG_EMERG, "Immediately deactivate LVs in VG %s.", arg_vg_name);
+	rv = system(deactivate_cmd);
+	if (rv) {
+		syslog(LOG_EMERG, "Deactivating LVs in VG %s failed.",
+		       arg_vg_name);
+		return rv;
+	} else {
+		syslog(LOG_EMERG, "Deactivated LVs in VG %s successfully.",
+		       arg_vg_name);
+	}
+
+	rv = do_drop();
+	if (rv)
+		syslog(LOG_EMERG, "lvmlockctl --drop %s failed.", arg_vg_name);
+	else
+		syslog(LOG_EMERG, "lvmlockctl --drop %s succeeded.", arg_vg_name);
 
 	return rv;
 }
diff --git a/include/configure.h.in b/include/configure.h.in
index 812cacc..9e7b127 100644
--- a/include/configure.h.in
+++ b/include/configure.h.in
@@ -643,6 +643,9 @@
 /* Path to lvm binary. */
 #undef LVM_PATH
 
+/* Path to lvm binary dir. */
+#undef LVM_DIR
+
 /* Define to 1 if `major', `minor', and `makedev' are declared in <mkdev.h>.
    */
 #undef MAJOR_IN_MKDEV
diff --git a/man/lvmlockctl.8_main b/man/lvmlockctl.8_main
index b7ac0ec..99698bc 100644
--- a/man/lvmlockctl.8_main
+++ b/man/lvmlockctl.8_main
@@ -65,17 +65,16 @@ and prints it.
 .SS kill
 
 This is run by sanlock when it loses access to the storage holding leases
-for a VG.  It currently emits a syslog message stating that the VG must
-be immediately deactivated.  In the future it may automatically attempt to
-forcibly deactivate the VG.  For more, see
-.BR lvmlockd (8).
+for a VG.  It calls 'blkdeactivate -l forcevg <vgname>' to forcibly
+deactivate the whole VG.  After successful deactivation, it performs the
+drop operation to clear the stale lockspace.  For more, see
+.BR lvmlockd (8), blkdeactivate (8).
 
 .SS drop
 
 This should only be run after a VG has been successfully deactivated
 following an lvmlockctl --kill command.  It clears the stale lockspace
-from lvmlockd.  In the future, this may become automatic along with an
-automatic handling of --kill.  For more, see
+from lvmlockd.  For more, see
 .BR lvmlockd (8).
 
 .SS gl-enable
-- 
1.8.3.1




* [PATCH LVM2 v1 1/2] blkdeactive: Introduce option "forcevg" to forcibly deactivate VG
  2021-02-25 11:04 [PATCH LVM2 v1 1/2] blkdeactive: Introduce option "forcevg" to forcibly deactivate VG Leo Yan
  2021-02-25 11:04 ` [PATCH LVM2 v1 2/2] lvmlockctl: Automatically handle failure Leo Yan
@ 2021-02-25 12:01 ` Zdenek Kabelac
  2021-02-25 12:39   ` Leo Yan
  1 sibling, 1 reply; 9+ messages in thread
From: Zdenek Kabelac @ 2021-02-25 12:01 UTC (permalink / raw)
  To: lvm-devel

On 25. 02. 21 at 12:04, Leo Yan wrote:
> From: Zhang Huan <zhanghuan@huayun.com>
> 
> This patch introduces a new option "forcevg" for LVM.  Its main purpose
> is to flush in-flight I/O operations and replace the LVs' device-mapper
> tables with the 'error' target; this is accomplished with the command
> "dmsetup wipe_table".

Hi

For the moment I'd prefer a different solution upstream.

We should first see what the chances are of supporting forcible
deactivation within the lvm2 code directly.

This 'out-of-control' external logic may cause significant data loss and
breakage, as the external tools have no good picture of the connections
between targets.

There needs to be something like: vgchange -an --force --yes

tried first when the lvm2 'metadata' is present, and eventually also
supporting work on targets without 'metadata' but with LVM- uuid devices
in the DM table.

Giving this to the hands of a fairly naive script like blkdeactivate isn't
the right choice here.

Zdenek




* [PATCH LVM2 v1 1/2] blkdeactive: Introduce option "forcevg" to forcibly deactivate VG
  2021-02-25 12:01 ` [PATCH LVM2 v1 1/2] blkdeactive: Introduce option "forcevg" to forcibly deactivate VG Zdenek Kabelac
@ 2021-02-25 12:39   ` Leo Yan
  2021-02-25 12:54     ` Zdenek Kabelac
  0 siblings, 1 reply; 9+ messages in thread
From: Leo Yan @ 2021-02-25 12:39 UTC (permalink / raw)
  To: lvm-devel

Hi Zdenek,

On Thu, Feb 25, 2021 at 01:01:36PM +0100, Zdenek Kabelac wrote:
> On 25. 02. 21 at 12:04, Leo Yan wrote:
> > From: Zhang Huan <zhanghuan@huayun.com>
> > 
> > This patch introduces a new option "forcevg" for LVM.  Its main purpose
> > is to flush in-flight I/O operations and replace the LVs' device-mapper
> > tables with the 'error' target; this is accomplished with the command
> > "dmsetup wipe_table".
> 
> Hi
> 
> For the moment I'd prefer a different solution upstream.
> 
> We should first see what the chances are of supporting forcible
> deactivation within the lvm2 code directly.
> 
> This 'out-of-control' external logic may cause significant data loss and
> breakage, as the external tools have no good picture of the connections
> between targets.
> 
> There needs to be something like: vgchange -an --force --yes

I have to admit that I am not familiar with the LVM internals, so it's
quite possible that my understanding is incomplete, but let me bring
up the question.

If we use the "vgchange -an --force --yes" or "lvchange" commands to
deactivate the VG or LV, it seems to me this is likely to create a
chicken-and-egg problem, and in the end it easily leads to a "deadlock".

The reason is that when the lock manager invokes "lvmlockctl --kill" to
kill a VG, it usually means the lock manager has detected drive
failures (e.g. the sanlock lock manager finds it cannot renew its lease
due to I/O failures); in this case, the node likely has no way to
access the metadata anymore.  So if we use the "vgchange" command to
deactivate the VG, it may get stuck for a long time.

And before we use "vgchange" to deactivate the VG, I have another
concern: if an LV is mounted (so the device-mapper device is in use),
there is a dependency on unmounting the LV first, otherwise this may
also cause problems for the "vgchange" command.

What do you think about this?

Thanks for suggestions,
Leo

> tried first when the lvm2 'metadata' is present, and eventually also
> supporting work on targets without 'metadata' but with LVM- uuid devices
> in the DM table.
> 
> Giving this to the hands of a fairly naive script like blkdeactivate isn't
> the right choice here.
> 
> Zdenek
> 




* [PATCH LVM2 v1 1/2] blkdeactive: Introduce option "forcevg" to forcibly deactivate VG
  2021-02-25 12:39   ` Leo Yan
@ 2021-02-25 12:54     ` Zdenek Kabelac
  2021-02-25 16:47       ` David Teigland
  0 siblings, 1 reply; 9+ messages in thread
From: Zdenek Kabelac @ 2021-02-25 12:54 UTC (permalink / raw)
  To: lvm-devel

On 25. 02. 21 at 13:39, Leo Yan wrote:
> Hi Zdenek,
> 
> On Thu, Feb 25, 2021 at 01:01:36PM +0100, Zdenek Kabelac wrote:
>> On 25. 02. 21 at 12:04, Leo Yan wrote:
>>> From: Zhang Huan <zhanghuan@huayun.com>
>>>
>>> This patch introduces a new option "forcevg" for LVM.  Its main purpose
>>> is to flush in-flight I/O operations and replace the LVs' device-mapper
>>> tables with the 'error' target; this is accomplished with the command
>>> "dmsetup wipe_table".
>>
>> Hi
>>
>> For the moment I'd prefer a different solution upstream.
>>
>> We should first see what the chances are of supporting forcible
>> deactivation within the lvm2 code directly.
>>
>> This 'out-of-control' external logic may cause significant data loss and
>> breakage, as the external tools have no good picture of the connections
>> between targets.
>>
>> There needs to be something like: vgchange -an --force --yes
> 
> I have to admit that I am not familiar with the LVM internals, so it's
> quite possible that my understanding is incomplete, but let me bring
> up the question.
> 
> If we use the "vgchange -an --force --yes" or "lvchange" commands to
> deactivate the VG or LV, it seems to me this is likely to create a
> chicken-and-egg problem, and in the end it easily leads to a "deadlock".

lvm supports the options --nolocking and --noudevsync,
so there should be a mechanism to bypass many problems when necessary -

it's just that we shouldn't use the biggest hammer as the first thing we try.

the option --force should be able to gradually escalate its actions,

depending on what kind of trouble the user is expecting.

But under-cutting the device with an error target should definitely be the
last resort.

> The reason is that when the lock manager invokes "lvmlockctl --kill" to
> kill a VG, it usually means the lock manager has detected drive
> failures (e.g. the sanlock lock manager finds it cannot renew its lease
> due to I/O failures); in this case, the node likely has no way to
> access the metadata anymore.  So if we use the "vgchange" command to
> deactivate the VG, it may get stuck for a long time.

Yep - there also likely needs to be an improved mechanism to recognize that
the lock manager is in some limbo state.

Zdenek




* [PATCH LVM2 v1 1/2] blkdeactive: Introduce option "forcevg" to forcibly deactivate VG
  2021-02-25 12:54     ` Zdenek Kabelac
@ 2021-02-25 16:47       ` David Teigland
  0 siblings, 0 replies; 9+ messages in thread
From: David Teigland @ 2021-02-25 16:47 UTC (permalink / raw)
  To: lvm-devel

On Thu, Feb 25, 2021 at 01:54:35PM +0100, Zdenek Kabelac wrote:
> lvm supports the options --nolocking and --noudevsync,
> so there should be a mechanism to bypass many problems when necessary -
> 
> it's just that we shouldn't use the biggest hammer as the first thing we try.
> 
> the option --force should be able to gradually escalate its actions,
> 
> depending on what kind of trouble the user is expecting.
> 
> But under-cutting the device with an error target should definitely be the
> last resort.

I think we need a solution like you're talking about, specifically a way
to deactivate a VG without metadata, based only on info from dm devices.
It's needed for a couple of other cases I know of also.

If we had a command like that, we could also add an option for it to go
directly to wipe_table.  (In this specific sanlock scenario we know that
the devices are not responsive, so we know that intermediate steps would
just get stuck.)
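
(For reference, a sketch of the kind of DM-only query being discussed;
the selector below is copied from the blkdeactivate patch in this
thread, and "testvg" is only a placeholder VG name:)

  # List the VG's top-level LVs using device-mapper info alone, then
  # replace their tables with the 'error' target.
  dmsetup info -c -S 'uuid=~LVM && vgname=testvg && lv_layer=""' \
          -o name --noheadings | xargs dmsetup wipe_table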

Until we have that command, blkdeactivate seems to be a workable option,
and it's simple to change it later with a config setting.




* [PATCH LVM2 v1 2/2] lvmlockctl: Automatically handle failure
  2021-02-25 11:04 ` [PATCH LVM2 v1 2/2] lvmlockctl: Automatically handle failure Leo Yan
@ 2021-03-02  0:40   ` David Teigland
  2021-03-02 17:52     ` David Teigland
  0 siblings, 1 reply; 9+ messages in thread
From: David Teigland @ 2021-03-02  0:40 UTC (permalink / raw)
  To: lvm-devel

On Thu, Feb 25, 2021 at 07:04:51PM +0800, Leo Yan wrote:
> From: Zhang Huan <zhanghuan@huayun.com>
> 
> When the lock manager detects a drive failure, it invokes the
> "lvmlockctl" command to handle the failure; in this case, lvmlockctl
> automatically calls "blkdeactivate -l forcevg" to deactivate the VG
> and then calls drop_vg() to clean up the lockspace.

Hi, I have a couple commits to replace this one, which make it
configurable.  There are a couple small things left to look at.
See this branch:

https://sourceware.org/git/?p=lvm2.git;a=shortlog;h=refs/heads/dev-dct-lvmlockctl-kill-1

I've been testing this by just setting lvm.conf
	lvmlockctl_kill_command="vgchange -an"
and running lvmlockctl --kill <vgname>

We'll likely leave lvmlockctl_kill_command empty (disabled) by default for
now, with a suggestion to consider the blkdeactivate command.

Dave




* [PATCH LVM2 v1 2/2] lvmlockctl: Automatically handle failure
  2021-03-02  0:40   ` David Teigland
@ 2021-03-02 17:52     ` David Teigland
  2021-03-03  3:48       ` Leo Yan
  0 siblings, 1 reply; 9+ messages in thread
From: David Teigland @ 2021-03-02 17:52 UTC (permalink / raw)
  To: lvm-devel

After some more changes this seems to be about done, please let me know if
this works for you.  I'm thinking about holding off on the blkdeactivate
change for a while to see if we can get a better alternative in the near
term (to avoid carrying the temporary forcevg option.)

https://sourceware.org/git/?p=lvm2.git;a=shortlog;h=refs/heads/dev-dct-lvmlockctl-kill-2

Dave




* [PATCH LVM2 v1 2/2] lvmlockctl: Automatically handle failure
  2021-03-02 17:52     ` David Teigland
@ 2021-03-03  3:48       ` Leo Yan
  0 siblings, 0 replies; 9+ messages in thread
From: Leo Yan @ 2021-03-03  3:48 UTC (permalink / raw)
  To: lvm-devel

Hi David,

On Tue, Mar 02, 2021 at 11:52:01AM -0600, David Teigland wrote:
> After some more changes this seems to be about done, please let me know if
> this works for you.  I'm thinking about holding off on the blkdeactivate
> change for a while to see if we can get a better alternative in the near
> term (to avoid carrying the temporary forcevg option.)
> 
> https://sourceware.org/git/?p=lvm2.git;a=shortlog;h=refs/heads/dev-dct-lvmlockctl-kill-2

I verified the patches on the branch "dev-dct-lvmlockctl-kill-2";
below is the testing result.

I set the following in lvm.conf:

  lvmlockctl_kill_command = "blkdeactivate -l forcevg"

After the failure happens, the device-mapper tables are set to the
"error" target and the lockspace for the failed VG is removed, so I
can confirm the failure handling works as expected.

But I found a build failure caused by the patch
"lvmlockctl: use lvm.conf lvmlockctl_kill_command"; you should include
the change below:

diff --git a/include/configure.h.in b/include/configure.h.in
index 812cacc..9e7b127 100644
--- a/include/configure.h.in
+++ b/include/configure.h.in
@@ -643,6 +643,9 @@
 /* Path to lvm binary. */
 #undef LVM_PATH
 
+/* Path to lvm binary dir. */
+#undef LVM_DIR
+
 /* Define to 1 if `major', `minor', and `makedev' are declared in <mkdev.h>.
    */
 #undef MAJOR_IN_MKDEV


Thank you a lot!

Leo




