linux-lvm.redhat.com archive mirror
* [linux-lvm] About online pvmove/lvresize on shared VG
@ 2020-07-08  3:55 Gang He
  2020-07-08 16:05 ` David Teigland
  0 siblings, 1 reply; 4+ messages in thread
From: Gang He @ 2020-07-08  3:55 UTC (permalink / raw)
  To: linux-lvm

Hello List,

I am using lvm2-2.03.05 and looking at online pvmove/lvresize on a shared VG, since there were some problems in the old code.
I have set up a three-node cluster with one shared VG/LV and a cluster file system on top of the LV.
e.g.
primitive ocfs2-2 Filesystem \
        params device="/dev/vg1/lv1" directory="/mnt/ocfs2" fstype=ocfs2 options=acl \
        op monitor interval=20 timeout=40
primitive vg1 LVM-activate \
        params vgname=vg1 vg_access_mode=lvmlockd activation_mode=shared \
        op start timeout=90s interval=0 \
        op stop timeout=90s interval=0 \
        op monitor interval=30s timeout=90s \
        meta target-role=Started
group base-group dlm lvmlockd vg1 ocfs2-2
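
(For reference, a lockd-type shared VG like this is typically created along the lines below; /dev/sdb and the 100G size are just placeholders, and dlm/lvmlockd must already be running on each node.)

vgcreate --shared vg1 /dev/sdb   # create the VG with a lockd lock type
vgchange --lockstart vg1         # start the VG lockspace on each node
lvcreate -L 100G -n lv1 vg1      # create the LV (example size)
lvchange -asy vg1/lv1            # activate it in shared mode on each node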

Now, I can do an online LV extend from one node (good),
but I cannot do an online LV reduce from one node;
the workaround is to switch the VG activation_mode to exclusive and run the lvreduce command on the node where the VG is activated.
Is this behaviour by design, or is it a bug?
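
Outside of Pacemaker, the raw command sequence for this workaround would be roughly the following sketch (vg1/lv1 as above; the 10G reduction is only an example, and the file system on top has to be shrunk or recreated first):

lvchange -an vg1/lv1       # deactivate the LV on every node
lvchange -aey vg1/lv1      # re-activate exclusively on one node
lvreduce -L -10G vg1/lv1   # reduce the LV on that node
lvchange -an vg1/lv1       # deactivate again, then
lvchange -asy vg1/lv1      # restore shared activation on each node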

For the pvmove command, I cannot do an online pvmove from one node either;
the workaround is again to switch the VG activation_mode to exclusive and run the pvmove command on the node where the VG is activated.
Is this behaviour by design? Will there be enhancements in the future,
or is there any workaround to run pvmove under shared activation_mode, e.g. can the --lockopt option help in this situation?
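
For example, the pvmove workaround I mean looks roughly like this (/dev/sdb and /dev/sdc are placeholder devices):

lvchange -an vg1/lv1       # deactivate the LV on the other nodes
lvchange -aey vg1/lv1      # activate exclusively on the remaining node
pvmove /dev/sdb /dev/sdc   # move the extents off /dev/sdb there
lvchange -an vg1/lv1       # afterwards deactivate and
lvchange -asy vg1/lv1      # restore shared activation on each node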

Thanks a lot.
Gang


Thread overview: 4+ messages
2020-07-08  3:55 [linux-lvm] About online pvmove/lvresize on shared VG Gang He
2020-07-08 16:05 ` David Teigland
2020-07-09  0:44   ` Gang He
2020-07-09 17:01     ` David Teigland
