linux-lvm.redhat.com archive mirror
* [linux-lvm] About online pvmove/lvresize on shared VG
@ 2020-07-08  3:55 Gang He
From: Gang He @ 2020-07-08  3:55 UTC (permalink / raw)
  To: linux-lvm

Hello List,

I am using lvm2-2.03.05 and looking at online pvmove/lvresize on a shared
VG, since there were some problems in the old code.
Now I have set up a three-node cluster with one shared VG/LV and a cluster
file system on top of the LV,
e.g.
primitive ocfs2-2 Filesystem \
params device="/dev/vg1/lv1" directory="/mnt/ocfs2" fstype=ocfs2 options=acl \
op monitor interval=20 timeout=40
primitive vg1 LVM-activate \
params vgname=vg1 vg_access_mode=lvmlockd activation_mode=shared \
op start timeout=90s interval=0 \
op stop timeout=90s interval=0 \
op monitor interval=30s timeout=90s \
meta target-role=Started
group base-group dlm lvmlockd vg1 ocfs2-2

Now I can do an online LV extend from one node (good),
but I cannot do an online LV reduce from one node;
the workaround is to switch the VG activation_mode to exclusive and run
the lvreduce command on the node where the VG is activated.
Is this behaviour by design, or is it a bug?
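For reference, the online extend that works looks roughly like this (the
size is just an example; growing the mounted ocfs2 filesystem afterwards is
a separate step specific to my setup):

```shell
# Run on one node while vg1 is activated in shared mode cluster-wide.
# The LV grows online; the mounted ocfs2 filesystem must then be resized
# separately (e.g. with tunefs.ocfs2).
lvextend -L +10G vg1/lv1
```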

For the pvmove command, I cannot do an online pvmove from one node;
the workaround is to switch the VG activation_mode to exclusive and run
the pvmove command on the node where the VG is activated.
Is this behaviour by design? Are any enhancements planned for the future,
or is there any workaround to run pvmove under shared activation_mode?
For example, can the --lockopt option help in this situation?
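Outside pacemaker, the manual equivalent of the workaround I use for both
lvreduce and pvmove is roughly the following (PV names and sizes are just
examples from my test setup):

```shell
# 1. Drop the shared activation on every node, then take an exclusive
#    activation on the node that will run the command.
lvchange -an vg1/lv1        # run on all nodes
lvchange -aey vg1/lv1       # run on one node only

# 2. Run the operation that is refused under shared activation.
lvreduce -L 50G vg1/lv1     # after shrinking the fs, where supported
pvmove /dev/sdb /dev/sdc    # or move extents between example PVs

# 3. Go back to shared activation across the cluster.
lvchange -an vg1/lv1
lvchange -asy vg1/lv1       # run on each node that needs the LV
```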

Thanks a lot.
Gang


* Re: [linux-lvm] About online pvmove/lvresize on shared VG
@ 2020-07-08 16:05 ` David Teigland
From: David Teigland @ 2020-07-08 16:05 UTC (permalink / raw)
  To: Gang He; +Cc: linux-lvm

On Wed, Jul 08, 2020 at 03:55:55AM +0000, Gang He wrote:
> but I cannot do an online LV reduce from one node;
> the workaround is to switch the VG activation_mode to exclusive and run
> the lvreduce command on the node where the VG is activated.
> Is this behaviour by design, or is it a bug?

It was intentional, since shrinking the cluster fs and LV isn't very common
(it's not supported for gfs2).

> For the pvmove command, I cannot do an online pvmove from one node;
> the workaround is to switch the VG activation_mode to exclusive and run
> the pvmove command on the node where the VG is activated.
> Is this behaviour by design? Are any enhancements planned for the future,
> or is there any workaround to run pvmove under shared activation_mode?
> For example, can the --lockopt option help in this situation?

pvmove is implemented with mirroring, so the mirroring would need to be
replaced with something that works with concurrent access, e.g. cluster md
raid1.  I suspect there are better approaches than pvmove for solving the
broader problem.

Dave


* Re: [linux-lvm] About online pvmove/lvresize on shared VG
@ 2020-07-09  0:44   ` Gang He
From: Gang He @ 2020-07-09  0:44 UTC (permalink / raw)
  To: David Teigland; +Cc: linux-lvm

Hi David,

Thanks for your reply.
A few more questions:

On 7/9/2020 12:05 AM, David Teigland wrote:
> On Wed, Jul 08, 2020 at 03:55:55AM +0000, Gang He wrote:
>> but I cannot do an online LV reduce from one node;
>> the workaround is to switch the VG activation_mode to exclusive and run
>> the lvreduce command on the node where the VG is activated.
>> Is this behaviour by design, or is it a bug?
> 
> It was intentional, since shrinking the cluster fs and LV isn't very common
> (it's not supported for gfs2).
OK, thanks for the confirmation.

> 
>> For the pvmove command, I cannot do an online pvmove from one node;
>> the workaround is to switch the VG activation_mode to exclusive and run
>> the pvmove command on the node where the VG is activated.
>> Is this behaviour by design? Are any enhancements planned for the future,
>> or is there any workaround to run pvmove under shared activation_mode?
>> For example, can the --lockopt option help in this situation?
> 
> pvmove is implemented with mirroring, so the mirroring would need to be
> replaced with something that works with concurrent access, e.g. cluster md
> raid1.  I suspect there are better approaches than pvmove for solving the
> broader problem.
Sorry, I am a little confused.
Will we be able to do an online pvmove in the future when the VG is
activated in shared mode? From the man page, I got the impression that
these limitations are temporary (i.e. not yet complete).
By the way, can the --lockopt option help in this situation? I cannot find
a detailed description of this option in the man page.

Thanks
Gang

> 
> Dave
> 


* Re: [linux-lvm] About online pvmove/lvresize on shared VG
@ 2020-07-09 17:01     ` David Teigland
From: David Teigland @ 2020-07-09 17:01 UTC (permalink / raw)
  To: Gang He; +Cc: linux-lvm

On Thu, Jul 09, 2020 at 08:44:23AM +0800, Gang He wrote:
> > pvmove is implemented with mirroring, so the mirroring would need to be
> > replaced with something that works with concurrent access, e.g. cluster md
> > raid1.  I suspect there are better approaches than pvmove for solving the
> > broader problem.
> Sorry, I am a little confused.
> Will we be able to do an online pvmove in the future when the VG is
> activated in shared mode?

I think it's unlikely, but possible if someone has interest.  I'd first
encourage them to think about new ways to solve the problem.

> From the man page, I got the impression that these limitations are
> temporary (i.e. not yet complete).

The wording should probably change.

> By the way, can the --lockopt option help in this situation? I cannot find
> a detailed description of this option in the man page.

No, --lockopt is for some one-off special cases where we might need to work
around ordinary locking behaviour (like --nolocking).  There's no workaround
for the problem that dm-mirror doesn't work under a shared LV.

Dave

