linux-lvm.redhat.com archive mirror
* [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd
@ 2022-11-01  5:36 Zhiyong Ye
  2022-11-01 14:42 ` David Teigland
  0 siblings, 1 reply; 9+ messages in thread
From: Zhiyong Ye @ 2022-11-01  5:36 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: damon.devops, David Teigland

Hi all,

I want to implement live migration of VMs in an lvm + lvmlockd + 
sanlock environment. Multiple hosts in the cluster share the same iSCSI 
connection, and the VMs run on thinlv volumes in this environment. But 
live migrating a VM is difficult, since a thinlv from a shared thin 
pool can only be active exclusively on one host at a time.
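To illustrate the restriction, a hypothetical session on two hosts might look like this (the volume group and LV names are made up, and the exact behavior depends on the lvm version):

```shell
# On host1: activate the thin LV (with lvmlockd this takes an
# exclusive sanlock lock covering the thin pool). "vg0"/"thin1"
# are illustrative names.
lvchange -ay vg0/thin1

# On host2: activating the same thinlv, or any sibling thinlv from
# the same pool, fails while host1 holds the exclusive lock.
lvchange -ay vg0/thin1      # fails: lock held by host1

# Shared activation is refused outright for thin LV types, so there
# is no window in which both hosts can have the volume active.
lvchange -asy vg0/thin1     # fails: shared activation not allowed
```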

I found a previous subject that discussed this issue:

https://lore.kernel.org/all/20180305165926.GA20527@redhat.com/

During live migration, the VM on the source host is suspended after its 
in-flight IO has drained, and no new IO is issued until the VM resumes 
on the destination host. Dave recommended deactivating the volumes on 
the source and activating them on the destination within this time 
window.

However, executing the deactivate/activate commands for thinlv volumes 
during a live migration causes the VM guest to receive an ACPI event, 
and the guest then assumes the disk device has been removed.
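A sketch of that handoff, with made-up names — the ordering relative to the migration pause is what matters:

```shell
# During the migration pause window, after the source VM has stopped
# issuing IO ("vg0"/"thin1" are hypothetical names):

# Source host: deactivate the thinlv, releasing the sanlock lease.
lvchange -an vg0/thin1

# Destination host: activate it before the VM is resumed there.
lvchange -ay vg0/thin1

# Only then is the destination VM resumed and allowed to issue IO.
```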

Or maybe my understanding is off. Can I ask for your help?

Regards,

Zhiyong Ye

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



* Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd
  2022-11-01  5:36 [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd Zhiyong Ye
@ 2022-11-01 14:42 ` David Teigland
  2022-11-01 17:02   ` Zhiyong Ye
  2022-11-01 18:08   ` Stuart D Gathman
  0 siblings, 2 replies; 9+ messages in thread
From: David Teigland @ 2022-11-01 14:42 UTC (permalink / raw)
  To: Zhiyong Ye; +Cc: damon.devops, LVM general discussion and development

On Tue, Nov 01, 2022 at 01:36:17PM +0800, Zhiyong Ye wrote:
> Hi all,
> 
> I want to implement live migration of VMs in the lvm + lvmlockd + sanlock
> environment. There are multiple hosts in the cluster using the same iscsi
> connection, and the VMs are running on this environment using thinlv
> volumes. But if want to live migrate the vm, it will be difficult since
> thinlv which from the same thin pool can only be exclusive active on one
> host.
> 
> I found a previous subject that discussed this issue:
> 
> https://lore.kernel.org/all/20180305165926.GA20527@redhat.com/

Hi, in that email I tried to point out that the real problem is not the
locking, but rather the inability of dm-thin to share a thin pool among
multiple hosts.  The locking restrictions just reflect that technical
limitation.

Dave

* Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd
  2022-11-01 14:42 ` David Teigland
@ 2022-11-01 17:02   ` Zhiyong Ye
  2022-11-01 17:57     ` David Teigland
  2022-11-01 18:08   ` Stuart D Gathman
  1 sibling, 1 reply; 9+ messages in thread
From: Zhiyong Ye @ 2022-11-01 17:02 UTC (permalink / raw)
  To: David Teigland; +Cc: damon.devops, LVM general discussion and development

Hi Dave,

Thank you for your reply!

Does this mean that there is no way to live migrate VMs when using lvmlockd?

As you describe it, sharing/unsharing a thinlv would have to be 
coordinated at the granularity of individual read/write IOs; lvmlockd 
merely enforces that limitation at the level of the lvm activation 
commands.

Would it be possible to modify lvmlockd to lift this restriction, and 
let libvirt/qemu guarantee the mutual exclusion of read/write IO across 
hosts during live migration?

Thanks!

Zhiyong

On 11/1/22 10:42 PM, David Teigland wrote:
> On Tue, Nov 01, 2022 at 01:36:17PM +0800, Zhiyong Ye wrote:
>> Hi all,
>>
>> I want to implement live migration of VMs in the lvm + lvmlockd + sanlock
>> environment. There are multiple hosts in the cluster using the same iscsi
>> connection, and the VMs are running on this environment using thinlv
>> volumes. But if want to live migrate the vm, it will be difficult since
>> thinlv which from the same thin pool can only be exclusive active on one
>> host.
>>
>> I found a previous subject that discussed this issue:
>>
>> https://lore.kernel.org/all/20180305165926.GA20527@redhat.com/
> 
> Hi, in that email I tried to point out that the real problem is not the
> locking, but rather the inability of dm-thin to share a thin pool among
> multiple hosts.  The locking restrictions just reflect that technical
> limitation.
> 
> Dave
> 


* Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd
  2022-11-01 17:02   ` Zhiyong Ye
@ 2022-11-01 17:57     ` David Teigland
  2022-11-01 18:15       ` Demi Marie Obenour
  2022-11-02  9:01       ` Zhiyong Ye
  0 siblings, 2 replies; 9+ messages in thread
From: David Teigland @ 2022-11-01 17:57 UTC (permalink / raw)
  To: Zhiyong Ye; +Cc: damon.devops, LVM general discussion and development

On Wed, Nov 02, 2022 at 01:02:27AM +0800, Zhiyong Ye wrote:
> Hi Dave,
> 
> Thank you for your reply!
> 
> Does this mean that there is no way to live migrate VMs when using lvmlockd?

You could with linear LVs; oVirt does this using sanlock directly,
since lvmlockd arrived later.
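For a linear LV, both hosts can hold the volume active under a shared lock for the duration of the migration — a rough sketch with illustrative names:

```shell
# Source and destination hosts both activate the linear LV with a
# shared sanlock lock ("vg0"/"linear1" are made-up names):
lvchange -asy vg0/linear1

# ... qemu performs the live migration; the hypervisors ensure only
# one side is actually writing at any moment ...

# Afterwards, the source host drops its activation:
lvchange -an vg0/linear1
```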

> As you describe, the granularity of thinlv's sharing/unsharing is per
> read/write IO, except that lvmlockd reinforces this limitation for the lvm
> activation command.
> 
> Is it possible to modify the code of lvmlockd to break this limitation and
> let libvirt/qemu guarantee the mutual exclusivity of each read/write IO
> across hosts when live migration?

lvmlockd locking does not apply to the dm i/o layers.  The kind of
multi-host locking that you seem to be talking about would need to be
implemented inside dm-thin to protect on-disk data structures that it
modifies.  In reality you would need to write a new dm target with locking
and data structures designed for that kind of sharing.

Dave

* Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd
  2022-11-01 14:42 ` David Teigland
  2022-11-01 17:02   ` Zhiyong Ye
@ 2022-11-01 18:08   ` Stuart D Gathman
  2022-11-02  9:31     ` Zhiyong Ye
  1 sibling, 1 reply; 9+ messages in thread
From: Stuart D Gathman @ 2022-11-01 18:08 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: Zhiyong Ye, damon.devops

> On Tue, Nov 01, 2022 at 01:36:17PM +0800, Zhiyong Ye wrote:
>> I want to implement live migration of VMs in the lvm + lvmlockd + sanlock
>> environment. There are multiple hosts in the cluster using the same iscsi
>> connection, and the VMs are running on this environment using thinlv
>> volumes. But if want to live migrate the vm, it will be difficult since
>> thinlv which from the same thin pool can only be exclusive active on one
>> host.

I just expose the LV (thin or not - I prefer not) as an iSCSI target
that the VM boots from.  There is only one host that manages a thin pool, 
and that is a single point of failure, but no locking issues.  You
issue the LVM commands on the iSCSI server (which I guess they call NAS
these days).
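With targetcli, exporting an LV as an iSCSI LUN from the storage host looks roughly like this (the VG/LV names and the IQN are illustrative):

```shell
# On the storage host: publish an LV as an iSCSI block backstore.
targetcli /backstores/block create name=vm1 dev=/dev/vg0/vm1

# Create a target and attach the backstore as a LUN.
targetcli /iscsi create iqn.2022-11.com.example:vm1
targetcli /iscsi/iqn.2022-11.com.example:vm1/tpg1/luns \
    create /backstores/block/vm1

# All LVM commands (resize, snapshot, ...) run only on this host, e.g.:
lvextend -L +10G vg0/vm1
```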

If you need a way for a VM to request enlarging an LV it accesses, or
similar interaction, I would make a simple API where each VM gets a
token that determines what LVs it has access to and how much total
storage it can consume.  Maybe someone has already done that.
I just issue the commands on the LVM/NAS/iSCSI host.

I haven't done this, but there can be more than one thin pool, each on
its own NAS/iSCSI server.  So if one storage server crashes, only the
VMs attached to it crash.  You can only (simply) migrate a VM to
another VM host on the same storage server.

BUT, you can migrate a VM to another host less instantly using DRBD
or other remote mirroring driver.  I have done this.  You get the
remote LV mirror mostly synced, suspend the VM (to a file if you need
to rsync that to the remote), finish the sync of the LV(s), resume the
VM on the new server - in another city.  Handy when you have a few hours
notice of a natural disaster (hurricane/flood).
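The DRBD-based sequence above can be sketched like this — the resource and domain names are hypothetical, the save-file path is distro-specific, and the exact invocations would need checking against the drbdadm and virsh man pages:

```shell
# 1. Bring up the DRBD resource mirroring the VM's LV to the remote
#    site and let it sync in the background ("r0" is a made-up name):
drbdadm up r0
drbdadm primary r0           # local side stays primary while VM runs

# 2. Once the mirror is mostly caught up, suspend the VM and save its
#    state to a file, then copy that file to the remote host:
virsh managedsave vm1
rsync -a /var/lib/libvirt/qemu/save/vm1.save \
    remote:/var/lib/libvirt/qemu/save/

# 3. Let DRBD finish syncing, then swap the roles:
drbdadm secondary r0         # on the old host
drbdadm primary r0           # on the new host

# 4. Resume the VM on the new host from the saved state:
virsh start vm1
```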


* Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd
  2022-11-01 17:57     ` David Teigland
@ 2022-11-01 18:15       ` Demi Marie Obenour
  2022-11-02  9:18         ` Zhiyong Ye
  2022-11-02  9:01       ` Zhiyong Ye
  1 sibling, 1 reply; 9+ messages in thread
From: Demi Marie Obenour @ 2022-11-01 18:15 UTC (permalink / raw)
  To: LVM general discussion and development, Zhiyong Ye; +Cc: damon.devops



On Tue, Nov 01, 2022 at 12:57:56PM -0500, David Teigland wrote:
> On Wed, Nov 02, 2022 at 01:02:27AM +0800, Zhiyong Ye wrote:
> > Hi Dave,
> > 
> > Thank you for your reply!
> > 
> > Does this mean that there is no way to live migrate VMs when using lvmlockd?
> 
> You could by using linear LVs, ovirt does this using sanlock directly,
> since lvmlockd arrived later.

Another approach would be to use thin provisioning on the SAN instead of
at the LVM level.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


* Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd
  2022-11-01 17:57     ` David Teigland
  2022-11-01 18:15       ` Demi Marie Obenour
@ 2022-11-02  9:01       ` Zhiyong Ye
  1 sibling, 0 replies; 9+ messages in thread
From: Zhiyong Ye @ 2022-11-02  9:01 UTC (permalink / raw)
  To: David Teigland; +Cc: damon.devops, LVM general discussion and development



On 11/2/22 1:57 AM, David Teigland wrote:
> On Wed, Nov 02, 2022 at 01:02:27AM +0800, Zhiyong Ye wrote:
>> Hi Dave,
>>
>> Thank you for your reply!
>>
>> Does this mean that there is no way to live migrate VMs when using lvmlockd?
> 
> You could by using linear LVs, ovirt does this using sanlock directly,
> since lvmlockd arrived later.
> 

Yes, a standard LV is theoretically capable of live migration, because 
it supports multiple hosts using the same LV concurrently under a 
shared lock (lvchange -asy). But I want to support live migration for 
both LV types (thin LV and standard LV).

>> As you describe, the granularity of thinlv's sharing/unsharing is per
>> read/write IO, except that lvmlockd reinforces this limitation for the lvm
>> activation command.
>>
>> Is it possible to modify the code of lvmlockd to break this limitation and
>> let libvirt/qemu guarantee the mutual exclusivity of each read/write IO
>> across hosts when live migration?
> 
> lvmlockd locking does not apply to the dm i/o layers.  The kind of
> multi-host locking that you seem to be talking about would need to be
> implemented inside dm-thin to protect on-disk data structures that it
> modifies.  In reality you would need to write a new dm target with locking
> and data structures designed for that kind of sharing.

I could try to write a new dm-thin target, or modify the existing 
dm-thin target to support this feature, if it is technically feasible. 
But I'm curious why the current dm-thin doesn't support multi-host 
shared access the way dm-linear does.

Regards!

Zhiyong


* Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd
  2022-11-01 18:15       ` Demi Marie Obenour
@ 2022-11-02  9:18         ` Zhiyong Ye
  0 siblings, 0 replies; 9+ messages in thread
From: Zhiyong Ye @ 2022-11-02  9:18 UTC (permalink / raw)
  To: Demi Marie Obenour, LVM general discussion and development; +Cc: damon.devops


Hi Demi,

Thank you for your reply!

Using thin provisioning on the storage server (SAN) side would make the 
problem much easier, but my scenario requires supporting different 
types of SAN, some of which may not support this feature.

On 11/2/22 2:15 AM, Demi Marie Obenour wrote:
> On Tue, Nov 01, 2022 at 12:57:56PM -0500, David Teigland wrote:
>> On Wed, Nov 02, 2022 at 01:02:27AM +0800, Zhiyong Ye wrote:
>>> Hi Dave,
>>>
>>> Thank you for your reply!
>>>
>>> Does this mean that there is no way to live migrate VMs when using lvmlockd?
>>
>> You could by using linear LVs, ovirt does this using sanlock directly,
>> since lvmlockd arrived later.
> 
> Another approach would be to use thin provisioning on the SAN instead of
> at the LVM level.


* Re: [linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd
  2022-11-01 18:08   ` Stuart D Gathman
@ 2022-11-02  9:31     ` Zhiyong Ye
  0 siblings, 0 replies; 9+ messages in thread
From: Zhiyong Ye @ 2022-11-02  9:31 UTC (permalink / raw)
  To: Stuart D Gathman, LVM general discussion and development; +Cc: damon.devops

Hi Gathman,

Thank you so much for sharing your usage scenario; I learned a lot 
from your experience.

Regards!

Zhiyong

On 11/2/22 2:08 AM, Stuart D Gathman wrote:
> > On Tue, Nov 01, 2022 at 01:36:17PM +0800, Zhiyong Ye wrote:
>>> I want to implement live migration of VMs in the lvm + lvmlockd + 
>>> sanlock
>>> environment. There are multiple hosts in the cluster using the same 
>>> iscsi
>>> connection, and the VMs are running on this environment using thinlv
>>> volumes. But if want to live migrate the vm, it will be difficult since
>>> thinlv which from the same thin pool can only be exclusive active on one
>>> host.
> 
> I just expose the LV (thin or not - I prefer not) as an iSCSI target
> that the VM boots from.  There is only one host that manages a thin 
> pool, and that is a single point of failure, but no locking issues.  You
> issue the LVM commands on the iSCSI server (which I guess they call NAS
> these days).
> 
> If you need a way for a VM to request enlarging an LV it accesses, or
> similar interaction, I would make a simple API where each VM gets a
> token that determines what LVs it has access to and how much total
> storage it can consume.  Maybe someone has already done that.
> I just issue the commands on the LVM/NAS/iSCSI host.
> 
> I haven't done this, but there can be more than one thin pool, each on
> it's own NAS/iSCSI server.  So if one storage server crashes, then
> only the VMs attached to it crash.  You can only (simply) migrate a VM 
> to another VM host on the same storage server.
> 
> BUT, you can migrate a VM to another host less instantly using DRBD
> or other remote mirroring driver.  I have done this.  You get the
> remote LV mirror mostly synced, suspend the VM (to a file if you need
> to rsync that to the remote), finish the sync of the LV(s), resume the
> VM on the new server - in another city.  Handy when you have a few hours
> notice of a natural disaster (hurricane/flood).

