linux-lvm.redhat.com archive mirror

* [linux-lvm] [lvmlockd] Refresh lvmlockd leases after sanlock changes
From: Damon Wang @ 2018-03-05  8:37 UTC
  To: linux-lvm

Hi all,

I set up an environment with lvm + lvmlockd + sanlock.

After activating an LV exclusively, we can see the lock status via "lvmlockctl
-i" or "sanlock status", and the two must agree.

But if I use "sanlock client release -r xxxxxxx" to release the lock
manually, "lvmlockctl -i" shows the same output as before, which means
lvmlockd still thinks the lock is held, while other hosts can now acquire it.
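
For reference, the sequence was roughly the following (vg/lv1 and the
resource string are placeholders, not real names):

    # on host A: activate exclusively, then compare the two views
    lvchange -aey vg/lv1
    lvmlockctl -i       # lvmlockd's view of the lease
    sanlock status      # sanlock's view -- the two agree

    # drop the lease behind lvmlockd's back
    sanlock client release -r LOCKSPACE:RESOURCE:PATH:OFFSET

    lvmlockctl -i       # still shows the lock as held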

So is there any way to refresh the lock leases inside lvmlockd?

Thanks!

Damon


P.S.

Why do I have this question and this environment?

I want to run VMs on several hosts attached to a SAN. My plan is that all
hosts log in to the SAN, which provides a LUN used as an LVM PV. Each VM gets
a thin LV as its root volume, and maybe some other thin LVs as data volumes.
lvmlockd then ensures that only one host changes the metadata at a time, and
lvm-thin provides the thin provisioning.
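
Concretely, the setup is roughly the following (the device path, sizes and
names are only examples):

    # one shared VG on the SAN LUN, locking handled by lvmlockd + sanlock
    vgcreate --shared vg /dev/mapper/san-lun
    vgchange --lock-start vg

    # a thin pool, plus a thin root volume per VM
    lvcreate --type thin-pool -L 500G -n pool0 vg
    lvcreate -V 20G --thinpool pool0 -n vm1-root vg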

But live migrating a VM is difficult, since a thin LV can only be exclusively
active on one host; if you want to activate it on another host, the only way
I have found is to release the lock manually with sanlock. If you have a
better way, please tell me, thanks a lot!

* Re: [linux-lvm] [lvmlockd] Refresh lvmlockd leases after sanlock changes
From: David Teigland @ 2018-03-05 16:59 UTC
  To: Damon Wang; +Cc: linux-lvm

On Mon, Mar 05, 2018 at 04:37:58PM +0800, Damon Wang wrote:
> hosts log in to the SAN, which provides a LUN used as an LVM PV. Each VM gets
> a thin LV as its root volume, and maybe some other thin LVs as data volumes.
> lvmlockd then ensures that only one host changes the metadata at a time, and
> lvm-thin provides the thin provisioning.

Thin LVs from the same thin pool cannot be used from different hosts
concurrently.  It's not because of lvm metadata, it's because of the way
dm-thin manages blocks that are shared between thin LVs.  This block
sharing/unsharing occurs as each read/write happens on the block device,
not on LV activation or in any lvm command.

lvmlockd uses locks on the thin pool to enforce the dm-thin limitations.
If you manually remove the locks, you'll get a corrupted thin pool.
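
Roughly, the enforcement looks like this (example names):

    [host1]# lvchange -aey vg/vm1-root   # takes the pool lock via lvmlockd
    [host2]# lvchange -aey vg/vm1-root   # fails while host1 holds the lock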

> But live migrating a VM is difficult, since a thin LV can only be exclusively
> active on one host; if you want to activate it on another host, the only way
> I have found is to release the lock manually with sanlock. If you have a
> better way, please tell me, thanks a lot!

I suggest trying https://ovirt.org

You need to release the lock on the source host after the vm is suspended,
and acquire the lock on the destination host before the vm is resumed.
There are hooks in libvirt to do this.  The LV shouldn't be active on both
hosts at once.  
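
A sketch of such a hook (the VG/LV naming is hypothetical, and mapping a
guest to its LV is left out):

    #!/bin/sh
    # /etc/libvirt/hooks/qemu -- invoked as: qemu <guest> <operation> <sub-op> <extra>
    guest=$1; op=$2; subop=$3
    case "$op/$subop" in
        prepare/begin)  # destination host, before qemu starts
            lvchange -aey vg/"$guest"-root ;;
        release/end)    # source host, after the domain is gone
            lvchange -an vg/"$guest"-root ;;
    esac
    exit 0

The catch is that the exclusive activation on the destination can only
succeed once the source host has released the lock.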

Dave

* Re: [linux-lvm] [lvmlockd] Refresh lvmlockd leases after sanlock changes
From: Damon Wang @ 2018-03-07  5:50 UTC
  To: David Teigland; +Cc: linux-lvm

Hi Dave,

Thank you for your reply!

> Thin LVs from the same thin pool cannot be used from different hosts
> concurrently.  It's not because of lvm metadata, it's because of the way
> dm-thin manages blocks that are shared between thin LVs.  This block
> sharing/unsharing occurs as each read/write happens on the block device,
> not on LV activation or in any lvm command.

My plan is that each VM has one thin LV as its root volume, and each thin LV
gets its own thin pool. Is this a way to avoid the problem of block sharing
within a thin pool?
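
That is, something like this per VM (names and sizes are examples):

    lvcreate --type thin-pool -L 40G -n vm1pool vg
    lvcreate -V 20G --thinpool vm1pool -n vm1-root vg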

> I suggest trying https://ovirt.org

I did some research on oVirt; there are two designs now
(https://github.com/oVirt/vdsm/blob/master/doc/thin-provisioning.md),
and I found it relies heavily on the SPM host: once the SPM host fails, the
availability of all VMs is affected, which is something we don't want to see.


> You need to release the lock on the source host after the vm is suspended,
> and acquire the lock on the destination host before the vm is resumed.
> There are hooks in libvirt to do this.  The LV shouldn't be active on both
> hosts at once.

I did some experiments on this, since I had read the libvirt migration hook
page (https://libvirt.org/hooks.html#qemu_migration), and it does not seem
to help here. I wrote a simple script and confirmed that the hook execution
order is:

   1. on the destination host: "migrate begin", "prepare begin",
   "start begin", "started begin"
   2. after a while (usually a few seconds), on the source host:
   "stopped end" and "release end"

In short, it does not provide a way to do something at the moment the VM is
suspended and resumed. 🙁
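
A logger along these lines is enough to reproduce this (the log path is
arbitrary):

    #!/bin/sh
    # /etc/libvirt/hooks/qemu -- log every invocation to see the call order
    echo "$(date '+%F %T') $(hostname) args: $*" >> /var/log/qemu-hook.log
    exit 0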

Thanks!

Damon

* Re: [linux-lvm] [lvmlockd] Refresh lvmlockd leases after sanlock changes
From: Damon Wang @ 2018-03-07  7:11 UTC
  To: David Teigland; +Cc: linux-lvm

Besides, during migration libvirt will make sure that only one host is using
(reading/writing) the LV, and I'm trying to find a way to deactivate the LV
after migrating, so that there is always only one host doing I/O on a thin LV.
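
That is, something like this on the source host once migration completes
(example name):

    lvchange -an vg/vm1-root   # lvmlockd releases the sanlock lease with it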

Damon

* Re: [linux-lvm] [lvmlockd] Refresh lvmlockd leases after sanlock changes
From: Damon Wang @ 2018-03-07  8:14 UTC
  To: David Teigland; +Cc: linux-lvm

Hi Dave,

Thank you for your reply!

> Thin LVs from the same thin pool cannot be used from different hosts
> concurrently.  It's not because of lvm metadata, it's because of the way
> dm-thin manages blocks that are shared between thin LVs.  This block
> sharing/unsharing occurs as each read/write happens on the block device,
> not on LV activation or in any lvm command.

My plan is that each VM has one thin LV as its root volume, and each thin LV
gets its own thin pool. Is this a way to avoid the problem of block sharing
within a thin pool?

Besides, during migration libvirt will make sure that only one host is using
(reading/writing) the LV, and I'm trying to find a way to deactivate the LV
after migrating, so that there is always only one host doing I/O on a thin LV.

> I suggest trying https://ovirt.org

I did some research on oVirt; there are two designs now
(https://github.com/oVirt/vdsm/blob/master/doc/thin-provisioning.md),
and I found it relies heavily on the SPM host: once the SPM host fails, the
availability of all VMs is affected, which is something we don't want to see.


> You need to release the lock on the source host after the vm is suspended,
> and acquire the lock on the destination host before the vm is resumed.
> There are hooks in libvirt to do this.  The LV shouldn't be active on both
> hosts at once.


I did some experiments on this, since I had read the libvirt migration hook
page (https://libvirt.org/hooks.html#qemu_migration), and it does not seem
to help here. I wrote a simple script and confirmed that the hook execution
order is:

   1. on the destination host: "migrate begin", "prepare begin",
   "start begin", "started begin"
   2. after a while (usually a few seconds), on the source host:
   "stopped end" and "release end"

In short, it does not provide a way to do something at the moment the VM is
suspended and resumed. :(

Thanks!

Damon

P.S. Sorry that the previous post was in HTML format; it seems to have been dropped by Mailman.

2018-03-06 0:59 GMT+08:00 David Teigland <teigland@redhat.com>:
> On Mon, Mar 05, 2018 at 04:37:58PM +0800, Damon Wang wrote:
>> hosts log in to the SAN, which provides a LUN used as an LVM PV. Each VM gets
>> a thin LV as its root volume, and maybe some other thin LVs as data volumes.
>> lvmlockd then ensures that only one host changes the metadata at a time, and
>> lvm-thin provides the thin provisioning.
>
> Thin LVs from the same thin pool cannot be used from different hosts
> concurrently.  It's not because of lvm metadata, it's because of the way
> dm-thin manages blocks that are shared between thin LVs.  This block
> sharing/unsharing occurs as each read/write happens on the block device,
> not on LV activation or in any lvm command.
>
> lvmlockd uses locks on the thin pool to enforce the dm-thin limitations.
> If you manually remove the locks, you'll get a corrupted thin pool.
>
>> But live migrating a VM is difficult, since a thin LV can only be exclusively
>> active on one host; if you want to activate it on another host, the only way
>> I have found is to release the lock manually with sanlock. If you have a
>> better way, please tell me, thanks a lot!
>
> I suggest trying https://ovirt.org
>
> You need to release the lock on the source host after the vm is suspended,
> and acquire the lock on the destination host before the vm is resumed.
> There are hooks in libvirt to do this.  The LV shouldn't be active on both
> hosts at once.
>
> Dave
