* [linux-lvm] Why do lvcreate with clvmd insist on VG being available on all nodes?
@ 2012-11-14 15:16 Jacek Konieczny
  2012-11-15  9:09 ` Zdenek Kabelac
  0 siblings, 1 reply; 6+ messages in thread
From: Jacek Konieczny @ 2012-11-14 15:16 UTC (permalink / raw)
  To: linux-lvm

Hello,

I am building a system where I use clustered LVM on top of DRBD to provide
shared block devices in a cluster, and there is some behaviour here that I
neither quite like nor quite understand.

Currently I have two nodes in the cluster, running Corosync, DLM,
clvmd, DRBD, Pacemaker and my service.
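
For context, the shared VG sits on top of a DRBD device and was created with
clustered locking, roughly like this (the DRBD device name below is only a
placeholder, not the real one):

pvcreate /dev/drbd0
vgcreate --clustered y shared_vg /dev/drbd0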

Everything works fine when both nodes are up. When I put one node into standby
with 'crm node node1 standby' (which, among other things, stops DRBD there),
the other node is no longer fully functional.

If I leave DLM and clvmd running on the inactive node, then:

lvchange -aey shared_vg/vol_name
lvchange -aen shared_vg/vol_name

work properly, as I would expect (make the volume available/unavailable
on the node). But an attempt to create a new volume:

lvcreate -n new_volume -L 1M shared_vg

fails with:

Error locking on node 1: Volume group for uuid not found: Hlk5NeaVF0qhDF20RBq61EZaIj5yyUJgGyMo5AQcLfZpJS0DZUcgj7QMd3QPWICL

Indeed, the VG is not available on the standby node at that moment. But,
as it is not available there, I see no point in locking it there.
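
For what it's worth, the state can be checked on each node with something like
this (the sixth character of vg_attr is 'c' for a clustered VG, and the fifth
character of lv_attr is 'a' where the LV is active):

vgs -o vg_name,vg_attr shared_vg
lvs -o lv_name,lv_attr shared_vg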

Is there some real, important reason to block lvcreate in such a case?

I have also tried stopping (cleanly) dlm_controld and clvmd on the standby node,
hoping LVM would then behave as in a single-node cluster, but then even
volume activation fails with:

  cluster request failed: Host is down

…until I restart clvmd on the active host with 'clvmd -S'.

When clvmd is stopped on the inactive node and 'clvmd -S' has been run
on the active node, both 'lvchange' and 'lvcreate' work as
expected, but that doesn't look like a graceful switch-over. And another
'clvmd -S' then stops clvmd altogether (this seems like a bug to me).
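
So the only switch-over sequence that works for me looks roughly like this
(how clvmd and dlm_controld are stopped is init-system specific, so this is
only a sketch):

# on the node going to standby:
crm node node1 standby
# ...then stop clvmd and dlm_controld there (init-system specific)
# on the remaining active node:
clvmd -S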

And one more thing bothers me… my setup should scale to many nodes, of which
only two share the active storage (when using DRBD). But that won't work if
LVM refuses some operations whenever a VG is not available on all nodes.

Greets,
        Jacek


Thread overview: 6+ messages
2012-11-14 15:16 [linux-lvm] Why do lvcreate with clvmd insist on VG being available on all nodes? Jacek Konieczny
2012-11-15  9:09 ` Zdenek Kabelac
2012-11-15 10:08   ` Jacek Konieczny
2012-11-15 11:01     ` Zdenek Kabelac
2012-11-15 13:30       ` Jacek Konieczny
2012-11-15 16:40         ` Zdenek Kabelac
