linux-lvm.redhat.com archive mirror
* [linux-lvm] Shared VG, Separate LVs
       [not found]                   ` <CALuPYL2sLeTe8M7KgBGPbMkZWBQAGaOkkj0SS6ovHTg_0SCmuQ@mail.gmail.com>
@ 2017-10-07  4:28                     ` Indivar Nair
  2017-10-13  9:11                       ` Eric Ren
  0 siblings, 1 reply; 9+ messages in thread
From: Indivar Nair @ 2017-10-07  4:28 UTC (permalink / raw)
  To: linux-lvm


Hi,

With CLVM / HA-LVM on a 2-node cluster -

Is it possible to have a shared VG but separate LVs, with each LV
exclusively activated on a different node? In case of a failure, the LV
of the failed node would be activated on the surviving node.

Regards,


Indivar Nair


* Re: [linux-lvm] Shared VG, Separate LVs
  2017-10-07  4:28                     ` [linux-lvm] Shared VG, Separate LVs Indivar Nair
@ 2017-10-13  9:11                       ` Eric Ren
  2017-10-13 10:40                         ` Indivar Nair
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Ren @ 2017-10-13  9:11 UTC (permalink / raw)
  To: LVM general discussion and development, Indivar Nair

Hi,

> With CLVM / HA-LVM on a 2-node cluster -
>
> Is it possible to have a shared VG but separate LVs, with each LV
> exclusively activated on a different node? In case of a failure, the
> LV of the failed node would be activated on the surviving node.

I think clvm can do what you want if you perform the LVM commands by
hand. But with an HA cluster manager (Pacemaker), you cannot do it with
the current resource agents (clvm + LVM) [1] [2], because they fail over
on a per-VG basis.

We are currently working on new resource agents [3] for lvmlockd [4].
The new agents can activate on a per-LV basis, but I wouldn't recommend
doing that unless there is a strong reason - it makes things much more
complicated.
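For instance, a rough sketch of the by-hand approach with clvmd running
on both nodes (the device, VG and LV names are made up - adjust them to
your environment; "-aey" requests cluster-wide exclusive activation):

# vgcreate -cy vg01 /dev/sdb /dev/sdc      <- clustered VG on the shared PVs
# lvcreate -L 1T -n lv01 vg01
# lvcreate -L 1T -n lv02 vg01
On node1:  # lvchange -aey vg01/lv01
On node2:  # lvchange -aey vg01/lv02
If node1 fails, fence it, then run "lvchange -aey vg01/lv01" on node2
and mount the filesystem there.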

[1] https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/clvm
[2] https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/LVM
[3] https://www.redhat.com/archives/linux-lvm/2017-January/msg00025.html
[4] https://github.com/ClusterLabs/resource-agents/pull/1040

Eric


* Re: [linux-lvm] Shared VG, Separate LVs
  2017-10-13  9:11                       ` Eric Ren
@ 2017-10-13 10:40                         ` Indivar Nair
  2017-10-16  3:06                           ` Eric Ren
  0 siblings, 1 reply; 9+ messages in thread
From: Indivar Nair @ 2017-10-13 10:40 UTC (permalink / raw)
  To: Eric Ren; +Cc: LVM general discussion and development


Thanks Eric,

I want to keep a single VG so that I can get the combined bandwidth
(LVM striping) of all the disks (PVs), plus the flexibility to adjust
the space allocation between both LVs. Each LV will be used by a
different department, and with one LV on each host I can distribute the
network bandwidth too. I would also like to take a snapshot of each LV
before backing it up.
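For example, something along these lines is what I have in mind (names
and sizes are placeholders only; striping has to be requested when the
LV is created, and snapshots need some free space left in the VG):

# lvcreate -i 8 -I 256 -L 100T -n lv01 vg01      <- stripe across 8 PVs, 256 KiB stripe size
# lvcreate -i 8 -I 256 -L 100T -n lv02 vg01
# lvcreate -s -L 1T -n lv01_snap /dev/vg01/lv01  <- snapshot before backup
  ... back up from /dev/vg01/lv01_snap ...
# lvremove /dev/vg01/lv01_snap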

I have been reading more about the CLVM + Pacemaker options. I can see
that it is possible to have the same VG activated on multiple hosts for
a GFS2 filesystem - in that case, the same PVs, VG and LV are activated
on all hosts.

In my case, we would have the same PVs and VG activated on both hosts,
but LV1 on Host01 and LV2 on Host02. I plan to use ext4 or XFS
filesystems.

Is there some possibility that it would work?

Regards,


Indivar Nair





* Re: [linux-lvm] Shared VG, Separate LVs
  2017-10-13 10:40                         ` Indivar Nair
@ 2017-10-16  3:06                           ` Eric Ren
  2017-11-03  6:38                             ` Indivar Nair
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Ren @ 2017-10-16  3:06 UTC (permalink / raw)
  To: Indivar Nair; +Cc: LVM general discussion and development


Hi,

On 10/13/2017 06:40 PM, Indivar Nair wrote:
> Thanks Eric,
>
> I want to keep a single VG so that I can get the combined bandwidth
> (LVM striping) of all the disks (PVs), plus the flexibility to adjust
> the space allocation between both LVs. Each LV will be used by a
> different department, and with one LV on each host I can distribute
> the network bandwidth too. I would also like to take a snapshot of
> each LV before backing it up.
>
> I have been reading more about the CLVM + Pacemaker options. I can see
> that it is possible to have the same VG activated on multiple hosts
> for a GFS2 filesystem - in that case, the same PVs, VG and LV are
> activated on all hosts.

OK! It sounds reasonable.

>
> In my case, we would have the same PVs and VG activated on both
> hosts, but LV1 on Host01 and LV2 on Host02. I plan to use ext4 or XFS
> filesystems.
>
> Is there some possibility that it would work?

As I said in the last mail, the new resource agent [4] will probably
work for you, but I haven't tested this case yet. It's easy to try - the
RA is just a shell script. Copy LVM-activate into
/usr/lib/ocf/resource.d/heartbeat/ (assuming the resource-agents package
is installed), then configure "clvm + LVM-activate" for Pacemaker [5].
Please report back if it doesn't work for you.
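A rough sketch of what that configuration might look like with pcs (the
parameter names below are taken from the current draft of the agent and
may still change, so treat them as illustrative; the resource and group
names are made up):

# cp LVM-activate /usr/lib/ocf/resource.d/heartbeat/LVM-activate
# chmod +x /usr/lib/ocf/resource.d/heartbeat/LVM-activate
# pcs resource create lv01_activate ocf:heartbeat:LVM-activate \
    vgname=vg01 lvname=lv01 activation_mode=exclusive vg_access_mode=clvmd \
    --group grp01
# pcs constraint order start clvmd-clone then grp01

(and a second LVM-activate resource for lv02 in the other group)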

The LVM-activate RA is still work in progress. We are considering
whether to merge it into the old LVM RA, so it may change at any time.

[5] https://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_clvm_config.html


Eric



* Re: [linux-lvm] Shared VG, Separate LVs
  2017-10-16  3:06                           ` Eric Ren
@ 2017-11-03  6:38                             ` Indivar Nair
  2017-11-14  4:52                               ` Eric Ren
  0 siblings, 1 reply; 9+ messages in thread
From: Indivar Nair @ 2017-11-03  6:38 UTC (permalink / raw)
  To: Eric Ren; +Cc: LVM general discussion and development


Hi Eric, All,

Thanks for the input. I have got it working.

Here is what I did -
-------------------------------------------------------------------------------------------------------------------------------------------------------
Cluster Setup:
2 Nodes with CentOS 7.x: clstr01-nd01, clstr01-nd02
Common storage array between both nodes (8 shared volumes, presented as
/dev/mapper/mpatha to /dev/mapper/mpathh)
2-port NICs, bonded (bond0), in each node

Resource group xxx_grp (nd01 preferred) -
Mount Point: /clstr01-xxx
Cluster IP: 172.16.0.101/24

Resource group yyy_grp (nd02 preferred) -
Mount Point: /clstr01-yyy
Cluster IP: 172.16.0.102/24


On both nodes:
--------------
Edit /etc/lvm/lvm.conf, and configure 'filter' and 'global_filter'
parameters to scan only the required (local and shared) devices.
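For example, the filter could look something like this (illustrative
only - the exact patterns depend on the local boot disk and the
multipath device names on your system):

    filter = [ "a|^/dev/mapper/mpath[a-h]$|", "a|^/dev/sda|", "r|.*|" ]
    global_filter = [ "a|^/dev/mapper/mpath[a-h]$|", "a|^/dev/sda|", "r|.*|" ]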

Then run -
# /sbin/lvmconf --enable-cluster
Rebuild initramfs -
# mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img-orig
# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)

Reboot both nodes.
--------------



After rebooting both nodes, run the following commands on any one node:
--------------
# pcs cluster start --all
# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s
on-fail=fence clone interleave=true ordered=true
# pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s
on-fail=fence clone interleave=true ordered=true
# pcs constraint order start dlm-clone then clvmd-clone
# pcs constraint colocation add clvmd-clone with dlm-clone


# pvcreate /dev/mapper/mpath{a,b,c,d,e,f,g,h}
# vgcreate -Ay -cy clstr_vg01 /dev/mapper/mpath{a,b,c,d,e,f,g,h}
# lvcreate -L 100T -n lv01 clstr_vg01
# mkfs.xfs /dev/clstr_vg01/lv01
# lvcreate -L 100T -n lv02 clstr_vg01
# mkfs.xfs /dev/clstr_vg01/lv02


# pcs resource create xxx_mount ocf:heartbeat:Filesystem
device=/dev/clstr_vg01/lv01 directory=/clstr01-xxx fstype=xfs --group
xxx_grp --disabled

# pcs resource create xxx_ip_01 ocf:heartbeat:IPaddr2 ip=172.16.0.101
cidr_netmask=24 nic=bond0:0 op monitor interval=30s --group xxx_grp
--disabled

# pcs constraint location xxx_grp prefers clstr01-nd01=50
# pcs constraint order start clvmd-clone then xxx_grp

# pcs resource enable xxx_mount
# pcs resource enable xxx_ip_01


# pcs resource create yyy_mount ocf:heartbeat:Filesystem
device=/dev/clstr_vg01/lv02 directory=/clstr01-yyy fstype=xfs --group
yyy_grp --disabled

# pcs resource create yyy_ip_01 ocf:heartbeat:IPaddr2 ip=172.16.0.102
cidr_netmask=24 nic=bond0:1 op monitor interval=30s --group yyy_grp
--disabled

# pcs constraint location yyy_grp prefers clstr01-nd02=50
# pcs constraint order start clvmd-clone then yyy_grp

# pcs resource enable yyy_mount
# pcs resource enable yyy_ip_01
--------------


# pcs resource show
--------------
-------------------------------------------------------------------------------------------------------------------------------------------------------


Regards,


Indivar Nair



* Re: [linux-lvm] Shared VG, Separate LVs
  2017-11-03  6:38                             ` Indivar Nair
@ 2017-11-14  4:52                               ` Eric Ren
  2017-11-22  5:49                                 ` Indivar Nair
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Ren @ 2017-11-14  4:52 UTC (permalink / raw)
  To: LVM general discussion and development, Indivar Nair


I had a look at your setup, and I have one question:

Did you check whether your active-passive HA stack always works
correctly and stably when you put one node into standby/offline state?

I also noticed that you didn't configure an LVM resource agent to manage
the VG's (de)activation. I'm not sure it will always work as expected,
so please do some extra checking of the corner cases :)
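For example, a quick way to exercise the failover path (this is from
memory, so double-check the exact pcs syntax on your version):

# pcs cluster standby clstr01-nd01     <- xxx_grp should move to nd02
# pcs status                           <- verify both groups now run on nd02
# pcs cluster unstandby clstr01-nd01   <- xxx_grp should fail back per its location constraint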

Eric





* Re: [linux-lvm] Shared VG, Separate LVs
  2017-11-14  4:52                               ` Eric Ren
@ 2017-11-22  5:49                                 ` Indivar Nair
  2017-11-23 13:46                                   ` Eric Ren
  0 siblings, 1 reply; 9+ messages in thread
From: Indivar Nair @ 2017-11-22  5:49 UTC (permalink / raw)
  To: Eric Ren; +Cc: LVM general discussion and development


Hi Eric,

Answering your queries -


*"Did you check if your active-passive model HA stack can always work
correctly and stably byputting one node into offline state?"*

           Yes, it works perfectly while failing over and failing back.



*"I noticed you didn't configure LVM resource agent to manage your VG's
(de)activation task,not sure if it can always work as expect, so have more
exceptional checking :)"*

             Strangely the Pacemaker active-passive configuration example
shows VG controlled by Pacemaker, while the active-active one does not. I
have taken the active-active configuration for Pacemaker and created 2 LVs,
then instead of formatting it using the GFS2 clustered filesystem, I used
normal XFS and made sure that it is mounted only on one node at a time.
(lv01 on node 2, lv02 on node2)


https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/global_file_system_2/ch-clustsetup-gfs2

             I can see the clustered VG and LVs as soon ocf:heartbeat:clvm
is started.

Is there anything I am missing here?

Regards,


Indivar Nair



* Re: [linux-lvm] Shared VG, Separate LVs
  2017-11-22  5:49                                 ` Indivar Nair
@ 2017-11-23 13:46                                   ` Eric Ren
  2017-11-23 16:14                                     ` Indivar Nair
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Ren @ 2017-11-23 13:46 UTC (permalink / raw)
  To: Indivar Nair; +Cc: LVM general discussion and development


Hi,

> "I also noticed that you didn't configure an LVM resource agent to
> manage the VG's (de)activation. I'm not sure it will always work as
> expected, so please do some extra checking of the corner cases :)"
>
> Strangely, the Pacemaker active-passive configuration example shows
> the VG controlled by Pacemaker, while the active-active one does not.
> I took the active-active configuration for Pacemaker and created 2
> LVs, then, instead of formatting them with the GFS2 clustered
> filesystem, I used plain XFS and made sure each LV is mounted on only
> one node at a time (lv01 on node 1, lv02 on node 2).
>
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/global_file_system_2/ch-clustsetup-gfs2
>
> I can see the clustered VG and LVs as soon as ocf:heartbeat:clvm is
> started.
>
> Is there anything I am missing here?

Good. "clvm" will activate all VGs by default. If you have more than one 
VG in your cluster,  you may want to
activate/deactivate one VG for each group of "vg" and "xfs", then you 
may need to look at LVM for each VG:

https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/LVM
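A minimal sketch of what that could look like with pcs (the VG, resource
and group names are hypothetical; check "pcs resource describe LVM" for
the exact parameters on your version):

# pcs resource create xxx_vg ocf:heartbeat:LVM volgrpname=vg_xxx exclusive=true --group xxx_grp
# pcs resource create xxx_mount ocf:heartbeat:Filesystem device=/dev/vg_xxx/lv01 directory=/clstr01-xxx fstype=xfs --group xxx_grp

(create the VG resource before adding the Filesystem resource to the
group, since resources in a group start in the order they were added)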

Eric


* Re: [linux-lvm] Shared VG, Separate LVs
  2017-11-23 13:46                                   ` Eric Ren
@ 2017-11-23 16:14                                     ` Indivar Nair
  0 siblings, 0 replies; 9+ messages in thread
From: Indivar Nair @ 2017-11-23 16:14 UTC (permalink / raw)
  To: Eric Ren; +Cc: LVM general discussion and development


Sure. Will keep that in mind.
Thanks a lot, Eric.

Regards,


Indivar Nair


