From: Indivar Nair
Date: Fri, 3 Nov 2017 12:08:41 +0530
Subject: Re: [linux-lvm] Shared VG, Separate LVs
To: Eric Ren
Cc: LVM general discussion and development

Hi Eric, All,

Thanks for the input. I have got it working.

Here is what I did -
--------------------------------------------------------------------------------
Cluster Setup:
2 nodes with CentOS 7.x: clstr01-nd01, clstr01-nd02
Common storage array shared between both nodes (8 shared volumes, presented
as /dev/mapper/mpatha to /dev/mapper/mpathh)
2-port NICs, bonded (bond0), in each node

Resource group grp_xxx (nd01 preferred) -
  Mount Point: /clstr01-xxx
  Cluster IP:  172.16.0.101/24

Resource group grp_yyy (nd02 preferred) -
  Mount Point: /clstr01-yyy
  Cluster IP:  172.16.0.102/24

On both nodes:
--------------
Edit /etc/lvm/lvm.conf and set the 'filter' and 'global_filter' parameters so
that only the required (local and shared) devices are scanned.

Then enable clustered locking -
# /sbin/lvmconf --enable-cluster

Rebuild the initramfs -
# mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img-orig
# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)

Reboot both nodes.
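For reference, the two lvm.conf lines mentioned above could look something
like this. This is a minimal sketch only - it assumes the local root PV is on
/dev/sda2 and the shared disks are mpatha to mpathh, so adjust the patterns to
your own hardware. Both settings go in the devices { } section of lvm.conf:

    filter = [ "a|^/dev/mapper/mpath[a-h]$|", "a|^/dev/sda2$|", "r|.*|" ]
    global_filter = [ "a|^/dev/mapper/mpath[a-h]$|", "a|^/dev/sda2$|", "r|.*|" ]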
--------------
After rebooting both nodes, run the following commands on any one node:
--------------
# pcs cluster start --all

# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
# pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
# pcs constraint order start dlm-clone then clvmd-clone
# pcs constraint colocation add clvmd-clone with dlm-clone

# pvcreate /dev/mapper/mpath{a,b,c,d,e,f,g,h}
# vgcreate -Ay -cy clstr_vg01 /dev/mapper/mpath{a,b,c,d,e,f,g,h}
# lvcreate -L 100T -n lv01 clstr_vg01
# mkfs.xfs /dev/clstr_vg01/lv01
# lvcreate -L 100T -n lv02 clstr_vg01
# mkfs.xfs /dev/clstr_vg01/lv02

# pcs resource create xxx_mount ocf:heartbeat:Filesystem device=/dev/clstr_vg01/lv01 directory=/clstr01-xxx fstype=xfs --group xxx_grp --disabled
# pcs resource create xxx_ip_01 ocf:heartbeat:IPaddr2 ip=172.16.0.101 cidr_netmask=24 nic=bond0:0 op monitor interval=30s --group xxx_grp --disabled
# pcs constraint location xxx_grp prefers clstr01-nd01=50
# pcs constraint order start clvmd-clone then xxx_grp
# pcs resource enable xxx_mount
# pcs resource enable xxx_ip_01

# pcs resource create yyy_mount ocf:heartbeat:Filesystem device=/dev/clstr_vg01/lv02 directory=/clstr01-yyy fstype=xfs --group yyy_grp --disabled
# pcs resource create yyy_ip_01 ocf:heartbeat:IPaddr2 ip=172.16.0.102 cidr_netmask=24 nic=bond0:1 op monitor interval=30s --group yyy_grp --disabled
# pcs constraint location yyy_grp prefers clstr01-nd02=50
# pcs constraint order start clvmd-clone then yyy_grp
# pcs resource enable yyy_mount
# pcs resource enable yyy_ip_01
--------------
Finally, check that everything is running -
# pcs resource show
--------------
--------------------------------------------------------------------------------

Regards,

Indivar Nair

On Mon, Oct 16, 2017 at 8:36 AM, Eric Ren wrote:

> Hi,
>
> On 10/13/2017 06:40 PM, Indivar Nair wrote:
>
>> Thanks Eric,
>>
>> I want to keep a single VG so that I can get the bandwidth (LVM striping)
>> of all the disks (PVs), PLUS the flexibility to adjust the space
>> allocation between the two LVs. Each LV will be used by a different
>> department. With one LV on each host, I can distribute the network
>> bandwidth too.
>> I would also like to take snapshots of each LV before backing up.
>>
>> I have been reading more about the CLVM+Pacemaker options.
>> I can see that it is possible to have the same VG activated on multiple
>> hosts for a GFS2 filesystem. In that case, the same PVs, VG and LV are
>> activated on all hosts.
>
> OK! It sounds reasonable.
>
>> In my case, we will have the same PVs and VG activated on both hosts, but
>> LV1 on Host01 and LV2 on Host02. I plan to use ext4 or XFS filesystems.
>>
>> Is there some way to make this work?
>
> As said in the last mail, the new resource agent [4] will probably work
> for you, but I haven't tested this case yet. It's easy to try - the RA is
> just a shell script, so you can copy LVM-activate into
> /usr/lib/ocf/resource.d/heartbeat/ (assuming you've installed the
> resource-agents package) and then configure "clvm + LVM-activate" for
> Pacemaker [5]. Please report back if it doesn't work for you.
>
> The LVM-activate RA is a work in progress. We are considering whether to
> merge it into the old LVM RA, so it may change at any time.
>
> [5] https://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_clvm_config.html
>
>> [1] https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/clvm
>> [2] https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/LVM
>> [3] https://www.redhat.com/archives/linux-lvm/2017-January/msg00025.html
>> [4] https://github.com/ClusterLabs/resource-agents/pull/1040
>
> Eric
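P.S. For anyone who wants to try the LVM-activate route Eric describes above,
a rough, untested sketch of the Pacemaker side might look like the commands
below. The agent was still WIP at the time, so the parameter names (vgname,
lvname, vg_access_mode, activation_mode) are assumptions taken from the pull
request [4] and may differ in whatever finally gets merged.

# Copy the WIP agent from the resource-agents source tree into place:
# cp LVM-activate /usr/lib/ocf/resource.d/heartbeat/
# Activate lv01 exclusively on whichever node runs xxx_grp (hypothetical):
# pcs resource create xxx_lv01 ocf:heartbeat:LVM-activate vgname=clstr_vg01 lvname=lv01 vg_access_mode=clvmd activation_mode=exclusive --group xxx_grp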