From: kAja Ziegler
Date: Tue, 29 May 2018 09:55:29 +0200
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Move LV with GFS to new LUN (pvmove) in the cluster

On Thu, May 24, 2018 at 10:13 AM, emmanuel segura wrote:

> I used this procedure to achieve what you need to do.
>
> 1: activate cmirror on every cluster node
> 2: lvconvert -m 1 vg00/lvdata /dev/mapper/mpath1 --corelog  # where
>    mpath1 is the new LUN
>
> When the lvdata LV is in sync, you can detach the old LUN with:
>
> lvconvert -m 0 vg00/lvdata /dev/mapper/mpath0
>
>
> 2018-05-23 14:31 GMT+02:00 kAja Ziegler:
>
>> Hi all,
>>
>> I want to ask whether it is possible and safe to move a clustered LV
>> with GFS online from one PV (a multipathed LUN on the old storage) to
>> another one (a multipathed LUN on the new storage)?
>>
>> I found these articles in the Red Hat knowledgebase:
>>
>> - Can I perform a pvmove on a clustered logical volume? -
>>   https://access.redhat.com/solutions/39894
>> - How to migrate SAN LUNs which have clustered LVM configured on them? -
>>   https://access.redhat.com/solutions/466533
>>
>> According to these articles it can be done; it is only necessary to
>> install and run the cmirror service. Should I expect any problems or
>> other prerequisites?
>>
>>
>> My clustered environment:
>>
>> - 8 nodes - CentOS 6.9
>> - LVM version:     2.02.143(2)-RHEL6 (2016-12-13)
>>   Library version: 1.02.117-RHEL6 (2016-12-13)
>>   Driver version:  4.33.1
>> - 7 clustered VGs overall
>> - 1 LV with GFS mounted on all nodes
>>
>>
>> - 1 clustered VG with 1 PV and 1 LV carrying the GFS:
>>
>> [root@...]# pvdisplay /dev/mapper/35001b4d01b1da512
>>   --- Physical volume ---
>>   PV Name               /dev/mapper/35001b4d01b1da512
>>   VG Name               vg_1
>>   PV Size               4.55 TiB / not usable 2.00 MiB
>>   Allocatable           yes
>>   PE Size               4.00 MiB
>>   Total PE              1192092
>>   Free PE               1115292
>>   Allocated PE          76800
>>   PV UUID               jH1ubM-ElJv-632D-NG8x-jzgJ-mwtA-pxxL90
>>
>> [root@...]# lvdisplay vg_1/lv_gfs
>>   --- Logical volume ---
>>   LV Path                /dev/vg_1/lv_gfs
>>   LV Name                lv_gfs
>>   VG Name                vg_1
>>   LV UUID                OsJ8hM-sH9k-KNs1-B1UD-3qe2-6vja-hLsrYY
>>   LV Write Access        read/write
>>   LV Creation host, time ,
>>   LV Status              available
>>   # open                 1
>>   LV Size                300.00 GiB
>>   Current LE             76800
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>   - currently set to     256
>>   Block device           253:418
>>
>> [root@...]# vgdisplay vg_1
>>   --- Volume group ---
>>   VG Name               vg_1
>>   System ID
>>   Format                lvm2
>>   Metadata Areas        1
>>   Metadata Sequence No  3898
>>   VG Access             read/write
>>   VG Status             resizable
>>   Clustered             yes
>>   Shared                no
>>   MAX LV                0
>>   Cur LV                1
>>   Open LV               1
>>   Max PV                0
>>   Cur PV                1
>>   Act PV                1
>>   VG Size               4.55 TiB
>>   PE Size               4.00 MiB
>>   Total PE              1192092
>>   Alloc PE / Size       76800 / 300.00 GiB
>>   Free PE / Size        1115292 / 4.25 TiB
>>   VG UUID               PtMo7F-XIbC-YSA0-rCQQ-R1oE-g8B7-PiAeIR
>>
>>
>> - IO activity on the PV (LUN) is very low - from iostat, averaged per
>>   node: 2.5 tps, 20.03 Blk_read/s and 0 Blk_wrtn/s over 1 minute.
>>
>>
>> Thank you for your opinions and experience.
>>
>> Have a great day and with best regards,
>> --
>> Karel Ziegler

Hi Emmanuel and the others,

 so is it better to perform lvconvert or pvmove (if it is supported) on a
clustered logical volume?

With best regards,
--
Karel Ziegler

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
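[Editor's note] Emmanuel's mirror-based migration can be sketched end to end as a shell script. This is a hypothetical sketch only: vg00/lvdata and the mpath device names are the placeholders from his example, the pvcreate/vgextend/vgreduce/pvremove steps around his two lvconvert commands are the usual way to bring the new LUN into the VG and retire the old one, and the run() wrapper merely echoes each command while DRYRUN=1 (the default), so the plan can be reviewed without touching real devices:

```shell
# Hypothetical sketch of the cmirror-based migration described above.
# VG/LV and device names come from the example; substitute your own.
# run() only echoes the command while DRYRUN=1 (the default).
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

VG=vg00; LV=lvdata
OLD=/dev/mapper/mpath0   # LUN on the old storage
NEW=/dev/mapper/mpath1   # LUN on the new storage

# cmirror must already be running on every cluster node.
run pvcreate "$NEW"
run vgextend "$VG" "$NEW"

# Mirror the LV onto the new LUN, keeping the mirror log in memory:
run lvconvert -m 1 "$VG/$LV" "$NEW" --corelog

# Wait until the copy is complete, e.g. poll until it reports 100.00:
#   lvs --noheadings -o copy_percent vg00/lvdata

# Then drop the mirror leg on the old LUN and retire it:
run lvconvert -m 0 "$VG/$LV" "$OLD"
run vgreduce "$VG" "$OLD"
run pvremove "$OLD"
```

Set DRYRUN=0 only after checking the echoed plan against your own VG, LV, and multipath device names.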