From: kAja Ziegler <ziegleka@gmail.com>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Move LV with GFS to new LUN (pvmove) in the cluster
Date: Tue, 29 May 2018 09:55:29 +0200	[thread overview]
Message-ID: <CAMuNeAu1OnQC0KZf=YTVPYy0QaX43=JEXa08GPx2rod7qmD61A@mail.gmail.com>
In-Reply-To: <CAE7pJ3B-SkTihF=fvXvwW7sBZ=y8O4saGmLj+odY0ZeRx_5fvQ@mail.gmail.com>


On Thu, May 24, 2018 at 10:13 AM, emmanuel segura <emi2fast@gmail.com>
wrote:

> I used this procedure to achieve what you need to do.
>
> 1: activate cmirror on every cluster node
> 2: lvconvert -m 1 vg00/lvdata /dev/mapper/mpath1 --corelog  # where mpath1
> is the new LUN
>
> When the lvdata LV is in sync, you can then detach the old LUN with
>
> lvconvert -m 0 vg00/lvdata /dev/mapper/mpath0
>
>
> 2018-05-23 14:31 GMT+02:00 kAja Ziegler <ziegleka@gmail.com>:
>
>> Hi all,
>>
>>  I want to ask whether it is possible and safe to move, online, a clustered
>> LV with GFS from one PV (a multipathed LUN on the old storage) to another
>> one (a multipathed LUN on the new storage)?
>>
>> I found these articles in Red Hat knowledgebase:
>>
>> - Can I perform a pvmove on a clustered logical volume? -
>> https://access.redhat.com/solutions/39894
>> - How to migrate SAN LUNs which has Clustered LVM configured on it? -
>> https://access.redhat.com/solutions/466533
>>
>> According to the mentioned articles it can be done; the only requirement is
>> to install and run the cmirror service. Should I expect any problems or
>> other prerequisites?
>>
>>
>> My clustered environment:
>>
>> - 8 nodes - CentOS 6.9
>> - LVM version:     2.02.143(2)-RHEL6 (2016-12-13)
>>   Library version: 1.02.117-RHEL6 (2016-12-13)
>>   Driver version:  4.33.1
>> - 7 clustered VGs overall
>> - 1 LV with GFS mounted on all nodes
>>
>>
>> - 1 clustered VG with 1 PV and 1 LV on which the GFS resides:
>>
>> [root@...]# pvdisplay /dev/mapper/35001b4d01b1da512
>>   --- Physical volume ---
>>   PV Name               /dev/mapper/35001b4d01b1da512
>>   VG Name               vg_1
>>   PV Size               4.55 TiB / not usable 2.00 MiB
>>   Allocatable           yes
>>   PE Size               4.00 MiB
>>   Total PE              1192092
>>   Free PE               1115292
>>   Allocated PE          76800
>>   PV UUID               jH1ubM-ElJv-632D-NG8x-jzgJ-mwtA-pxxL90
>>
>> [root@...]# lvdisplay vg_1/lv_gfs
>>   --- Logical volume ---
>>   LV Path                /dev/vg_1/lv_gfs
>>   LV Name                lv_gfs
>>   VG Name                vg_1
>>   LV UUID                OsJ8hM-sH9k-KNs1-B1UD-3qe2-6vja-hLsrYY
>>   LV Write Access        read/write
>>   LV Creation host, time ,
>>   LV Status              available
>>   # open                 1
>>   LV Size                300.00 GiB
>>   Current LE             76800
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>   - currently set to     256
>>   Block device           253:418
>>
>> [root@...]# vgdisplay vg_1
>>   --- Volume group ---
>>   VG Name               vg_1
>>   System ID
>>   Format                lvm2
>>   Metadata Areas        1
>>   Metadata Sequence No  3898
>>   VG Access             read/write
>>   VG Status             resizable
>>   Clustered             yes
>>   Shared                no
>>   MAX LV                0
>>   Cur LV                1
>>   Open LV               1
>>   Max PV                0
>>   Cur PV                1
>>   Act PV                1
>>   VG Size               4.55 TiB
>>   PE Size               4.00 MiB
>>   Total PE              1192092
>>   Alloc PE / Size       76800 / 300.00 GiB
>>   Free  PE / Size       1115292 / 4.25 TiB
>>   VG UUID               PtMo7F-XIbC-YSA0-rCQQ-R1oE-g8B7-PiAeIR
>>
>>
>> - I/O activity on the PV (LUN) is very low - per-node average from iostat
>> over 1 minute: 2.5 tps, 20.03 Blk_read/s and 0 Blk_wrtn/s.
>>
>>
>> Thank you for your opinions and experience.
>>
>> Have a great day and with best regards,
>> --
>> Karel Ziegler
>>
>>
>>
>
>
>
> --
>   .~.
>   /V\
>  //  \\
> /(   )\
> ^`~'^
>
>


Hi Emmanuel and the others,

 so is it better to perform lvconvert or pvmove (if it is supported) on a
clustered logical volume? I have sketched my understanding of both approaches
below.
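
To be concrete, here is a minimal, untested sketch of both approaches for this
VG. /dev/mapper/<new_lun> is only a placeholder for the new multipathed device
(its real name is not in this thread), and the sketch assumes the new LUN has
already been prepared with pvcreate and added to vg_1 with vgextend, with
cmirror running on all nodes:

  # Option A: pvmove, as in the Red Hat articles I mentioned
  pvmove /dev/mapper/35001b4d01b1da512 /dev/mapper/<new_lun>

  # Option B: temporary mirror, as in the procedure you describe above
  lvconvert -m 1 --corelog vg_1/lv_gfs /dev/mapper/<new_lun>
  lvs -a -o name,copy_percent vg_1    # wait until the copy reaches 100%
  lvconvert -m 0 vg_1/lv_gfs /dev/mapper/35001b4d01b1da512
  vgreduce vg_1 /dev/mapper/35001b4d01b1da512   # optionally drop the old PV

The difference, as I understand it, is that pvmove moves the extents in place,
while the lvconvert route first builds a complete mirror leg on the new LUN
and then drops the leg on the old one.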

With best regards,
-- 
Karel Ziegler

