From: kAja Ziegler <>
Subject: [linux-lvm] Move LV with GFS to new LUN (pvmove) in the cluster
Date: Wed, 23 May 2018 14:31:32 +0200	[thread overview]
Message-ID: <> (raw)


Hi all,

 I would like to ask whether it is possible and safe to move a clustered LV
with GFS online from one PV (a multipathed LUN on the old storage) to
another (a multipathed LUN on the new storage).

I found these articles in Red Hat knowledgebase:

- Can I perform a pvmove on a clustered logical volume? -
- How to migrate SAN LUNs which has Clustered LVM configured on it? -

According to these articles it can be done; it is only necessary to
install and run the cmirror service first. Should I expect any problems or other
complications?
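The sequence the articles describe can be sketched roughly like this. NEW_WWID is a placeholder for the new LUN's multipath WWID, and the `run` wrapper only echoes each command, so this is a dry-run outline under those assumptions, not a tested procedure:

```shell
#!/bin/sh
# Dry-run sketch of an online clustered pvmove.
# NEW_WWID is a hypothetical placeholder for the new LUN.
OLD_PV=/dev/mapper/35001b4d01b1da512
NEW_PV=/dev/mapper/NEW_WWID

run() { echo "+ $*"; }   # echoes instead of executing; swap in "$@" to run

# 1. On every node: the cluster mirror daemon must be running,
#    since a pvmove on a clustered VG relies on cmirror.
run service cmirrord start

# 2. Label the new LUN as a PV and add it to the clustered VG.
run pvcreate "$NEW_PV"
run vgextend vg_1 "$NEW_PV"

# 3. Move all extents off the old PV while GFS stays mounted.
run pvmove "$OLD_PV" "$NEW_PV"

# 4. Once pvmove finishes, drop the old PV from the VG.
run vgreduce vg_1 "$OLD_PV"
run pvremove "$OLD_PV"
```

The vgextend/pvmove/vgreduce steps are standard lvm2; only the cmirrord requirement is specific to the clustered case.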

My clustered environment:

- 8 nodes - CentOS 6.9
- LVM version:     2.02.143(2)-RHEL6 (2016-12-13)
  Library version: 1.02.117-RHEL6 (2016-12-13)
  Driver version:  4.33.1
- 7 clustered VGs overall
- 1 LV with GFS mounted on all nodes

- 1 clustered VG with 1 PV and 1 LV on which GFS resides:

[root@...]# pvdisplay /dev/mapper/35001b4d01b1da512
  --- Physical volume ---
  PV Name               /dev/mapper/35001b4d01b1da512
  VG Name               vg_1
  PV Size               4.55 TiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1192092
  Free PE               1115292
  Allocated PE          76800
  PV UUID               jH1ubM-ElJv-632D-NG8x-jzgJ-mwtA-pxxL90

[root@...]# lvdisplay vg_1/lv_gfs
  --- Logical volume ---
  LV Path                /dev/vg_1/lv_gfs
  LV Name                lv_gfs
  VG Name                vg_1
  LV UUID                OsJ8hM-sH9k-KNs1-B1UD-3qe2-6vja-hLsrYY
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                300.00 GiB
  Current LE             76800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:418

[root@...]# vgdisplay vg_1
  --- Volume group ---
  VG Name               vg_1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3898
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.55 TiB
  PE Size               4.00 MiB
  Total PE              1192092
  Alloc PE / Size       76800 / 300.00 GiB
  Free  PE / Size       1115292 / 4.25 TiB
  VG UUID               PtMo7F-XIbC-YSA0-rCQQ-R1oE-g8B7-PiAeIR

- IO activity on the PV (LUN) is very low - per iostat, on average per
node: 2.5 tps, 20.03 Blk_read/s and 0 Blk_wrtn/s over 1 minute.
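For monitoring during the move, pvmove can report progress periodically and an in-progress move can be aborted (already-moved extents then stay on the destination unless the move was started atomically). A dry-run sketch using the same echo wrapper as above:

```shell
#!/bin/sh
run() { echo "+ $*"; }   # echoes instead of executing

# Report copy progress every 10 seconds.
run pvmove -i 10 /dev/mapper/35001b4d01b1da512

# The temporary pvmove mirror appears in the full LV listing.
run lvs -a -o +devices vg_1

# Abort an in-progress move if something goes wrong.
run pvmove --abort
```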

Thank you for your opinions and experience.

Have a great day and with best regards,
Karel Ziegler


Thread overview: 3+ messages
2018-05-23 12:31 kAja Ziegler [this message]
2018-05-24  8:13 ` [linux-lvm] Move LV with GFS to new LUN (pvmove) in the cluster emmanuel segura
2018-05-29  7:55   ` kAja Ziegler
