* [linux-lvm] Snapshot of RAID 6 Volume
From: Adam Puleo @ 2019-07-04 18:52 UTC
  To: linux-lvm

Hello,

I’m trying to create an LVM snapshot of a RAID 6 volume. From the error messages, it appears that I do not have enough space in the volume group and/or on the physical volumes. How do I calculate the amount of space required?

Thank you,
-Adam

[root@nas scripts]# lvcreate --size 1G --snapshot --name lv_data_snapshot /dev/vg_data/lv_data 
  device-mapper: resume ioctl on  (253:24) failed: Invalid argument
  Unable to resume vg_data-lv_data_snapshot (253:24).
  device-mapper: resume ioctl on  (253:5) failed: Invalid argument
  Unable to resume vg_data-lv_data (253:5).
  Problem reactivating logical volume vg_data/lv_data.
  Aborting. Manual intervention required.
  Releasing activation in critical section.
  libdevmapper exiting with 1 device(s) still suspended.
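
(For reference, my understanding is that the "manual intervention" here means clearing the device-mapper state by hand, using the device names from the errors above; I'd double-check the state before running anything like this:)

  # Find devices left suspended ("State: SUSPENDED"):
  dmsetup info | grep -E '^Name|^State'
  # Drop the inactive (failed) table, then resume with the existing live table:
  dmsetup clear vg_data-lv_data
  dmsetup resume vg_data-lv_data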

/var/log/messages:
Jul  4 10:54:56 nas kernel: md/raid:mdX: not clean -- starting background reconstruction
Jul  4 10:54:56 nas kernel: md/raid:mdX: device dm-10 operational as raid disk 0
Jul  4 10:54:56 nas kernel: md/raid:mdX: device dm-12 operational as raid disk 1
Jul  4 10:54:56 nas kernel: md/raid:mdX: device dm-16 operational as raid disk 2
Jul  4 10:54:56 nas kernel: md/raid:mdX: device dm-18 operational as raid disk 3
Jul  4 10:54:56 nas kernel: md/raid:mdX: device dm-22 operational as raid disk 4
Jul  4 10:54:56 nas kernel: md/raid:mdX: raid level 6 active with 5 out of 5 devices, algorithm 8
Jul  4 10:54:56 nas lvm[16521]: No longer monitoring RAID device vg_data-lv_data for events.
Jul  4 10:54:57 nas kernel: device-mapper: table: 253:24: dm-20 too small for target: start=0, len=5788164096, dev_size=5788139520
Jul  4 10:54:57 nas kernel: device-mapper: table: 253:5: dm-20 too small for target: start=0, len=5788164096, dev_size=5788139520
Jul  4 10:54:57 nas kernel: md: resync of RAID array mdX
Jul  4 10:54:57 nas kernel: md: mdX: resync done.
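
If I'm reading the "too small for target" lines right, the table being loaded is 24576 sectors longer than the device it maps:

  # Requested table length minus actual device size, from the kernel log:
  echo $(( 5788164096 - 5788139520 ))   # 24576 sectors (12 MiB)
  # RAID 6 across 5 PVs stripes data over 3 legs, so per leg:
  echo $(( 24576 / 3 ))                 # 8192 sectors = 4 MiB, exactly one PE

So the new table is exactly one 4 MiB extent per data leg larger than the existing device, which doesn't look like a free-space problem at all.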

[root@nas backups]# vgdisplay -v vg_data
  --- Volume group ---
  VG Name               vg_data
  System ID             
  Format                lvm2
  Metadata Areas        5
  Metadata Sequence No  65
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                5
  Act PV                5
  VG Size               <4.55 TiB
  PE Size               4.00 MiB
  Total PE              1192139
  Alloc PE / Size       1177610 / 4.49 TiB
  Free  PE / Size       14529 / 56.75 GiB
  VG UUID               zvZJna-UMgX-oWOM-LLy8-En5o-q4sM-PpQA2Q
   
  --- Logical volume ---
  LV Path                /dev/vg_data/lv_data
  LV Name                lv_data
  VG Name                vg_data
  LV UUID                KjiDLl-Dk67-LarA-TU3W-UPEK-Z4kH-iKgQnj
  LV Write Access        read/write
  LV Creation host, time nas.local, 2019-06-27 23:24:46 -0700
  LV Status              available
  # open                 1
  LV Size                <2.70 TiB
  Current LE             706563
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1280
  Block device           253:5
   
  --- Physical volumes ---
  PV Name               /dev/sdc     
  PV UUID               AEYk3P-Ka2X-qCHh-dXPx-R0BZ-lKkI-SSNtIP
  PV Status             allocatable
  Total PE / Free PE    238467 / 2945
   
  PV Name               /dev/sdb1     
  PV UUID               sTU60I-uJdh-ylgC-Cf4l-JWte-34CN-jpGDq3
  PV Status             allocatable
  Total PE / Free PE    238418 / 2896
   
  PV Name               /dev/sda1     
  PV UUID               a0Ick4-1nTm-VUJL-Vosl-OEbW-Btr8-I3pv2M
  PV Status             allocatable
  Total PE / Free PE    238418 / 2896
   
  PV Name               /dev/sde1     
  PV UUID               dgd702-ginH-5vAG-IPUJ-H0fm-6143-Jq8Nco
  PV Status             allocatable
  Total PE / Free PE    238418 / 2896
   
  PV Name               /dev/sdd1     
  PV UUID               gEpYRx-wJxa-VVMl-fNao-hsfF-QXKp-Z1dpBC
  PV Status             allocatable
  Total PE / Free PE    238418 / 2896
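
From the numbers above, a 1 GiB snapshot should need only about 256 of the 14529 free extents (1024 MiB / 4 MiB per PE), plus a little COW metadata, so the VG has plenty of room. Assuming --test behaves like the real command, a dry run should show whether the allocation itself succeeds:

  # Dry-run the snapshot creation without updating metadata:
  lvcreate --test --size 1G --snapshot --name lv_data_snapshot vg_data/lv_data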

[root@nas backups]# dmsetup status
vg_data-lv_data_rimage_4: 0 1929388032 linear 
cl_nas-opt: 0 209715200 linear 
cl_nas-opt: 209715200 8192 linear 
vg_pc_backups-lv_pc_backups_rimage_1: 0 5583462400 linear 
vg_data-lv_data_rimage_3: 0 8192 linear 
vg_data-lv_data_rimage_3: 8192 1929379840 linear 
vg_pc_backups-lv_pc_backups_rimage_0: 0 5583462400 linear 
vg_data-lv_data_rimage_2: 0 8192 linear 
vg_data-lv_data_rimage_2: 8192 1929379840 linear 
vg_data-lv_data_rimage_1: 0 8192 linear 
vg_data-lv_data_rimage_1: 8192 1929379840 linear 
vg_backups-lv_backups: 0 9767911424 linear 
cl_nas-home: 0 154787840 linear 
vg_data-lv_data_rimage_0: 0 8192 linear 
vg_data-lv_data_rimage_0: 8192 1929379840 linear 
cl_nas-swap: 0 16515072 linear 
cl_nas-root: 0 104857600 linear 
vg_data-lv_data_rmeta_4: 0 8192 linear 
vg_data-lv_data_rmeta_3: 0 8192 linear 
vg_data-lv_data: 0 5788139520 raid raid6_zr 5 AAAAA 1929379840/1929379840 idle 0 8192 -
vg_pc_backups-lv_pc_backups: 0 5583462400 raid raid1 2 AA 5583462400/5583462400 idle 0 0 -
vg_data-lv_data_rmeta_2: 0 8192 linear 
vg_data-lv_data_rmeta_1: 0 8192 linear 
vg_slack-lv_slack: 0 3907403776 linear 
vg_pc_backups-lv_pc_backups_rmeta_1: 0 8192 linear 
vg_data-lv_data_rmeta_0: 0 8192 linear 
vg_pc_backups-lv_pc_backups_rmeta_0: 0 8192 linear 
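
One thing I notice when cross-checking: each rimage device is 1929388032 sectors (8192 + 1929379840), but the active lv_data table maps only 1929379840 per leg; the trailing "8192" in the raid status line is, if I read the dm-raid status fields right, a per-leg data offset:

  echo $(( 3 * 1929379840 ))   # 5788139520, the live lv_data table length
  echo $(( 3 * 1929388032 ))   # 5788164096, the length the failed table asked for

So the failed snapshot-time table appears to count the 8192-sector offset as usable space, while the raid target does not.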

* Re: [linux-lvm] Snapshot of RAID 6 Volume
From: Adam Puleo @ 2019-08-11 18:24 UTC
  To: LVM general discussion and development

Hello,

Is it possible to create a snapshot of a RAID 6 logical volume?

I have tried the following command: lvcreate --extents 50%FREE --snapshot --name lv_data_snapshot /dev/mapper/vg_data-lv_data

I receive the same errors as noted below.

How is it running out of space with the 50%FREE parameter?
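
Half of the VG's free space should be roughly:

  echo $(( 14529 / 2 ))         # 7264 extents
  echo $(( 7264 * 4 / 1024 ))   # about 28 GiB at a 4 MiB PE size

which the VG can clearly hold, so I suspect the real failure is the table-size mismatch in the kernel log rather than a genuine lack of space.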

Thanks,
-Adam

[root@nas ~]# lvm version
  LVM version:     2.02.180(2)-RHEL7 (2018-07-20)
  Library version: 1.02.149-RHEL7 (2018-07-20)
  Driver version:  4.40.0
  Configuration:   ./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --program-prefix= --disable-dependency-tracking --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --with-default-dm-run-dir=/run --with-default-run-dir=/run/lvm --with-default-pid-dir=/run --with-default-locking-dir=/run/lock/lvm --with-usrlibdir=/usr/lib64 --enable-lvm1_fallback --enable-fsadm --with-pool=internal --enable-write_install --with-user= --with-group= --with-device-uid=0 --with-device-gid=6 --with-device-mode=0660 --enable-pkgconfig --enable-applib --enable-cmdlib --enable-dmeventd --enable-blkid_wiping --enable-python2-bindings --with-cluster=internal --with-clvmd=corosync --enable-cmirrord --with-udevdir=/usr/lib/udev/rules.d --enable-udev_sync --with-thin=internal --enable-lvmetad --with-cache=internal --enable-lvmpolld --enable-lvmlockd-dlm --enable-lvmlockd-sanlock --enable-dmfilemapd


On Jul 4, 2019, at 11:52 AM, Adam Puleo <adam.puleo@icloud.com> wrote:

[root@nas scripts]# lvcreate --size 1G --snapshot --name lv_data_snapshot /dev/vg_data/lv_data 
 device-mapper: resume ioctl on  (253:24) failed: Invalid argument
 Unable to resume vg_data-lv_data_snapshot (253:24).
 device-mapper: resume ioctl on  (253:5) failed: Invalid argument
 Unable to resume vg_data-lv_data (253:5).
 Problem reactivating logical volume vg_data/lv_data.
 Aborting. Manual intervention required.
 Releasing activation in critical section.
 libdevmapper exiting with 1 device(s) still suspended.

/var/log/messages:
Jul  4 10:54:56 nas kernel: md/raid:mdX: not clean -- starting background reconstruction
Jul  4 10:54:56 nas kernel: md/raid:mdX: device dm-10 operational as raid disk 0
Jul  4 10:54:56 nas kernel: md/raid:mdX: device dm-12 operational as raid disk 1
Jul  4 10:54:56 nas kernel: md/raid:mdX: device dm-16 operational as raid disk 2
Jul  4 10:54:56 nas kernel: md/raid:mdX: device dm-18 operational as raid disk 3
Jul  4 10:54:56 nas kernel: md/raid:mdX: device dm-22 operational as raid disk 4
Jul  4 10:54:56 nas kernel: md/raid:mdX: raid level 6 active with 5 out of 5 devices, algorithm 8
Jul  4 10:54:56 nas lvm[16521]: No longer monitoring RAID device vg_data-lv_data for events.
Jul  4 10:54:57 nas kernel: device-mapper: table: 253:24: dm-20 too small for target: start=0, len=5788164096, dev_size=5788139520
Jul  4 10:54:57 nas kernel: device-mapper: table: 253:5: dm-20 too small for target: start=0, len=5788164096, dev_size=5788139520
Jul  4 10:54:57 nas kernel: md: resync of RAID array mdX
Jul  4 10:54:57 nas kernel: md: mdX: resync done.

