* [linux-lvm] Question about vgreduce on raid lvm cache
@ 2019-11-29 8:35 Heming Zhao
2020-01-03 3:10 ` Heming Zhao
From: Heming Zhao @ 2019-11-29 8:35 UTC (permalink / raw)
To: linux-lvm
Hello list,
I ran into an LVM cache issue recently.
The machine had an LVM VG configured with:
dev1 + dev2 -> data_lv with raid 1 mirror
dev3 + dev4 -> data_cache_lv with raid 1 mirror and type cache
These were assembled to a cached volume with:
"lvconvert --type cache --cachemode writeback --cachepool data/data_cache_lv data/data_lv"
The device dev4 then died and was removed from the system.
When attempting to remove the device using "vgreduce --removemissing", LVM removed both data_cache_lv and data_lv from the data VG.
This is unexpected behaviour, as dev1, dev2, dev3 were all still present.
The expected behaviour should have been:
```
vgreduce --removemissing
lvconvert -m0 system/lv
```
When the LV is a non-cached type and you perform vgreduce, the LVs are NOT removed.
The commands to reproduce:
```
pvcreate /dev/mapper/dev1 /dev/mapper/dev2 /dev/mapper/dev3 /dev/mapper/dev4
vgcreate VG /dev/mapper/dev1 /dev/mapper/dev2 /dev/mapper/dev3 /dev/mapper/dev4
lvcreate --type raid1 -m 1 -L150 -n data_lv VG /dev/mapper/dev1 /dev/mapper/dev2
lvcreate --type raid1 -m 1 -L150 -n data_cache_lv VG /dev/mapper/dev3 /dev/mapper/dev4
lvconvert --type cache --cachemode writeback --cachepool VG/data_cache_lv VG/data_lv
dmsetup remove -f dev4
vgreduce --removemissing --force VG
```
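To see which LVs survive the vgreduce, the LV layout can be inspected before and after the failure. This is a minimal sketch, not from the thread; `lvs -a` also lists the hidden sub-LVs (the _corig, _rimage and _rmeta volumes), so a missing mirror leg becomes visible:

```shell
# List all LVs, including hidden sub-LVs (_corig, _rimage, _rmeta), with
# their segment type and the devices backing each segment.
# Requires root and the lvm2 tools; the guard makes this a no-op on
# machines without lvm2 installed.
if command -v lvs >/dev/null 2>&1; then
    lvs -a -o lv_name,segtype,devices VG
fi
```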
Thanks.
* Re: [linux-lvm] Question about vgreduce on raid lvm cache
2019-11-29 8:35 [linux-lvm] Question about vgreduce on raid lvm cache Heming Zhao
@ 2020-01-03 3:10 ` Heming Zhao
From: Heming Zhao @ 2020-01-03 3:10 UTC (permalink / raw)
To: linux-lvm
Hello list,
I found the solution to the question below: when running "vgreduce --removemissing", the "--mirrorsonly" option is mandatory.
```
vgreduce --removemissing --mirrorsonly --force VG
lvconvert -m0 VG/data_lv
```
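After redundancy has been dropped with "-m0", it can presumably be restored once a replacement PV is available. A hedged sketch, not from the thread (the device name /dev/mapper/dev5 is hypothetical, and depending on the lvm2 version the conversion may need to target the cache origin instead):

```shell
# Assumed recovery sketch: add a replacement PV to the VG, then re-add
# a raid1 mirror leg to data_lv. Guarded so it is a no-op without lvm2.
if command -v vgextend >/dev/null 2>&1; then
    pvcreate /dev/mapper/dev5               # hypothetical replacement device
    vgextend VG /dev/mapper/dev5            # make it available to the VG
    lvconvert --type raid1 -m1 VG/data_lv   # restore the second mirror image
fi
```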
But this raises another issue: the default error messages are not clear.
When running vgreduce on mirrored & cached LVs, a user normally runs the command below and gets the following failure messages:
```
# vgreduce --removemissing VG
WARNING: Couldn't find device with uuid UQZkRE-P8X7-Pkgv-mE5b-q6CY-2DmX-ew2BCn.
WARNING: VG VG is missing PV UQZkRE-P8X7-Pkgv-mE5b-q6CY-2DmX-ew2BCn.
WARNING: Couldn't find all devices for LV VG/data_lv_corig_rimage_1 while checking used and assumed devices.
WARNING: Couldn't find all devices for LV VG/data_lv_corig_rmeta_1 while checking used and assumed devices.
WARNING: Couldn't find device with uuid UQZkRE-P8X7-Pkgv-mE5b-q6CY-2DmX-ew2BCn.
WARNING: Partial LV data_lv needs to be repaired or removed.
WARNING: Partial LV data_lv_corig_rimage_1 needs to be repaired or removed.
WARNING: Partial LV data_lv_corig_rmeta_1 needs to be repaired or removed.
WARNING: Partial LV data_lv_corig needs to be repaired or removed.
There are still partial LVs in VG VG.
To remove them unconditionally use: vgreduce --removemissing --force. <===== this command is very dangerous for this mail's scenario
WARNING: Proceeding to remove empty missing PVs.
WARNING: Couldn't find device with uuid UQZkRE-P8X7-Pkgv-mE5b-q6CY-2DmX-ew2BCn.
```
Currently the error messages are not clear. They can easily lead the user to run "--removemissing --force", which destroys the LVs.
So I filed a patch for this small issue in the next mail.
Thank you.