linux-lvm.redhat.com archive mirror
* [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
@ 2018-10-08 10:23 Gang He
  2018-10-08 15:00 ` David Teigland
  0 siblings, 1 reply; 16+ messages in thread
From: Gang He @ 2018-10-08 10:23 UTC (permalink / raw)
  To: linux-lvm

Hello List

The system uses lvm based on raid1. 
It seems that the PV of the raid1 is also found on the individual disks that make up the raid1 device:
[  147.121725] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sda2 was already found on /dev/md1.
[  147.123427] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sdb2 was already found on /dev/md1.
[  147.369863] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
[  147.370597] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
[  147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.

Is this a regression bug? The user did not encounter this problem with lvm2 v2.02.177.


Thanks
Gang


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-08 10:23 [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180 Gang He
@ 2018-10-08 15:00 ` David Teigland
  2018-10-15  5:39   ` Gang He
  2018-10-17 17:11   ` Sven Eschenberg
  0 siblings, 2 replies; 16+ messages in thread
From: David Teigland @ 2018-10-08 15:00 UTC (permalink / raw)
  To: Gang He; +Cc: linux-lvm

On Mon, Oct 08, 2018 at 04:23:27AM -0600, Gang He wrote:
> Hello List
> 
> The system uses lvm based on raid1. 
> It seems that the PV of the raid1 is also found on the individual disks that make up the raid1 device:
> [  147.121725] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sda2 was already found on /dev/md1.
> [  147.123427] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sdb2 was already found on /dev/md1.
> [  147.369863] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
> [  147.370597] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
> [  147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.

Do these warnings only appear from "dracut-initqueue"?  Can you run and
send 'vgs -vvvv' from the command line?  If they don't appear from the
command line, then is "dracut-initqueue" using a different lvm.conf?
lvm.conf settings can affect this (filter, md_component_detection,
external_device_info_source).
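
For example, something along these lines should show what the command-line
tools actually see, and whether the initramfs carries its own copy of
lvm.conf (lsinitrd is dracut's tool; the paths below are the usual defaults
and may differ on your distribution):

  lvmconfig devices/filter devices/md_component_detection devices/external_device_info_source
  lsinitrd | grep lvm.conf
  lsinitrd -f etc/lvm/lvm.conf | grep -v '^[[:space:]]*#'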

> Is this a regression bug? The user did not encounter this problem with lvm2 v2.02.177.

It could be, since the new scanning changed how md detection works.  The
md superblock version affects how lvm detects this.  md superblock 1.0 (at
the end of the device) is not detected as easily as newer md versions
(1.1, 1.2) where the superblock is at the beginning.  Do you know which
this is?
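
For example (device names taken from the log above, adjust as needed;
--detail reads the assembled array, --examine reads a member's superblock
directly):

  mdadm --detail /dev/md1 | grep Version
  mdadm --examine /dev/sda2 | grep Version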


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-08 15:00 ` David Teigland
@ 2018-10-15  5:39   ` Gang He
  2018-10-15 15:26     ` David Teigland
  2018-10-17 17:11   ` Sven Eschenberg
  1 sibling, 1 reply; 16+ messages in thread
From: Gang He @ 2018-10-15  5:39 UTC (permalink / raw)
  To: teigland; +Cc: linux-lvm

Hello David,

>>> On 2018/10/8 at 23:00, in message <20181008150016.GB21471@redhat.com>, David Teigland <teigland@redhat.com> wrote:
> On Mon, Oct 08, 2018 at 04:23:27AM -0600, Gang He wrote:
>> Hello List
>> 
>> The system uses lvm based on raid1. 
>> It seems that the PV of the raid1 is also found on the individual disks that make up the raid1 device:
>> [  147.121725] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sda2 was already found on /dev/md1.
>> [  147.123427] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sdb2 was already found on /dev/md1.
>> [  147.369863] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
>> [  147.370597] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
>> [  147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
> 
> Do these warnings only appear from "dracut-initqueue"?  Can you run and
> send 'vgs -vvvv' from the command line?  If they don't appear from the
> command line, then is "dracut-initqueue" using a different lvm.conf?
> lvm.conf settings can affect this (filter, md_component_detection,
> external_device_info_source).

mdadm --detail --scan -vvv
/dev/md/linux:0:
           Version : 1.0
     Creation Time : Sun Jul 22 22:49:21 2012
        Raid Level : raid1
        Array Size : 513012 (500.99 MiB 525.32 MB)
     Used Dev Size : 513012 (500.99 MiB 525.32 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon Jul 16 00:29:19 2018
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : linux:0
              UUID : 160998c8:7e21bcff:9cea0bbc:46454716
            Events : 469

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
/dev/md/linux:1:
           Version : 1.0
     Creation Time : Sun Jul 22 22:49:22 2012
        Raid Level : raid1
        Array Size : 1953000312 (1862.53 GiB 1999.87 GB)
     Used Dev Size : 1953000312 (1862.53 GiB 1999.87 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Oct 12 20:16:25 2018
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : linux:1
              UUID : 17426969:03d7bfa7:5be33b0b:8171417a
            Events : 326248

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2

Thanks
Gang

> 
>> Is this a regression bug? The user did not encounter this problem with lvm2 v2.02.177.
> 
> It could be, since the new scanning changed how md detection works.  The
> md superblock version affects how lvm detects this.  md superblock 1.0 (at
> the end of the device) is not detected as easily as newer md versions
> (1.1, 1.2) where the superblock is at the beginning.  Do you know which
> this is?


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-15  5:39   ` Gang He
@ 2018-10-15 15:26     ` David Teigland
  2018-10-17  5:16       ` Gang He
  0 siblings, 1 reply; 16+ messages in thread
From: David Teigland @ 2018-10-15 15:26 UTC (permalink / raw)
  To: Gang He; +Cc: linux-lvm

On Sun, Oct 14, 2018 at 11:39:20PM -0600, Gang He wrote:
> >> [  147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
> > 
> > Do these warnings only appear from "dracut-initqueue"?  Can you run and
> > send 'vgs -vvvv' from the command line?  If they don't appear from the
> > command line, then is "dracut-initqueue" using a different lvm.conf?
> > lvm.conf settings can affect this (filter, md_component_detection,
> > external_device_info_source).
> 
> mdadm --detail --scan -vvv
> /dev/md/linux:0:
>            Version : 1.0

It has the old superblock version 1.0 located at the end of the device, so
lvm will not always see it.  (lvm will look for it when it's writing to
new devices to ensure it doesn't clobber an md component.)

(Also keep in mind that this md superblock is no longer recommended:
raid.wiki.kernel.org/index.php/RAID_superblock_formats)

There are various ways to make lvm handle this:

- allow_changes_with_duplicate_pvs=1
- external_device_info_source="udev"
- reject sda2, sdb2 in lvm filter
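
Roughly, in lvm.conf terms (only one of these should be needed; check
lvm.conf(5) or 'lvmconfig --type default' on your build for the exact names
and section placement):

  devices {
      # 1) tolerate duplicate PVs instead of refusing activation
      allow_changes_with_duplicate_pvs = 1
      # 2) or let udev report md components instead of lvm's own scanning
      external_device_info_source = "udev"
      # 3) or reject the md member partitions outright
      filter = [ "r|^/dev/sda2$|", "r|^/dev/sdb2$|", "a|.*|" ]
  }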

> > It could be, since the new scanning changed how md detection works.  The
> > md superblock version affects how lvm detects this.  md superblock 1.0 (at
> > the end of the device) is not detected as easily as newer md versions
> > (1.1, 1.2) where the superblock is at the beginning.  Do you know which
> > this is?


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-15 15:26     ` David Teigland
@ 2018-10-17  5:16       ` Gang He
  2018-10-17 14:10         ` David Teigland
  0 siblings, 1 reply; 16+ messages in thread
From: Gang He @ 2018-10-17  5:16 UTC (permalink / raw)
  To: teigland; +Cc: linux-lvm

Hello David,

>>> On 2018/10/15 at 23:26, in message <20181015152648.GB29274@redhat.com>, David Teigland <teigland@redhat.com> wrote:
> On Sun, Oct 14, 2018 at 11:39:20PM -0600, Gang He wrote:
>> >> [  147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
>> > 
>> > Do these warnings only appear from "dracut-initqueue"?  Can you run and
>> > send 'vgs -vvvv' from the command line?  If they don't appear from the
>> > command line, then is "dracut-initqueue" using a different lvm.conf?
>> > lvm.conf settings can affect this (filter, md_component_detection,
>> > external_device_info_source).
>> 
>> mdadm --detail --scan -vvv
>> /dev/md/linux:0:
>>            Version : 1.0
> 
> It has the old superblock version 1.0 located at the end of the device, so
> lvm will not always see it.  (lvm will look for it when it's writing to
> new devices to ensure it doesn't clobber an md component.)
> 
> (Also keep in mind that this md superblock is no longer recommended:
> raid.wiki.kernel.org/index.php/RAID_superblock_formats)
> 
> There are various ways to make lvm handle this:
> 
> - allow_changes_with_duplicate_pvs=1
> - external_device_info_source="udev"
> - reject sda2, sdb2 in lvm filter
> 
Here is some feedback from the user's environment (I cannot reproduce this problem in my local
environment).

I tested the options in lvm.conf one by one.

The good news - enabling
- external_device_info_source="udev"
- reject sda2, sdb2 in lvm filter

both work! The system brings up the proper lvm raid1 device again.

The first option (allow_changes_with_duplicate_pvs=1) does not work.
systemctl status lvm2-pvscan@9:126 results in:

● lvm2-pvscan@9:126.service - LVM2 PV scan on device 9:126
   Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan@.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2018-10-16 22:53:57 CEST; 3min 4s ago
     Docs: man:pvscan(8)
  Process: 849 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 (code=exited, status=5)
 Main PID: 849 (code=exited, status=5)

Oct 16 22:53:57 linux-dnetctw lvm[849]:   WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
Oct 16 22:53:57 linux-dnetctw lvm[849]:   WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
Oct 16 22:53:57 linux-dnetctw lvm[849]:   WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
Oct 16 22:53:57 linux-dnetctw lvm[849]:   device-mapper: reload ioctl on  (254:0) failed: Device or resource busy
Oct 16 22:53:57 linux-dnetctw lvm[849]:   device-mapper: reload ioctl on  (254:0) failed: Device or resource busy
Oct 16 22:53:57 linux-dnetctw lvm[849]:   0 logical volume(s) in volume group "vghome" now active
Oct 16 22:53:57 linux-dnetctw lvm[849]:   vghome: autoactivation failed.
Oct 16 22:53:57 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Main process exited, code=exited, status=5/NOTINSTALLED
Oct 16 22:53:57 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Failed with result 'exit-code'.
Oct 16 22:53:57 linux-dnetctw systemd[1]: Failed to start LVM2 PV scan on device 9:126.

pvs shows:
  /dev/sde: open failed: No medium found
  WARNING: found device with duplicate /dev/sdc2
  WARNING: found device with duplicate /dev/md126
  WARNING: Disabling lvmetad cache which does not support duplicate PVs.
  WARNING: Scan found duplicate PVs.
  WARNING: Not using lvmetad because cache update failed.
  /dev/sde: open failed: No medium found
  WARNING: Not using device /dev/sdc2 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
  WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
  WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
  WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
  PV         VG     Fmt  Attr PSize PFree  
  /dev/sdb2  vghome lvm2 a--  1.82t 202.52g

My questions are as follows:
1) Why did solution 1 not work? That method looks like the closest fit for fixing this problem.
2) Could we back-port some code from v2.02.177 to keep the old behavior, so that these items
do not have to be modified manually?
   Or do we have to accept this behavior from v2.02.180 (maybe 178?) as by-design?

Thanks
Gang

>> > It could be, since the new scanning changed how md detection works.  The
>> > md superblock version effects how lvm detects this.  md superblock 1.0 (at
>> > the end of the device) is not detected as easily as newer md versions
>> > (1.1, 1.2) where the superblock is at the beginning.  Do you know which
>> > this is?


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-17  5:16       ` Gang He
@ 2018-10-17 14:10         ` David Teigland
  2018-10-17 18:42           ` David Teigland
  0 siblings, 1 reply; 16+ messages in thread
From: David Teigland @ 2018-10-17 14:10 UTC (permalink / raw)
  To: Gang He; +Cc: linux-lvm

On Tue, Oct 16, 2018 at 11:16:28PM -0600, Gang He wrote:
> > - allow_changes_with_duplicate_pvs=1
> > - external_device_info_source="udev"
> > - reject sda2, sdb2 in lvm filter

> 1) Why did solution 1 not work? That method looks like the closest fit for fixing this problem.

Check if the version you are using has this commit:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=09fcc8eaa8eb7fa4fcd7c6611bfbfb83f726ae38

If so, then I'd be interested to see the -vvvv output from that pvs command.
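
One way to check, assuming a checkout of the lvm2 git tree (note that a
cherry-picked backport would carry a different hash, in which case search the
branch log by commit subject instead):

  git -C lvm2 tag --contains 09fcc8eaa8eb7fa4fcd7c6611bfbfb83f726ae38

For a distribution build it is usually quicker to look at the patches applied
in the source package.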

> 2) Could we back-port some code from v2.02.177 to keep the old behavior, so that
> these items do not have to be modified manually?  Or do we have to accept this
> behavior from v2.02.180 (maybe 178?) as by-design?

It's not clear to me exactly which code you're looking at backporting to
where.


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-08 15:00 ` David Teigland
  2018-10-15  5:39   ` Gang He
@ 2018-10-17 17:11   ` Sven Eschenberg
  1 sibling, 0 replies; 16+ messages in thread
From: Sven Eschenberg @ 2018-10-17 17:11 UTC (permalink / raw)
  To: linux-lvm

Hi List,

Unfortunately I answered directly to Gang He earlier.

I'm seeing the exact same faulty behavior with 2.02.181:

   WARNING: Not using device /dev/md126 for PV 4ZZuWE-VeJT-O3O8-rO3A-IQ6Y-M6hB-C3jJXo.
   WARNING: PV 4ZZuWE-VeJT-O3O8-rO3A-IQ6Y-M6hB-C3jJXo prefers device /dev/sda because of previous preference.
   WARNING: Device /dev/sda has size of 62533296 sectors which is smaller than corresponding PV size of 125065216 sectors. Was device resized?

So lvm decides to pull up the PV based on the component device metadata,
even though the raid is already up and running. Things worked as usual
with a .16* version.

Additionally I see:
   /dev/sdj: open failed: No medium found
   /dev/sdk: open failed: No medium found
   /dev/sdl: open failed: No medium found
   /dev/sdm: open failed: No medium found

In what crazy scenario would a removable medium be part of a VG, and
why in god's name would one even consider including removable drives in
the scan by default?

For the time being I added a filter, as this is the only workaround. 
Funny enough, even though filtered, I am still getting the no medium 
messages - this makes absolutely no sense at all.

Regards

-Sven


On 08.10.2018 at 17:00, David Teigland wrote:
> On Mon, Oct 08, 2018 at 04:23:27AM -0600, Gang He wrote:
>> Hello List
>>
>> The system uses lvm based on raid1.
>> It seems that the PV of the raid1 is also found on the individual disks that make up the raid1 device:
>> [  147.121725] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sda2 was already found on /dev/md1.
>> [  147.123427] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sdb2 was already found on /dev/md1.
>> [  147.369863] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
>> [  147.370597] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
>> [  147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
> 
> Do these warnings only appear from "dracut-initqueue"?  Can you run and
> send 'vgs -vvvv' from the command line?  If they don't appear from the
> command line, then is "dracut-initqueue" using a different lvm.conf?
> lvm.conf settings can affect this (filter, md_component_detection,
> external_device_info_source).
> 
>> Is this a regression bug? The user did not encounter this problem with lvm2 v2.02.177.
> 
> It could be, since the new scanning changed how md detection works.  The
> md superblock version affects how lvm detects this.  md superblock 1.0 (at
> the end of the device) is not detected as easily as newer md versions
> (1.1, 1.2) where the superblock is at the beginning.  Do you know which
> this is?
> 


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-17 14:10         ` David Teigland
@ 2018-10-17 18:42           ` David Teigland
  2018-10-18  8:51             ` Gang He
  0 siblings, 1 reply; 16+ messages in thread
From: David Teigland @ 2018-10-17 18:42 UTC (permalink / raw)
  To: Gang He, Sven Eschenberg; +Cc: linux-lvm

On Wed, Oct 17, 2018 at 09:10:25AM -0500, David Teigland wrote:
> Check if the version you are using has this commit:
> https://sourceware.org/git/?p=lvm2.git;a=commit;h=09fcc8eaa8eb7fa4fcd7c6611bfbfb83f726ae38

I see that this commit is missing from the stable branch:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=3fd75d1bcd714b02fb2b843d1928b2a875402f37

I'll backport that one.

Dave


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-17 18:42           ` David Teigland
@ 2018-10-18  8:51             ` Gang He
  2018-10-18 16:01               ` David Teigland
  0 siblings, 1 reply; 16+ messages in thread
From: Gang He @ 2018-10-18  8:51 UTC (permalink / raw)
  To: Sven Eschenberg, teigland; +Cc: linux-lvm

Hello David,

Thanks for your help.
If I include this patch in lvm2 v2.02.180,
can LVM2 activate LVs on top of RAID1 automatically, or do we still have to set "allow_changes_with_duplicate_pvs=1" in lvm.conf?


Thanks
Gang 

>>> On 2018/10/18 at 2:42, in message <20181017184204.GC14214@redhat.com>, David Teigland <teigland@redhat.com> wrote:
> On Wed, Oct 17, 2018 at 09:10:25AM -0500, David Teigland wrote:
>> Check if the version you are using has this commit:
>> https://sourceware.org/git/?p=lvm2.git;a=commit;h=09fcc8eaa8eb7fa4fcd7c6611bfbfb83f726ae38
> 
> I see that this commit is missing from the stable branch:
> https://sourceware.org/git/?p=lvm2.git;a=commit;h=3fd75d1bcd714b02fb2b843d1928b2a875402f37
> 
> I'll backport that one.
> 
> Dave


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-18  8:51             ` Gang He
@ 2018-10-18 16:01               ` David Teigland
  2018-10-18 17:59                 ` David Teigland
  0 siblings, 1 reply; 16+ messages in thread
From: David Teigland @ 2018-10-18 16:01 UTC (permalink / raw)
  To: Gang He; +Cc: Sven Eschenberg, linux-lvm

On Thu, Oct 18, 2018 at 02:51:05AM -0600, Gang He wrote:
> If I include this patch in lvm2 v2.02.180,
> can LVM2 activate LVs on top of RAID1 automatically, or do we still have to set "allow_changes_with_duplicate_pvs=1" in lvm.conf?

I didn't need any config changes when testing this myself, but there may
be other variables I've not encountered.


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-18 16:01               ` David Teigland
@ 2018-10-18 17:59                 ` David Teigland
  2018-10-19  5:42                   ` Gang He
  2018-10-23  2:19                   ` Gang He
  0 siblings, 2 replies; 16+ messages in thread
From: David Teigland @ 2018-10-18 17:59 UTC (permalink / raw)
  To: Gang He; +Cc: Sven Eschenberg, linux-lvm

On Thu, Oct 18, 2018 at 11:01:59AM -0500, David Teigland wrote:
> On Thu, Oct 18, 2018 at 02:51:05AM -0600, Gang He wrote:
> > If I include this patch in lvm2 v2.02.180,
> > can LVM2 activate LVs on top of RAID1 automatically, or do we still have to set "allow_changes_with_duplicate_pvs=1" in lvm.conf?
> 
> I didn't need any config changes when testing this myself, but there may
> be other variables I've not encountered.

See these three commits:
d1b652143abc tests: add new test for lvm on md devices
e7bb50880901 scan: enable full md filter when md 1.0 devices are present
de2863739f2e scan: use full md filter when md 1.0 devices are present

at https://sourceware.org/git/?p=lvm2.git;a=shortlog;h=refs/heads/2018-06-01-stable

(I was wrong earlier; allow_changes_with_duplicate_pvs is not correct in
this case.)


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-18 17:59                 ` David Teigland
@ 2018-10-19  5:42                   ` Gang He
  2018-10-23  2:19                   ` Gang He
  1 sibling, 0 replies; 16+ messages in thread
From: Gang He @ 2018-10-19  5:42 UTC (permalink / raw)
  To: teigland; +Cc: Sven Eschenberg, linux-lvm

Hello David,

Thanks for your attention.
I will let the user try these patches.

Thanks
Gang

>>> On 2018/10/19 at 1:59, in message <20181018175923.GC28661@redhat.com>, David Teigland <teigland@redhat.com> wrote:
> On Thu, Oct 18, 2018 at 11:01:59AM -0500, David Teigland wrote:
>> On Thu, Oct 18, 2018 at 02:51:05AM -0600, Gang He wrote:
>> > If I include this patch in lvm2 v2.02.180,
>> > can LVM2 activate LVs on top of RAID1 automatically, or do we still have to set "allow_changes_with_duplicate_pvs=1" in lvm.conf?
>> 
>> I didn't need any config changes when testing this myself, but there may
>> be other variables I've not encountered.
> 
> See these three commits:
> d1b652143abc tests: add new test for lvm on md devices
> e7bb50880901 scan: enable full md filter when md 1.0 devices are present
> de2863739f2e scan: use full md filter when md 1.0 devices are present
> 
> at https://sourceware.org/git/?p=lvm2.git;a=shortlog;h=refs/heads/2018-06-01-stable
> 
> (I was wrong earlier; allow_changes_with_duplicate_pvs is not correct in
> this case.)


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-18 17:59                 ` David Teigland
  2018-10-19  5:42                   ` Gang He
@ 2018-10-23  2:19                   ` Gang He
  2018-10-23 15:04                     ` David Teigland
  1 sibling, 1 reply; 16+ messages in thread
From: Gang He @ 2018-10-23  2:19 UTC (permalink / raw)
  To: teigland; +Cc: Sven Eschenberg, linux-lvm

Hello David,

The user installed the lvm2 (v2.02.180) rpms with the three patches below, but it looks like there are still some problems on the user's machine.
The feedback from the user is as follows:

In the first round I installed lvm2-2.02.180-0.x86_64.rpm, liblvm2cmd2_02-2.02.180-0.x86_64.rpm and liblvm2app2_2-2.02.180-0.x86_64.rpm - but no luck - after reboot, still the same problem, ending up in the emergency console.
In the next round I additionally installed libdevmapper-event1_03-1.02.149-0.x86_64.rpm, libdevmapper1_03-1.02.149-0.x86_64.rpm and device-mapper-1.02.149-0.x86_64.rpm - again, ending up in the emergency console.
systemctl status lvm2-pvscan@9:126 output:
lvm2-pvscan@9:126.service - LVM2 PV scan on device 9:126
   Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan@.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Mon 2018-10-22 07:34:56 CEST; 5min ago
     Docs: man:pvscan(8)
  Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 (code=exited, status=5)
 Main PID: 815 (code=exited, status=5)

Oct 22 07:34:55 linux-dnetctw lvm[815]:   WARNING: Autoactivation reading from disk instead of lvmetad.
Oct 22 07:34:56 linux-dnetctw lvm[815]:   /dev/sde: open failed: No medium found
Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
Oct 22 07:34:56 linux-dnetctw lvm[815]:   Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
Oct 22 07:34:56 linux-dnetctw lvm[815]:   0 logical volume(s) in volume group "vghome" now active
Oct 22 07:34:56 linux-dnetctw lvm[815]:   vghome: autoactivation failed.
Oct 22 07:34:56 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Main process exited, code=exited, status=5/NOTINSTALLED
Oct 22 07:34:56 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Failed with result 'exit-code'.
Oct 22 07:34:56 linux-dnetctw systemd[1]: Failed to start LVM2 PV scan on device 9:126.

What should we do next for this case?
Or do we have to accept the situation and modify the related configuration manually as a workaround?

Thanks
Gang


>>> On 2018/10/19 at 1:59, in message <20181018175923.GC28661@redhat.com>, David Teigland <teigland@redhat.com> wrote:
> On Thu, Oct 18, 2018 at 11:01:59AM -0500, David Teigland wrote:
>> On Thu, Oct 18, 2018 at 02:51:05AM -0600, Gang He wrote:
>> > If I include this patch in lvm2 v2.02.180,
>> > can LVM2 activate LVs on top of RAID1 automatically, or do we still have to set "allow_changes_with_duplicate_pvs=1" in lvm.conf?
>> 
>> I didn't need any config changes when testing this myself, but there may
>> be other variables I've not encountered.
> 
> See these three commits:
> d1b652143abc tests: add new test for lvm on md devices
> e7bb50880901 scan: enable full md filter when md 1.0 devices are present
> de2863739f2e scan: use full md filter when md 1.0 devices are present
> 
> at https://sourceware.org/git/?p=lvm2.git;a=shortlog;h=refs/heads/2018-06-01-stable
> 
> (I was wrong earlier; allow_changes_with_duplicate_pvs is not correct in
> this case.)


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-23  2:19                   ` Gang He
@ 2018-10-23 15:04                     ` David Teigland
       [not found]                       ` <59EBFA5B020000E767ECE9F9@prv1-mh.provo.novell.com>
  0 siblings, 1 reply; 16+ messages in thread
From: David Teigland @ 2018-10-23 15:04 UTC (permalink / raw)
  To: Gang He; +Cc: Sven Eschenberg, linux-lvm

On Mon, Oct 22, 2018 at 08:19:57PM -0600, Gang He wrote:
>   Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 (code=exited, status=5)
> 
> Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
> Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
> Oct 22 07:34:56 linux-dnetctw lvm[815]:   Cannot activate LVs in VG vghome while PVs appear on duplicate devices.

I'd try disabling lvmetad, I've not been testing these with lvmetad on.
We may need to make pvscan read both the start and end of every disk to
handle these md 1.0 components, and I'm not sure how to do that yet
without penalizing every pvscan.
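
A minimal sketch of what disabling it involves, assuming a systemd-based
setup with the usual unit names (and the initramfs should be rebuilt so that
early boot sees the same configuration):

  # in /etc/lvm/lvm.conf:
  #   global {
  #       use_lvmetad = 0
  #   }
  systemctl disable --now lvm2-lvmetad.socket lvm2-lvmetad.service
  dracut -f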

Dave


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
       [not found]                       ` <59EBFA5B020000E767ECE9F9@prv1-mh.provo.novell.com>
@ 2018-10-24  2:23                         ` Gang He
  2018-10-24 14:47                           ` David Teigland
  0 siblings, 1 reply; 16+ messages in thread
From: Gang He @ 2018-10-24  2:23 UTC (permalink / raw)
  To: teigland; +Cc: Sven Eschenberg, linux-lvm

Hello David,

I am sorry, I do not quite understand your reply.

>>> On 2018/10/23 at 23:04, in message <20181023150436.GB8413@redhat.com>, David Teigland <teigland@redhat.com> wrote:
> On Mon, Oct 22, 2018 at 08:19:57PM -0600, Gang He wrote:
>>   Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 (code=exited, status=5)
>> 
>> Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
>> Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
>> Oct 22 07:34:56 linux-dnetctw lvm[815]:   Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
> 
> I'd try disabling lvmetad, I've not been testing these with lvmetad on.
Do you mean I should let the user disable lvmetad?

> We may need to make pvscan read both the start and end of every disk to
> handle these md 1.0 components, and I'm not sure how to do that yet
> without penalizing every pvscan.
What can we do for now? It looks like more code needs to be added to implement this logic.

Thanks
Gang

> 
> Dave


* Re: [linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
  2018-10-24  2:23                         ` Gang He
@ 2018-10-24 14:47                           ` David Teigland
  0 siblings, 0 replies; 16+ messages in thread
From: David Teigland @ 2018-10-24 14:47 UTC (permalink / raw)
  To: Gang He; +Cc: Sven Eschenberg, linux-lvm

On Tue, Oct 23, 2018 at 08:23:06PM -0600, Gang He wrote:
> Teigland <teigland@redhat.com> wrote:
> > On Mon, Oct 22, 2018 at 08:19:57PM -0600, Gang He wrote:
> >>   Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 (code=exited, status=5)
> >> 
> >> Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
> >> Oct 22 07:34:56 linux-dnetctw lvm[815]:   WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
> >> Oct 22 07:34:56 linux-dnetctw lvm[815]:   Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
> > 
> > I'd try disabling lvmetad, I've not been testing these with lvmetad on.
> Do you mean I should let the user disable lvmetad?

yes

> > We may need to make pvscan read both the start and end of every disk to
> > handle these md 1.0 components, and I'm not sure how to do that yet
> > without penalizing every pvscan.
> What can we do for now? It looks like more code needs to be added to implement this logic.

Excluding component devices in global_filter is always the most direct way
of solving problems like this.  (I still hope to find a solution that
doesn't require that.)
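
For this particular layout that would be something like the following in
lvm.conf, using the device names from the logs in this thread (they may
differ between boots, so adjust accordingly). global_filter rather than
filter, so that it also applies to the pvscan/lvmetad path, and the
initramfs needs to be rebuilt afterwards so dracut uses the same file:

  devices {
      global_filter = [ "r|^/dev/sdb2$|", "r|^/dev/sdc2$|", "a|.*|" ]
  }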
