linux-lvm.redhat.com archive mirror
* [linux-lvm] Replace Drive in RAID6
@ 2021-11-03  5:56 Adam Puleo
  2021-11-03  6:41 ` Roberto Fastec
  2021-11-03 14:25 ` Andreas Schrägle
  0 siblings, 2 replies; 8+ messages in thread
From: Adam Puleo @ 2021-11-03  5:56 UTC (permalink / raw)
  To: linux-lvm

Hello,

One of my drives failed in my RAID6 and I’m trying to replace it without success.

I’m trying to rebuild the failed drive (/dev/sda): lvchange --rebuild /dev/sda vg_data

But I’m receiving the error: vg_data/lv_data must be active to perform this operation.

I have tried to activate the logical volume without success.

How do I go about activating the volume so that I can rebuild the failed drive?

Thanks,
-Adam

# lvs -a -o name,segtype,devices
  LV                       Type   Devices                                                                                            
  lv_data                  raid6  lv_data_rimage_0(0),lv_data_rimage_1(0),lv_data_rimage_2(0),lv_data_rimage_3(0),lv_data_rimage_4(0)
  [lv_data_rimage_0]       error                                                                                                     
  [lv_data_rimage_1]       linear /dev/sdc1(1)                                                                                       
  [lv_data_rimage_2]       linear /dev/sdb1(1)                                                                                       
  [lv_data_rimage_3]       linear /dev/sdf1(1)                                                                                       
  [lv_data_rimage_4]       linear /dev/sde1(2)                                                                                       
  [lv_data_rmeta_0]        error                                                                                                     
  [lv_data_rmeta_1]        linear /dev/sdc1(0)                                                                                       
  [lv_data_rmeta_2]        linear /dev/sdb1(0)                                                                                       
  [lv_data_rmeta_3]        linear /dev/sdf1(0)                                                                                       
  [lv_data_rmeta_4]        linear /dev/sde1(0)                                                                                       

# lvs -a
  LV                       VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_data                  vg_data       rwi---r--- 990.00g                                                    
  [lv_data_rimage_0]       vg_data       vwi-a-r-r- 330.00g                                                    
  [lv_data_rimage_1]       vg_data       Iwi-a-r-r- 330.00g                                                    
  [lv_data_rimage_2]       vg_data       Iwi-a-r-r- 330.00g                                                    
  [lv_data_rimage_3]       vg_data       Iwi-a-r-r- 330.00g                                                    
  [lv_data_rimage_4]       vg_data       Iwi-a-r-r- 330.00g                                                    
  [lv_data_rmeta_0]        vg_data       ewi-a-r-r-   4.00m                                                    
  [lv_data_rmeta_1]        vg_data       ewi-a-r-r-   4.00m                                                    
  [lv_data_rmeta_2]        vg_data       ewi-a-r-r-   4.00m                                                    
  [lv_data_rmeta_3]        vg_data       ewi-a-r-r-   4.00m                                                    
  [lv_data_rmeta_4]        vg_data       ewi-a-r-r-   4.00m                                                    



_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [linux-lvm] Replace Drive in RAID6
  2021-11-03  5:56 [linux-lvm] Replace Drive in RAID6 Adam Puleo
@ 2021-11-03  6:41 ` Roberto Fastec
  2021-11-03 14:25 ` Andreas Schrägle
  1 sibling, 0 replies; 8+ messages in thread
From: Roberto Fastec @ 2021-11-03  6:41 UTC (permalink / raw)
  To: LVM general discussion and development



If for some reason you can't succeed,
we at https://www.RecuperoDatiRAIDFAsTec.it
can recover it at very low flat rates.

BEWARE: if one drive has failed and ALL the drives are the same age,
nothing guarantees that the other drives are not worn out too.

The HINT is to clone all the drives, as we routinely do,

then work with the clones.

If you want to exchange information, the +39348******* number on the website has WhatsApp.

Kind regards
Roberto G.
Technical Manager
https://www.RecuperoDatiRAIDFAsTec.it

On 3 Nov 2021, at 07:06, Adam Puleo <adam.puleo@icloud.com> wrote:
>Hello,
>
>One of my drives failed in my RAID6 and I’m trying to replace it
>without success.
>
>[...]


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [linux-lvm] Replace Drive in RAID6
  2021-11-03  5:56 [linux-lvm] Replace Drive in RAID6 Adam Puleo
  2021-11-03  6:41 ` Roberto Fastec
@ 2021-11-03 14:25 ` Andreas Schrägle
  2021-11-05  2:21   ` Adam Puleo
  1 sibling, 1 reply; 8+ messages in thread
From: Andreas Schrägle @ 2021-11-03 14:25 UTC (permalink / raw)
  To: Adam Puleo; +Cc: LVM general discussion and development

On Tue, 2 Nov 2021 22:56:18 -0700
Adam Puleo <adam.puleo@icloud.com> wrote:

> Hello,
> 
> One of my drives failed in my RAID6 and I’m trying to replace it without success.
> 
> I’m trying to rebuild the failed drive (/dev/sda): lvchange --rebuild /dev/sda vg_data
> 
> But I’m receiving the error: vg_data/lv_data must be active to perform this operation.
> 
> I have tried to activate the logical volume without success.
> 
> How do I go about activating the volume so that I can rebuild the failed drive?
> 
> Thanks,
> -Adam
> 
> # lvs -a -o name,segtype,devices
>   LV                       Type   Devices                                                                                            
>   lv_data                  raid6  lv_data_rimage_0(0),lv_data_rimage_1(0),lv_data_rimage_2(0),lv_data_rimage_3(0),lv_data_rimage_4(0)
>   [lv_data_rimage_0]       error                                                                                                     
>   [lv_data_rimage_1]       linear /dev/sdc1(1)                                                                                       
>   [lv_data_rimage_2]       linear /dev/sdb1(1)                                                                                       
>   [lv_data_rimage_3]       linear /dev/sdf1(1)                                                                                       
>   [lv_data_rimage_4]       linear /dev/sde1(2)                                                                                       
>   [lv_data_rmeta_0]        error                                                                                                     
>   [lv_data_rmeta_1]        linear /dev/sdc1(0)                                                                                       
>   [lv_data_rmeta_2]        linear /dev/sdb1(0)                                                                                       
>   [lv_data_rmeta_3]        linear /dev/sdf1(0)                                                                                       
>   [lv_data_rmeta_4]        linear /dev/sde1(0)                                                                                       
> 
> # lvs -a
>   LV                       VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   lv_data                  vg_data       rwi---r--- 990.00g                                                    
>   [lv_data_rimage_0]       vg_data       vwi-a-r-r- 330.00g                                                    
>   [lv_data_rimage_1]       vg_data       Iwi-a-r-r- 330.00g                                                    
>   [lv_data_rimage_2]       vg_data       Iwi-a-r-r- 330.00g                                                    
>   [lv_data_rimage_3]       vg_data       Iwi-a-r-r- 330.00g                                                    
>   [lv_data_rimage_4]       vg_data       Iwi-a-r-r- 330.00g                                                    
>   [lv_data_rmeta_0]        vg_data       ewi-a-r-r-   4.00m                                                    
>   [lv_data_rmeta_1]        vg_data       ewi-a-r-r-   4.00m                                                    
>   [lv_data_rmeta_2]        vg_data       ewi-a-r-r-   4.00m                                                    
>   [lv_data_rmeta_3]        vg_data       ewi-a-r-r-   4.00m                                                    
>   [lv_data_rmeta_4]        vg_data       ewi-a-r-r-   4.00m                                                    
> 
> 
> 

Hello Adam,

how exactly have you tried to activate the LV so far?

lvchange with --activationmode degraded should work, no?

Also, are you sure that --rebuild is the correct operation?

man 7 lvmraid suggests you might want --repair or --replace instead.
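
For example, a minimal sketch using the names from your listing (assuming the
failed PV has simply disappeared from the VG):

  lvchange -ay --activationmode degraded vg_data/lv_data   # bring the RaidLV up without leg #0
  lvconvert --repair vg_data/lv_data                       # rebuild the missing rimage/rmeta pair on a spare PV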

Best Regards


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [linux-lvm] Replace Drive in RAID6
  2021-11-03 14:25 ` Andreas Schrägle
@ 2021-11-05  2:21   ` Adam Puleo
  2021-11-08  7:06     ` Adam Puleo
  0 siblings, 1 reply; 8+ messages in thread
From: Adam Puleo @ 2021-11-05  2:21 UTC (permalink / raw)
  To: LVM general discussion and development

Hello Andreas,

After deactivating each of the individual rimage and rmeta volumes I receive this error:
# lvchange -a y --activationmode degraded vg_data/lv_data
  device-mapper: reload ioctl on  (253:12) failed: Invalid argument

In messages I see the following errors:
Nov  4 19:19:43 nas kernel: device-mapper: raid: Failed to read superblock of device at position 0
Nov  4 19:19:43 nas kernel: device-mapper: raid: New device injected into existing raid set without 'delta_disks' or 'rebuild' parameter specified
Nov  4 19:19:43 nas kernel: device-mapper: table: 253:12: raid: Unable to assemble array: Invalid superblocks
Nov  4 19:19:43 nas kernel: device-mapper: ioctl: error adding target to table

Am I not adding the new drive to the RAID correctly? I first did a pvcreate and then a vgextend.
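
For reference, this is roughly the sequence I ran to add the drive; /dev/sdg1
is only a placeholder for the new drive's partition:

  pvcreate /dev/sdg1           # initialize the replacement partition as a PV
  vgextend vg_data /dev/sdg1   # add the new PV to the volume group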

I was using the --rebuild option because I know which physical drive is bad. The lvmraid man page says --repair might not know which is the correct block to use, and to use --rebuild in that case.

Thank you,
-Adam



On Nov 3, 2021, at 7:25 AM, Andreas Schrägle <linux-lvm@ajs124.de> wrote:

[...]

Hello Adam,

how exactly have you tried to activate the LV so far?

lvchange with --activationmode degraded should work, no?

Also, are you sure that --rebuild is the correct operation?

man 7 lvmraid suggests you might want --repair or --replace instead.

Best Regards




_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [linux-lvm] Replace Drive in RAID6
  2021-11-05  2:21   ` Adam Puleo
@ 2021-11-08  7:06     ` Adam Puleo
  2021-11-16 17:41       ` Heinz Mauelshagen
  0 siblings, 1 reply; 8+ messages in thread
From: Adam Puleo @ 2021-11-08  7:06 UTC (permalink / raw)
  To: LVM general discussion and development

Hello Everyone,

Since the sub-LV #0 has errored, LVM will not let me activate the logical volume.

Is there a way to remap the #0 sub-LV to the replacement disk, or to resize the RAID6 to one fewer disk?

Thank you,
-Adam


On Nov 4, 2021, at 7:21 PM, Adam Puleo <adam.puleo@icloud.com> wrote:

[...]


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [linux-lvm] Replace Drive in RAID6
  2021-11-08  7:06     ` Adam Puleo
@ 2021-11-16 17:41       ` Heinz Mauelshagen
  2021-11-16 17:46         ` Heinz Mauelshagen
  0 siblings, 1 reply; 8+ messages in thread
From: Heinz Mauelshagen @ 2021-11-16 17:41 UTC (permalink / raw)
  To: LVM general discussion and development


[-- Attachment #1.1: Type: text/plain, Size: 6352 bytes --]

On Mon, Nov 8, 2021 at 8:08 AM Adam Puleo <adam.puleo@icloud.com> wrote:

> Hello Everyone,
>

Hi,
for starters, which kernel/distro is this?

Also, all layout changes on RaidLVs require the RaidLV to be active.



>
> Since the sub-LV #0 has errored, LVM will not let me activate the logical
> volume.
>
> Is there a way to remap the #0 sub-LV to the replacement disk, or to resize
> the RAID6 to one fewer disk?
>

Each of the RAID6 SubLV pairs has an internal id, and all data and parity
(P+Q syndromes) are stored in a rotating pattern, so no to the
remapping part.

Also no to the resize, as it'd need a fully operational raid6; thus
repairing the RaidLV is needed.

As mentioned, "lvchange --rebuild ..." is inadequate to repair RaidLVs
with broken/lost PVs; "lvchange --repair $RaidLV" is.

In order to diagnose why your raid6 LV now fails to activate via "lvchange
-ay --activationmode degraded $RaidLV", which is the proper way to go about
it, can you please describe any/all steps you took after the drive failure
rendered your raid6 LV degraded? Please don't change anything until you have
made that transparent, so that we keep our chances of fixing this...

FYI:
the difference between "lvconvert --repair ..." and "lvconvert --replace ...":
the former repairs RaidLVs with failed PVs by allocating space on different,
accessible PVs, causing the RaidLV to become fully operational again after
rebuilding all missing block content from parity stored on the remaining
rimage SubLVs; the latter replaces mappings to intact PVs, remapping the
RAID SubLV pair to different ones (e.g. faster or less contended PVs).
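
Sketched as commands, using the names from this thread (/dev/sdX1 stands in
for whichever intact PV you would want to move off of):

  lvconvert --repair vg_data/lv_data              # failed PV: reallocate SubLV pair #0, rebuild from parity
  lvconvert --replace /dev/sdX1 vg_data/lv_data   # intact PV only: remap its SubLV pair elsewhere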

Thanks,
Heinz




_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [linux-lvm] Replace Drive in RAID6
  2021-11-16 17:41       ` Heinz Mauelshagen
@ 2021-11-16 17:46         ` Heinz Mauelshagen
  2021-11-20  7:15           ` Adam Puleo
  0 siblings, 1 reply; 8+ messages in thread
From: Heinz Mauelshagen @ 2021-11-16 17:46 UTC (permalink / raw)
  To: LVM general discussion and development


[-- Attachment #1.1: Type: text/plain, Size: 6716 bytes --]

Erm, s/lvchange --repair/lvconvert --repair/

On Tue, Nov 16, 2021 at 6:41 PM Heinz Mauelshagen <heinzm@redhat.com> wrote:

> [...]
>
> As mentioned, "lvchange --rebuild ..." is inadequate to repair RaidLVs
> with broken/lost PVs; "lvchange --repair $RaidLV" is.
>
> [...]


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [linux-lvm] Replace Drive in RAID6
  2021-11-16 17:46         ` Heinz Mauelshagen
@ 2021-11-20  7:15           ` Adam Puleo
  0 siblings, 0 replies; 8+ messages in thread
From: Adam Puleo @ 2021-11-20  7:15 UTC (permalink / raw)
  To: LVM general discussion and development



Hello Heinz,

This was CentOS 7.x using a 5.x kernel.

I could not get the LV activated until I mapped lv_data_rmeta_0 & lv_data_rimage_0 volumes to a PV.

I did the mapping by creating a copy of the current VG config file from /etc/lvm/backup and manually adding the missing PV to each of the LVs (lv_data_rmeta_0 & lv_data_rimage_0). After I edited the file I ran vgcfgrestore.
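
Roughly this, with placeholder file names (and note that your lvm2 version may
insist on --force for the restore):

  cp /etc/lvm/backup/vg_data /root/vg_data.edit   # work on a copy of the metadata backup
  vi /root/vg_data.edit                           # point lv_data_rmeta_0 / lv_data_rimage_0 at the new PV
  vgcfgrestore -f /root/vg_data.edit vg_data      # write the edited metadata back into the VG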

At this point I was able to activate the LV and sync the RAID. After the synchronization was done I mounted the file system (XFS) but it was corrupted. It was missing most of my newer files. I had a backup so I restored from that.
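
The activation and resync step itself was roughly (copy_percent is lvm2's
standard report field behind the Cpy%Sync column):

  lvchange -ay --activationmode degraded vg_data/lv_data
  lvs -a -o name,copy_percent vg_data   # watch the sync percentage climb to 100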

-adam


> On Nov 16, 2021, at 10:46 AM, Heinz Mauelshagen <heinzm@redhat.com> wrote:
> 
> Erm, s/lvchange --repair/lvconvert --repair/
> 
> [...]



_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2021-11-22  7:10 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-03  5:56 [linux-lvm] Replace Drive in RAID6 Adam Puleo
2021-11-03  6:41 ` Roberto Fastec
2021-11-03 14:25 ` Andreas Schrägle
2021-11-05  2:21   ` Adam Puleo
2021-11-08  7:06     ` Adam Puleo
2021-11-16 17:41       ` Heinz Mauelshagen
2021-11-16 17:46         ` Heinz Mauelshagen
2021-11-20  7:15           ` Adam Puleo

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).