* Raid 1 needs repair
@ 2016-02-04 14:21 Stefan Lamby
  2016-02-04 14:51 ` Phil Turmel
  0 siblings, 1 reply; 8+ messages in thread
From: Stefan Lamby @ 2016-02-04 14:21 UTC (permalink / raw)
  To: linux-raid

Hi.
My md0 is broken because of a damaged sda.
It was simple to find out which one it is, thanks to lsdrv by Mr Turmel.

Now I have a new disk, the same size as the old sda, with an sda1 partition (also
the same size) with its partition type set to Linux raid autodetect.
In the end I would like to end up with the new sda1 as a member of md0, and I need
to put my boot information there, since sdb1, also a member of md0, seems to have
no such info because the system wasn't booting with sda1 missing.

I am afraid of losing data, so I prefer to ask what to do next.
Please guide me; I have the system up and running on a sysrescd live distro.

Thank you very much.
-fuz


Here is some further information.


----------------- /sbin/lsdrv specific information ------------------------------------------
PCI [ahci] 00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series
Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)
├scsi 0:0:0:0 ATA      WDC WD1001FALS-0 {WD-WMATV0444833}
│└sda 931.51g [8:0] Empty/Unknown
│ └sda1 931.51g [8:1] Empty/Unknown
│  └md0 931.39g [9:0] MD v1.2 raid1 (2) active {None}
│   │                 Empty/Unknown
│   ├dm-1 93.13g [252:1] Empty/Unknown
│   │└Mounted as /dev/mapper/vg_raid1-root @ /
│   ├dm-2 9.31g [252:2] Empty/Unknown
│   └dm-3 186.26g [252:3] Empty/Unknown
│    └Mounted as /dev/mapper/vg_raid1-home @ /home
├scsi 1:0:0:0 ATA      WDC WD10EARS-00Y {WD-WCAV5D907687}
│└sdb 931.51g [8:16] Empty/Unknown
│ └sdb1 931.51g [8:17] Empty/Unknown
│  └md0 931.39g [9:0] MD v1.2 raid1 (2) active {None}
│                     Empty/Unknown
├scsi 2:x:x:x [Empty]
├scsi 3:0:0:0 ATA      SAMSUNG HD103UJ  {S13PJ9BQ900571}
│└sdc 931.51g [8:32] Empty/Unknown
│ └sdc1 931.51g [8:33] Empty/Unknown
│  └md1 931.39g [9:1] MD v1.2 raid1 (2) clean {None}
│   │                 Empty/Unknown
│   └dm-0 500.00g [252:0] Empty/Unknown
│    └Mounted as /dev/mapper/vg_backup_raid1-lv_backup @ /backup
├scsi 4:0:0:0 ATA      SAMSUNG HD103UJ  {S13PJ9BQA17616}
│└sdd 931.51g [8:48] Empty/Unknown
│ └sdd1 931.51g [8:49] Empty/Unknown
│  └md1 931.39g [9:1] MD v1.2 raid1 (2) clean {None}
│                     Empty/Unknown
└scsi 5:x:x:x [Empty]
Other Block Devices
├loop0 0.00k [7:0] Empty/Unknown
├loop1 0.00k [7:1] Empty/Unknown
├loop2 0.00k [7:2] Empty/Unknown
├loop3 0.00k [7:3] Empty/Unknown
├loop4 0.00k [7:4] Empty/Unknown
├loop5 0.00k [7:5] Empty/Unknown
├loop6 0.00k [7:6] Empty/Unknown
├loop7 0.00k [7:7] Empty/Unknown
├ram0 64.00m [1:0] Empty/Unknown
├ram1 64.00m [1:1] Empty/Unknown
├ram2 64.00m [1:2] Empty/Unknown
├ram3 64.00m [1:3] Empty/Unknown
├ram4 64.00m [1:4] Empty/Unknown
├ram5 64.00m [1:5] Empty/Unknown
├ram6 64.00m [1:6] Empty/Unknown
├ram7 64.00m [1:7] Empty/Unknown
├ram8 64.00m [1:8] Empty/Unknown
├ram9 64.00m [1:9] Empty/Unknown
├ram10 64.00m [1:10] Empty/Unknown
├ram11 64.00m [1:11] Empty/Unknown
├ram12 64.00m [1:12] Empty/Unknown
├ram13 64.00m [1:13] Empty/Unknown
├ram14 64.00m [1:14] Empty/Unknown
└ram15 64.00m [1:15] Empty/Unknown

----------------- RAID specific information (mdadm --examine /dev/sdXY) ---------------------
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 68c0c9ad:82ede879:2110f427:9f31c140
           Name : speernix15:0
  Creation Time : Sun Nov 30 18:15:35 2014
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1953259520 (931.39 GiB 1000.07 GB)
     Array Size : 976629568 (931.39 GiB 1000.07 GB)
  Used Dev Size : 1953259136 (931.39 GiB 1000.07 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=384 sectors
          State : clean
    Device UUID : 94a0a338:7876e6f4:f8eaa27c:05a34dfa

    Update Time : Thu Feb  4 05:38:26 2016
       Checksum : 7fe3fbef - correct
         Events : 700


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a9a43761:8148b29e:5db49d7a:c4d0a219
           Name : speernix15:1
  Creation Time : Sun Dec 21 10:18:05 2014
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1953260976 (931.39 GiB 1000.07 GB)
     Array Size : 976630336 (931.39 GiB 1000.07 GB)
  Used Dev Size : 1953260672 (931.39 GiB 1000.07 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=304 sectors
          State : clean
    Device UUID : 8150c54e:3b72da54:4544c33d:36372195

    Update Time : Thu Feb  4 11:58:22 2016
       Checksum : 56cbbb8d - correct
         Events : 571


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)


----------------- disk specific information (ls -l /dev/disk/by-id/) -----------------------
total 0
lrwxrwxrwx 1 root root  9 Apr  9 09:36 ata-SAMSUNG_HD103UJ_S13PJ9BQ900571 ->
../../sdc
lrwxrwxrwx 1 root root 10 Apr  9 09:36 ata-SAMSUNG_HD103UJ_S13PJ9BQ900571-part1
-> ../../sdc1
lrwxrwxrwx 1 root root  9 Apr  9 09:36 ata-SAMSUNG_HD103UJ_S13PJ9BQA17616 ->
../../sdd
lrwxrwxrwx 1 root root 10 Apr  9 09:36 ata-SAMSUNG_HD103UJ_S13PJ9BQA17616-part1
-> ../../sdd1
lrwxrwxrwx 1 root root  9 Apr  9 09:36 ata-WDC_WD1001FALS-00J7B0_WD-WMATV0444833
-> ../../sda
lrwxrwxrwx 1 root root 10 Apr  9 09:36
ata-WDC_WD1001FALS-00J7B0_WD-WMATV0444833-part1 -> ../../sda1
lrwxrwxrwx 1 root root  9 Apr  9 09:36 ata-WDC_WD10EARS-00Y5B1_WD-WCAV5D907687
-> ../../sdb
lrwxrwxrwx 1 root root 10 Apr  9 09:36
ata-WDC_WD10EARS-00Y5B1_WD-WCAV5D907687-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Apr  9 09:36 dm-name-vg_backup_raid1-lv_backup ->
../../dm-0
lrwxrwxrwx 1 root root 10 Apr  9 09:36 dm-name-vg_raid1-home -> ../../dm-3
lrwxrwxrwx 1 root root 10 Apr  9 09:36 dm-name-vg_raid1-root -> ../../dm-1
lrwxrwxrwx 1 root root 10 Apr  9 09:36 dm-name-vg_raid1-swap -> ../../dm-2
lrwxrwxrwx 1 root root 10 Apr  9 09:36
dm-uuid-LVM-esKsCrquUIckjFQ6jcwYYZeKL4MRas87bdFuJuYmp3W1AwMDgzs0twPC8CLUZfV6 ->
../../dm-0
lrwxrwxrwx 1 root root 10 Apr  9 09:36
dm-uuid-LVM-mopUewB2i49fmomlXPZ8C3nPYOXM56AY3j92rd0GrGN3Ugvuxb8mlMbC36fMWbYO ->
../../dm-2
lrwxrwxrwx 1 root root 10 Apr  9 09:36
dm-uuid-LVM-mopUewB2i49fmomlXPZ8C3nPYOXM56AYplkh4xXCaC38PiDHPMyzTTs9JE5BRIHa ->
../../dm-3
lrwxrwxrwx 1 root root 10 Apr  9 09:36
dm-uuid-LVM-mopUewB2i49fmomlXPZ8C3nPYOXM56AYWBztmrFbDYLRGYdm6Eqi4oe4vvpVir3t ->
../../dm-1
lrwxrwxrwx 1 root root  9 Apr  9 09:36 md-name-speernix15:0 -> ../../md0
lrwxrwxrwx 1 root root  9 Apr  9 09:36 md-name-speernix15:1 -> ../../md1
lrwxrwxrwx 1 root root  9 Apr  9 09:36
md-uuid-68c0c9ad:82ede879:2110f427:9f31c140 -> ../../md0
lrwxrwxrwx 1 root root  9 Apr  9 09:36
md-uuid-a9a43761:8148b29e:5db49d7a:c4d0a219 -> ../../md1
lrwxrwxrwx 1 root root  9 Apr  9 09:36 wwn-0x50000f0007095017 -> ../../sdc
lrwxrwxrwx 1 root root 10 Apr  9 09:36 wwn-0x50000f0007095017-part1 ->
../../sdc1
lrwxrwxrwx 1 root root  9 Apr  9 09:36 wwn-0x50000f00071a6761 -> ../../sdd
lrwxrwxrwx 1 root root 10 Apr  9 09:36 wwn-0x50000f00071a6761-part1 ->
../../sdd1
lrwxrwxrwx 1 root root  9 Apr  9 09:36 wwn-0x50014ee0aba9f580 -> ../../sda
lrwxrwxrwx 1 root root 10 Apr  9 09:36 wwn-0x50014ee0aba9f580-part1 ->
../../sda1
lrwxrwxrwx 1 root root  9 Apr  9 09:36 wwn-0x50014ee259f4ab35 -> ../../sdb
lrwxrwxrwx 1 root root 10 Apr  9 09:36 wwn-0x50014ee259f4ab35-part1 ->
../../sdb1


Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb1[0] sdc1[1]
      976630336 blocks super 1.2 [2/2] [UU]
     
md0 : active raid1 sda1[1]
      976629568 blocks super 1.2 [2/1] [_U]
     
unused devices: <none>
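As an aside: the [_U] field in the mdstat output above is what marks md0 as degraded (one of its two mirror slots is missing), while md1's [UU] shows both members present. As a rough sketch (not from the original thread; the function name check_mdstat is hypothetical), this condition can be checked programmatically:

```shell
# Sketch: report whether any md array in an mdstat-format file is degraded.
# A '_' in the trailing [UU]-style status field means a missing member.
# Takes a file argument so it can be exercised against sample data;
# point it at /proc/mdstat on a live system.
check_mdstat() {
    if grep -o '\[[U_]*]$' "$1" | grep -q '_'; then
        echo DEGRADED
    else
        echo OK
    fi
}
```

Against the output above it would report DEGRADED, because md0's status line ends in [_U].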
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Raid 1 needs repair
  2016-02-04 14:21 Raid 1 needs repair Stefan Lamby
@ 2016-02-04 14:51 ` Phil Turmel
  2016-02-04 16:03   ` Stefan Lamby
  0 siblings, 1 reply; 8+ messages in thread
From: Phil Turmel @ 2016-02-04 14:51 UTC (permalink / raw)
  To: Stefan Lamby, linux-raid

Good morning Stefan,

On 02/04/2016 09:21 AM, Stefan Lamby wrote:
> Hi.
> My md0 is broken because of a damaged sda.
> It was simple to find out which one it is, thanks to lsdrv by Mr Turmel.

You're welcome.  Side note:  It looks like lsdrv couldn't find the pvs
and lvs utilities to fully document your LVM setup.  I'd be interested
in knowing which distro and version that environment was.

> Now I have a new disk, the same size as the old sda, with an sda1 partition (also
> the same size) with its partition type set to Linux raid autodetect.
> In the end I would like to end up with the new sda1 as a member of md0, and I need
> to put my boot information there, since sdb1, also a member of md0, seems to have
> no such info because the system wasn't booting with sda1 missing.

From what I can see from your report, you probably only need to do
grub-install /dev/sda and grub-install /dev/sdb

The boot partition is mirrored for the bulk of grub's data, which means
normal upgrades and config changes "just work", but a new drive needs a
bootloader.  That is outside the array in the space before the first
partition.

> I am afraid of losing data, so I prefer to ask what to do next.
> Please guide me; I have the system up and running on a sysrescd live distro.

I presume that means you booted a rescue CD and then used chroot to get
a command prompt in your installed system.  If not, grub-install
probably won't work.  Note that some distros use grub2-install.

If the above doesn't work, let us know what it says.
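The rescue-media procedure alluded to above amounts to something like the following. This is a hedged sketch, not from the original thread: the device and volume-group names are taken from the lsdrv output earlier in the thread and may differ on your system, and the DRY_RUN wrapper only prints each command instead of executing it.

```shell
# Sketch of reinstalling grub from a rescue CD via chroot.
# With DRY_RUN=1 (the default here) every command is echoed, not run.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

run mdadm --assemble --scan              # assemble md0/md1 in the rescue environment
run vgchange -ay vg_raid1                # activate the LVM volume group on md0
run mount /dev/vg_raid1/root /mnt        # mount the installed root filesystem
for d in dev proc sys; do
    run mount --bind "/$d" "/mnt/$d"     # bind mounts so grub-install can see devices
done
run chroot /mnt grub-install /dev/sda    # bootloader onto the new disk
run chroot /mnt grub-install /dev/sdb    # ...and onto the surviving mirror member
```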

Phil



* Re: Raid 1 needs repair
  2016-02-04 14:51 ` Phil Turmel
@ 2016-02-04 16:03   ` Stefan Lamby
  2016-02-04 16:45     ` Phil Turmel
  0 siblings, 1 reply; 8+ messages in thread
From: Stefan Lamby @ 2016-02-04 16:03 UTC (permalink / raw)
  To: Phil Turmel, linux-raid

Good morning Phil.
I was hoping you were around - thanks for answering.
The distro is Ubuntu LTS server 14.04.

I was able to chroot and run grub-install /dev/sdb.
The system is booting again. The md0 array needs to be fixed now.

My documentation for doing this is:
mdadm --manage /dev/md0 --fail /dev/sda1    # mark as failed
mdadm --manage /dev/md0 --remove /dev/sda1  # remove failed disk from array
mdadm --manage /dev/md0 --add /dev/sda1     # add new disk to array (assumes
same or greater partition size)
Can you confirm this?

Thanks for your help.



* Re: Raid 1 needs repair
  2016-02-04 16:03   ` Stefan Lamby
@ 2016-02-04 16:45     ` Phil Turmel
  2016-02-04 17:41       ` Stefan Lamby
  0 siblings, 1 reply; 8+ messages in thread
From: Phil Turmel @ 2016-02-04 16:45 UTC (permalink / raw)
  To: Stefan Lamby, linux-raid

On 02/04/2016 11:03 AM, Stefan Lamby wrote:
> Good morning Phil.
> I was hoping you were around - thanks for answering.
> The distro is Ubuntu LTS server 14.04.
> 
> I was able to chroot and run grub-install /dev/sdb.
> The system is booting again. The md0 array needs to be fixed now.
> 
> My documentation for doing this is:
> mdadm --manage /dev/md0 --fail /dev/sda1    # mark as failed
> mdadm --manage /dev/md0 --remove /dev/sda1  # remove failed disk from array
> mdadm --manage /dev/md0 --add /dev/sda1     # add new disk to array (assumes
> same or greater partition size)
> 
> Can you confirm this?

You may also need "mdadm --zero-superblock /dev/sda1" before the add
command.

But if it's currently active in the array, why do you need to fail it?
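The sequence under discussion, including the optional superblock wipe, would look roughly like this. A hedged sketch, not from the thread itself: the device names are the ones reported in this thread, and the DRY_RUN wrapper only echoes each command so nothing is touched until you deliberately set DRY_RUN=0.

```shell
# Dry-run sketch of replacing a failed raid1 member.
# With DRY_RUN=1 (the default here) every command is echoed, not run.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

run mdadm --manage /dev/md0 --remove /dev/sda1   # drop the old member, if still listed
run mdadm --zero-superblock /dev/sda1            # clear stale metadata on the replacement
run mdadm --manage /dev/md0 --add /dev/sda1      # add it; the kernel then resyncs the mirror
run cat /proc/mdstat                             # optional: check resync progress
```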

Phil



* Re: Raid 1 needs repair
  2016-02-04 16:45     ` Phil Turmel
@ 2016-02-04 17:41       ` Stefan Lamby
  2016-02-04 21:30         ` Phil Turmel
  0 siblings, 1 reply; 8+ messages in thread
From: Stefan Lamby @ 2016-02-04 17:41 UTC (permalink / raw)
  To: Phil Turmel, linux-raid


> You may also need "mdadm --zero-superblock /dev/sda1" before the add
> command.
>
> But if it's currently active in the array, why do you need to fail it?

So this is the right way?

mdadm --manage /dev/md0 --remove /dev/sda1  # remove failed disk from array
mdadm --zero-superblock /dev/sda1           # clear stale metadata
mdadm --manage /dev/md0 --add /dev/sda1     # add new disk to array (assumes
same or greater partition size)


* Re: Raid 1 needs repair
  2016-02-04 17:41       ` Stefan Lamby
@ 2016-02-04 21:30         ` Phil Turmel
  2016-02-05 10:02           ` Stefan Lamby
  0 siblings, 1 reply; 8+ messages in thread
From: Phil Turmel @ 2016-02-04 21:30 UTC (permalink / raw)
  To: Stefan Lamby, linux-raid

On 02/04/2016 12:41 PM, Stefan Lamby wrote:
> 
>> You may also need "mdadm --zero-superblock /dev/sda1" before the add
>> command.
>>
>> But if it's currently active in the array, why do you need to fail it?
> 
> So this is the right way?
> 
> mdadm --manage /dev/md0 --remove /dev/sda1  # remove failed disk from array
> mdadm --zero-superblock /dev/sda1           # clear stale metadata
> mdadm --manage /dev/md0 --add /dev/sda1     # add new disk to array (assumes
> same or greater partition size)

Your mdstat and your lsdrv output disagree on the status of /dev/sdb1.
If mdstat is current, what belongs to what?  Your --examine results
suggest sda1 and sdb1 go together, but mdstat shows sdc1 running with
sdb1 in another array and sda1 all alone.  If sda1 is all alone in an
array you can't remove it.

Your device names are probably changing between boots (not unusual) and
it is confusing your report.

Please show lsdrv results after you add the proper utilities to the
environment you run lsdrv from.  At least install lvm utilities and
mdadm utilities.

Phil


* Re: Raid 1 needs repair
  2016-02-04 21:30         ` Phil Turmel
@ 2016-02-05 10:02           ` Stefan Lamby
       [not found]             ` <1543635690.543147.1454691080103.JavaMail.open-xchange@app03.ox.hosteurope.de>
  0 siblings, 1 reply; 8+ messages in thread
From: Stefan Lamby @ 2016-02-05 10:02 UTC (permalink / raw)
  To: Phil Turmel, linux-raid


> Please show lsdrv results after you add the proper utilities to the
> environment you run lsdrv from. At least install lvm utilities and
> mdadm utilities.
 
This is the current output from the running system. sda is no longer the same
disk; it has been replaced with a new one.
Hope it makes sense.
 
lsdrv
PCI [ata_generic] 00:16.2 IDE interface: Intel Corporation 8 Series/C220 Series
Chipset Family IDE-r Controller (rev 04)
├scsi 0:x:x:x [Empty]
└scsi 1:x:x:x [Empty]
PCI [ahci] 00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series
Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)
├scsi 2:0:0:0 ATA      WDC WD1003FZEX-0 {WD-WCC3F4PNVSLF}
│└sda 931.51g [8:0] Partitioned (dos)
│ └sda1 931.51g [8:1] Empty/Unknown
├scsi 3:0:0:0 ATA      WDC WD10EARS-00Y {WD-WCAV5D907687}
│└sdb 931.51g [8:16] Partitioned (dos)
│ └sdb1 931.51g [8:17] MD raid1 (1/2) in_sync 'speernix15:0'
{68c0c9ad-82ed-e879-2110-f4279f31c140}
│  └md0 931.39g [9:0] MD v1.2 raid1 (2) active DEGRADED
{68c0c9ad:82ede879:2110f427:9f31c140}
│   │                 PV LVM2_member 288,70g used, 642,68g free
{hHvrtB-7XOz-L6j6-AnXd-Q3Wh-uHqw-bQ3RQS}
│   └VG vg_raid1 931,38g 642,68g free {mopUew-B2i4-9fmo-mlXP-Z8C3-nPYO-XM56AY}
│    ├dm-2 186.26g [252:2] LV home ext4 {8ee8d203-f306-4846-93fe-225b018f2965}
│    │└Mounted as /dev/mapper/vg_raid1-home @ /home
│    ├dm-0 93.13g [252:0] LV root ext4 {9f9568cf-d48a-4690-97a8-14576d724daf}
│    │└Mounted as /dev/mapper/vg_raid1-root @ /
│    └dm-1 9.31g [252:1] LV swap swap {9fbbac0f-0d49-47b0-a50b-d293f19f23ef}
├scsi 4:x:x:x [Empty]
├scsi 5:0:0:0 ATA      SAMSUNG HD103UJ  {S13PJ9BQ900571}
│└sdc 931.51g [8:32] Partitioned (dos)
│ └sdc1 931.51g [8:33] MD raid1 (0/2) (w/ sdd1) in_sync 'speernix15:1'
{a9a43761-8148-b29e-5db4-9d7ac4d0a219}
│  └md1 931.39g [9:1] MD v1.2 raid1 (2) clean
{a9a43761:8148b29e:5db49d7a:c4d0a219}
│   │                 PV LVM2_member 500,00g used, 431,38g free
{JIUAPd-MMX8-k2UK-1NDY-h2Od-iBHk-a1QvgX}
│   └VG vg_backup_raid1 931,38g 431,38g free
{esKsCr-quUI-ckjF-Q6jc-wYYZ-eKL4-MRas87}
│    └dm-3 500.00g [252:3] LV lv_backup ext4
{e3600494-308f-4874-9083-ecd31a66e68a}
│     └Mounted as /dev/mapper/vg_backup_raid1-lv_backup @ /backup
├scsi 6:0:0:0 ATA      SAMSUNG HD103UJ  {S13PJ9BQA17616}
│└sdd 931.51g [8:48] Partitioned (dos)
│ └sdd1 931.51g [8:49] MD raid1 (1/2) (w/ sdc1) in_sync 'speernix15:1'
{a9a43761-8148-b29e-5db4-9d7ac4d0a219}
│  └md1 931.39g [9:1] MD v1.2 raid1 (2) clean
{a9a43761:8148b29e:5db49d7a:c4d0a219}
│                     PV LVM2_member 500,00g used, 431,38g free
{JIUAPd-MMX8-k2UK-1NDY-h2Od-iBHk-a1QvgX}
└scsi 7:x:x:x [Empty]
 
 
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdd1[1] sdc1[0]
      976630336 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sdb1[1]
      976629568 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>

pvs
  PV         VG              Fmt  Attr PSize   PFree  
  /dev/md0   vg_raid1        lvm2 a--  931,38g 642,68g
  /dev/md1   vg_backup_raid1 lvm2 a--  931,38g 431,38g
 
lvs
  LV        VG              Attr      LSize   Pool Origin Data%  Move Log Copy% Convert
  lv_backup vg_backup_raid1 -wi-ao--- 500,00g
  home      vg_raid1        -wi-ao--- 186,26g
  root      vg_raid1        -wi-ao---  93,13g
  swap      vg_raid1        -wi-ao---   9,31g


* Re: [SOLVED] Re: Raid 1 needs repair
       [not found]             ` <1543635690.543147.1454691080103.JavaMail.open-xchange@app03.ox.hosteurope.de>
@ 2016-02-05 16:54               ` Phil Turmel
  0 siblings, 0 replies; 8+ messages in thread
From: Phil Turmel @ 2016-02-05 16:54 UTC (permalink / raw)
  To: Stefan Lamby, linux-raid

On 02/05/2016 11:51 AM, Stefan Lamby wrote:
> Hi.
> I have it up and synced again.
> A simple mdadm --manage /dev/md0 --add /dev/sda1 did it, without
> anything else. No --fail, no --remove, no --zero-superblock needed.
> After that I ran "grub-install /dev/sda", and now grub is installed on
> both array members.
>  
> Thank you for your support.

Yes, that would have been my suggestion if I had caught up on my
e-mails.  I'm in the middle of a kernel bisection :-(

Phil

