* RAID5 grow interrupted.
@ 2016-09-22 14:11 Axel Spallek
  2016-09-22 15:18 ` Wols Lists
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Axel Spallek @ 2016-09-22 14:11 UTC (permalink / raw)
  To: linux-raid

Hello there.

I did something wrong.

I tried to take a disk out of a RAID5 of 8 HDDs (4 TB each) to gain a
hot spare for converting to RAID6 afterwards.

The RAID was clean and not rebuilding before I started.

The partition was only 11 TB. It was not yet resized, because it had
not been created with 64-bit support, which I wanted to change afterwards.

Therefore I issued the following commands:

mdadm --grow -n7 /dev/md1  <-- just to get the size for the next command.

mdadm --grow /dev/md1 --array-size 23441292288

mdadm --grow -n7 /dev/md1 --backup-file /var/backups/mdadm.backup
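
(The --array-size value is what a 7-device RAID5 can hold: 6 data
devices x 3906882048 KiB per device = 23441292288 KiB, where
3906882048 KiB is the Used Dev Size of 7813764096 sectors shown in
the --examine output below.)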


The RAID /dev/md1 is mounted on /srv, so the backup file in /var/backups 
is not on the array itself and is safe.

After some time, someone told me that Seafile was not working. Since I 
was in a hurry, I just rebooted the server and forgot about the running 
RAID reshape.

The server came up again, but without /dev/md1.

I have a backup, which is 2 days old. Not so bad, because I have the 
Seafile data on several computers. But getting the RAID back to work 
would be better.

How do I resume the reshape with the backup file?

This is what I get in the console:

root@s10:~# cat /proc/mdstat
Personalities :
md1 : inactive sdh1[0](S) sda1[5](S) sdb1[6](S) sdc1[7](S) sdd1[8](S) 
sdf1[4](S) sde1[2](S) sdg1[1](S)
       31255059140 blocks super 1.2

unused devices: <none>


root@s10:~# mdadm -A --scan --verbose
mdadm: looking for devices for further assembly
mdadm: /dev/sdg1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sdg
mdadm: /dev/sdd1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sdd
mdadm: /dev/sdc1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sdc
mdadm: /dev/sda1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sda
mdadm: /dev/sdb1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: /dev/sdf1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sdf
mdadm: /dev/sdh1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sdh
mdadm: /dev/sde1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sde
mdadm: no recogniseable superblock on /dev/sdj5
mdadm: Cannot assemble mbr metadata on /dev/sdj2
mdadm: no recogniseable superblock on /dev/sdj1
mdadm: Cannot assemble mbr metadata on /dev/sdj
mdadm: no recogniseable superblock on /dev/sdi1
mdadm: Cannot assemble mbr metadata on /dev/sdi
mdadm: No arrays found in config file or automatically


root@s10:~# mdadm --examine --scan
ARRAY /dev/md/1  metadata=1.2 UUID=48f60e15:900f47cc:6c5f42b1:82f01530 
name=s10:1


root@s10:~# mdadm --examine /dev/sd*
/dev/sda:
    MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sda1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x4
      Array UUID : 48f60e15:900f47cc:6c5f42b1:82f01530
            Name : s10:1  (local to host s10)
   Creation Time : Fri Sep 16 06:59:32 2016
      Raid Level : raid5
    Raid Devices : 7

  Avail Dev Size : 7813764785 (3725.89 GiB 4000.65 GB)
      Array Size : 23441292288 (22355.36 GiB 24003.88 GB)
   Used Dev Size : 7813764096 (3725.89 GiB 4000.65 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=689 sectors
           State : clean
     Device UUID : 9fe5980b:be2beb6b:59537ad1:90091564

   Reshape pos'n : 22485531648 (21443.87 GiB 23025.18 GB)
   Delta Devices : -1 (8->7)

     Update Time : Thu Sep 22 14:23:54 2016
        Checksum : 1ea679f3 - correct
          Events : 180344

          Layout : left-symmetric
      Chunk Size : 512K

    Device Role : Active device 7
    Array State : AAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
    MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sdb1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x4
      Array UUID : 48f60e15:900f47cc:6c5f42b1:82f01530
            Name : s10:1  (local to host s10)
   Creation Time : Fri Sep 16 06:59:32 2016
      Raid Level : raid5
    Raid Devices : 7

  Avail Dev Size : 7813764785 (3725.89 GiB 4000.65 GB)
      Array Size : 23441292288 (22355.36 GiB 24003.88 GB)
   Used Dev Size : 7813764096 (3725.89 GiB 4000.65 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=689 sectors
           State : clean
     Device UUID : 370c5540:2d3bdd3e:40a36449:b82309a8

   Reshape pos'n : 22485531648 (21443.87 GiB 23025.18 GB)
   Delta Devices : -1 (8->7)

     Update Time : Thu Sep 22 14:23:54 2016
        Checksum : e2331a09 - correct
          Events : 180344

          Layout : left-symmetric
      Chunk Size : 512K

    Device Role : Active device 6
    Array State : AAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
    MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sdc1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x4
      Array UUID : 48f60e15:900f47cc:6c5f42b1:82f01530
            Name : s10:1  (local to host s10)
   Creation Time : Fri Sep 16 06:59:32 2016
      Raid Level : raid5
    Raid Devices : 7

  Avail Dev Size : 7813764785 (3725.89 GiB 4000.65 GB)
      Array Size : 23441292288 (22355.36 GiB 24003.88 GB)
   Used Dev Size : 7813764096 (3725.89 GiB 4000.65 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=689 sectors
           State : clean
     Device UUID : a9e8caf9:4c70d937:50e55bdf:b736cb97

   Reshape pos'n : 22485531648 (21443.87 GiB 23025.18 GB)
   Delta Devices : -1 (8->7)

     Update Time : Thu Sep 22 14:23:54 2016
        Checksum : 1a5e80ac - correct
          Events : 180344

          Layout : left-symmetric
      Chunk Size : 512K

    Device Role : Active device 5
    Array State : AAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
    MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sdd1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x4
      Array UUID : 48f60e15:900f47cc:6c5f42b1:82f01530
            Name : s10:1  (local to host s10)
   Creation Time : Fri Sep 16 06:59:32 2016
      Raid Level : raid5
    Raid Devices : 7

  Avail Dev Size : 7813764785 (3725.89 GiB 4000.65 GB)
      Array Size : 23441292288 (22355.36 GiB 24003.88 GB)
   Used Dev Size : 7813764096 (3725.89 GiB 4000.65 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=689 sectors
           State : clean
     Device UUID : 50a6466d:057b9171:22865989:cabb59ce

   Reshape pos'n : 22485531648 (21443.87 GiB 23025.18 GB)
   Delta Devices : -1 (8->7)

     Update Time : Thu Sep 22 14:23:54 2016
        Checksum : a81e6ef1 - correct
          Events : 180344

          Layout : left-symmetric
      Chunk Size : 512K

    Device Role : Active device 4
    Array State : AAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde:
    MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sde1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x4
      Array UUID : 48f60e15:900f47cc:6c5f42b1:82f01530
            Name : s10:1  (local to host s10)
   Creation Time : Fri Sep 16 06:59:32 2016
      Raid Level : raid5
    Raid Devices : 7

  Avail Dev Size : 7813764785 (3725.89 GiB 4000.65 GB)
      Array Size : 23441292288 (22355.36 GiB 24003.88 GB)
   Used Dev Size : 7813764096 (3725.89 GiB 4000.65 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=689 sectors
           State : clean
     Device UUID : 1fd7fdde:3220d053:1cad772f:508ed8a7

   Reshape pos'n : 22485531648 (21443.87 GiB 23025.18 GB)
   Delta Devices : -1 (8->7)

     Update Time : Thu Sep 22 14:23:54 2016
        Checksum : 7bb13e67 - correct
          Events : 180344

          Layout : left-symmetric
      Chunk Size : 512K

    Device Role : Active device 2
    Array State : AAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf:
    MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sdf1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x4
      Array UUID : 48f60e15:900f47cc:6c5f42b1:82f01530
            Name : s10:1  (local to host s10)
   Creation Time : Fri Sep 16 06:59:32 2016
      Raid Level : raid5
    Raid Devices : 7

  Avail Dev Size : 7813764785 (3725.89 GiB 4000.65 GB)
      Array Size : 23441292288 (22355.36 GiB 24003.88 GB)
   Used Dev Size : 7813764096 (3725.89 GiB 4000.65 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=689 sectors
           State : clean
     Device UUID : b2788315:a75fec1f:1d2681ee:2bba1be7

   Reshape pos'n : 22485531648 (21443.87 GiB 23025.18 GB)
   Delta Devices : -1 (8->7)

     Update Time : Thu Sep 22 14:23:54 2016
        Checksum : 7c9fc44d - correct
          Events : 180344

          Layout : left-symmetric
      Chunk Size : 512K

    Device Role : Active device 3
    Array State : AAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdg:
    MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sdg1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x4
      Array UUID : 48f60e15:900f47cc:6c5f42b1:82f01530
            Name : s10:1  (local to host s10)
   Creation Time : Fri Sep 16 06:59:32 2016
      Raid Level : raid5
    Raid Devices : 7

  Avail Dev Size : 7813764785 (3725.89 GiB 4000.65 GB)
      Array Size : 23441292288 (22355.36 GiB 24003.88 GB)
   Used Dev Size : 7813764096 (3725.89 GiB 4000.65 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=689 sectors
           State : clean
     Device UUID : 4e7223d6:4416983d:7812788b:a2114dec

   Reshape pos'n : 22485531648 (21443.87 GiB 23025.18 GB)
   Delta Devices : -1 (8->7)

     Update Time : Thu Sep 22 14:23:54 2016
        Checksum : fd13b855 - correct
          Events : 180344

          Layout : left-symmetric
      Chunk Size : 512K

    Device Role : Active device 1
    Array State : AAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdh:
    MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sdh1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x4
      Array UUID : 48f60e15:900f47cc:6c5f42b1:82f01530
            Name : s10:1  (local to host s10)
   Creation Time : Fri Sep 16 06:59:32 2016
      Raid Level : raid5
    Raid Devices : 7

  Avail Dev Size : 7813764785 (3725.89 GiB 4000.65 GB)
      Array Size : 23441292288 (22355.36 GiB 24003.88 GB)
   Used Dev Size : 7813764096 (3725.89 GiB 4000.65 GB)
     Data Offset : 262144 sectors
    Super Offset : 8 sectors
    Unused Space : before=262064 sectors, after=689 sectors
           State : clean
     Device UUID : c1226e5c:4145cbcf:0bbe6160:4e0da07b

   Reshape pos'n : 22485531648 (21443.87 GiB 23025.18 GB)
   Delta Devices : -1 (8->7)

     Update Time : Thu Sep 22 14:23:54 2016
        Checksum : 79ce3f03 - correct
          Events : 180344

          Layout : left-symmetric
      Chunk Size : 512K

    Device Role : Active device 0
    Array State : AAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdi:
    MBR Magic : aa55
Partition[0] :    500109312 sectors at         2048 (type 83)
mdadm: No md superblock detected on /dev/sdi1.
/dev/sdj:
    MBR Magic : aa55
Partition[0] :     56143872 sectors at         2048 (type 83)
Partition[1] :      2478082 sectors at     56147966 (type 05)
mdadm: No md superblock detected on /dev/sdj1.
/dev/sdj2:
    MBR Magic : aa55
Partition[0] :      2478080 sectors at            2 (type 82)
mdadm: No md superblock detected on /dev/sdj5.



root@s10:~# mdadm --detail /dev/md1
/dev/md1:
         Version : 1.2
      Raid Level : raid0
   Total Devices : 8
     Persistence : Superblock is persistent

           State : inactive

   Delta Devices : -1, (1->0)
       New Level : raid5
      New Layout : left-symmetric
   New Chunksize : 512K

            Name : s10:1  (local to host s10)
            UUID : 48f60e15:900f47cc:6c5f42b1:82f01530
          Events : 180344

     Number   Major   Minor   RaidDevice

        -       8        1        -        /dev/sda1
        -       8       17        -        /dev/sdb1
        -       8       33        -        /dev/sdc1
        -       8       49        -        /dev/sdd1
        -       8       65        -        /dev/sde1
        -       8       81        -        /dev/sdf1
        -       8       97        -        /dev/sdg1
        -       8      113        -        /dev/sdh1


-- 

Kind regards,

Axel Spallek
Dipl.-Ing. FH IE

Hochdorfer Straße 34
88477 Schönebürg

Mobil: 01577 7929886
E-Mail: axel@spallek.org




* Re: RAID5 grow interrupted.
  2016-09-22 14:11 RAID5 grow interrupted Axel Spallek
@ 2016-09-22 15:18 ` Wols Lists
  2016-09-22 18:44   ` Axel Spallek
  2016-09-22 17:19 ` Andreas Klauer
       [not found] ` <9c618b69-1e84-60a8-d25c-55171058c855@websitemanagers.com.au>
  2 siblings, 1 reply; 8+ messages in thread
From: Wols Lists @ 2016-09-22 15:18 UTC (permalink / raw)
  To: Axel Spallek, linux-raid

On 22/09/16 15:11, Axel Spallek wrote:
> The server came up again, but without /dev/md1.
> 
> I have a backup, which is 2 days old. Not so bad, because I have the
> Seafile data on several computers. But getting the RAID back to work
> would be better.
> 
> How do I resume the reshape with the backup file?
> 
> This is what I get in the console:
> 
> root@s10:~# cat /proc/mdstat
> Personalities :
> md1 : inactive sdh1[0](S) sda1[5](S) sdb1[6](S) sdc1[7](S) sdd1[8](S)
> sdf1[4](S) sde1[2](S) sdg1[1](S)
>       31255059140 blocks super 1.2
> 
> unused devices: <none>

Okay, quick response here. Won't do any damage, might work.

Stop and reassemble the array ...

mdadm --stop /dev/md1

mdadm --assemble --scan

What you describe sounds like something tried to start the array while
it was half-assembled. There's a bunch of interactions between udev,
systemd, and mdadm, which seem to get badly off-kilter if something
hiccups anywhere.

My suggestion won't do any harm - it might fix the problem, and it'll
give the experts the chance to chime in.

Cheers,
Wol


* Re: RAID5 grow interrupted.
  2016-09-22 14:11 RAID5 grow interrupted Axel Spallek
  2016-09-22 15:18 ` Wols Lists
@ 2016-09-22 17:19 ` Andreas Klauer
  2016-09-22 18:59   ` Axel Spallek
       [not found] ` <9c618b69-1e84-60a8-d25c-55171058c855@websitemanagers.com.au>
  2 siblings, 1 reply; 8+ messages in thread
From: Andreas Klauer @ 2016-09-22 17:19 UTC (permalink / raw)
  To: Axel Spallek; +Cc: linux-raid

On Thu, Sep 22, 2016 at 04:11:28PM +0200, Axel Spallek wrote:
> I tried to take a disk out of a RAID5 of 8 HDDs (4 TB each) to gain a
> hot spare for converting to RAID6 afterwards.

I'm a bit confused here.

You started out with 8 disk RAID-5? And final result is to be what?
RAID-6 with 8 disks?

> Therefore I issued the following commands:
> 
> mdadm --grow -n7 /dev/md1  <-- just to get the size for the next command.
> 
> mdadm --grow /dev/md1 --array-size 23441292288
> 
> mdadm --grow -n7 /dev/md1 --backup-file /var/backups/mdadm.backup

I'm not sure, but you might not have needed any of these intermediate steps. 
But now that you have started them, you may have to finish them.
That said, there is a reshape revert (not yet in the manpage?) that 
might help if the regular reshape just won't resume.
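
For reference, I believe this is the --update=revert-reshape assembly
option in newer mdadm versions; a sketch, only as a last resort if
resuming fails:

mdadm --stop /dev/md1
mdadm --assemble /dev/md1 --update=revert-reshape /dev/sd[a-h]1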

> root@s10:~# mdadm -A --scan --verbose
> mdadm: looking for devices for further assembly
> mdadm: /dev/sdg1 is busy - skipping

It says busy because the array is already (partially) assembled, or the 
devices are in use by something else. So you have to stop it first 
before assembling again.
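
For example:

mdadm --stop /dev/md1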

I didn't see anything out of place in your --examine output.
It all seems to be in sync, event counts identical, ....

Good luck
Andreas Klauer


* Re: RAID5 grow interrupted.
       [not found] ` <9c618b69-1e84-60a8-d25c-55171058c855@websitemanagers.com.au>
@ 2016-09-22 18:41   ` Axel Spallek
  2016-09-22 19:45     ` Andreas Klauer
  0 siblings, 1 reply; 8+ messages in thread
From: Axel Spallek @ 2016-09-22 18:41 UTC (permalink / raw)
  To: Adam Goryachev, linux-raid

Here is the output:


root@s10:~# mdadm --stop /dev/md1
mdadm: stopped /dev/md1
root@s10:~# cat /proc/mdstat
Personalities :
unused devices: <none>
root@s10:~# mdadm --assemble /dev/md1 /dev/sd[a-h]1
mdadm: /dev/md1: Need a backup file to complete reshape of this array.
mdadm: Please provided one with "--backup-file=..."
root@s10:~# mdadm --assemble /dev/md1 /dev/sd[a-h]1 --backup-file=/var/backups/mdadm.backup
mdadm: /dev/sda1 is busy - skipping
mdadm: /dev/sdb1 is busy - skipping
mdadm: /dev/sdc1 is busy - skipping
mdadm: /dev/sdd1 is busy - skipping
mdadm: /dev/sde1 is busy - skipping
mdadm: /dev/sdf1 is busy - skipping
mdadm: /dev/sdg1 is busy - skipping
mdadm: /dev/sdh1 is busy - skipping
root@s10:~# cat /proc/mdstat
Personalities :
md1 : inactive sdh1[0](S) sda1[5](S) sdb1[6](S) sdc1[7](S) sdd1[8](S) 
sdf1[4](S) sde1[2](S) sdg1[1](S)
       31255059140 blocks super 1.2

unused devices: <none>


On 22.09.2016 at 16:27, Adam Goryachev wrote:
>
>
> On 23/09/2016 00:11, Axel Spallek wrote:
>> Hello there.
>>
>> I did something wrong.
>>
>> I tried to take a disk out of a RAID5 of 8 HDDs (4 TB each) to gain a
>> hot spare for converting to RAID6 afterwards.
>>
>> The RAID was clean and not rebuilding before I started.
>>
>> The partition was only 11 TB. It was not yet resized, because it had
>> not been created with 64-bit support, which I wanted to change afterwards.
>>
>> Therefore I issued the following commands:
>>
>> mdadm --grow -n7 /dev/md1  <-- just to get the size for the next 
>> command.
>>
>> mdadm --grow /dev/md1 --array-size 23441292288
>>
>> mdadm --grow -n7 /dev/md1 --backup-file /var/backups/mdadm.backup


* Re: RAID5 grow interrupted.
  2016-09-22 15:18 ` Wols Lists
@ 2016-09-22 18:44   ` Axel Spallek
  0 siblings, 0 replies; 8+ messages in thread
From: Axel Spallek @ 2016-09-22 18:44 UTC (permalink / raw)
  To: Wols Lists, linux-raid

root@s10:~# mdadm --stop /dev/md1
mdadm: stopped /dev/md1
root@s10:~#
root@s10:~# mdadm --assemble --scan
mdadm: /dev/md/1: Need a backup file to complete reshape of this array.
mdadm: Please provided one with "--backup-file=..."
mdadm: No arrays found in config file or automatically
root@s10:~# mdadm --assemble --scan --backup-file=/var/backups/mdadm.backup
mdadm: --backup_file not meaningful with a --scan assembly.
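
So the backup file cannot be passed with --scan; the array and its
member devices have to be named explicitly, along the lines of:

mdadm --assemble /dev/md1 /dev/sd[a-h]1 --backup-file=/var/backups/mdadm.backup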


On 22.09.2016 at 17:18, Wols Lists wrote:
> mdadm --assemble --scan



* Re: RAID5 grow interrupted.
  2016-09-22 17:19 ` Andreas Klauer
@ 2016-09-22 18:59   ` Axel Spallek
  0 siblings, 0 replies; 8+ messages in thread
From: Axel Spallek @ 2016-09-22 18:59 UTC (permalink / raw)
  To: Andreas Klauer, linux-raid



On 22.09.2016 at 19:19, Andreas Klauer wrote:
> On Thu, Sep 22, 2016 at 04:11:28PM +0200, Axel Spallek wrote:
>> I tried to take a disk out of a RAID5 of 8 HDDs (4 TB each) to gain a
>> hot spare for converting to RAID6 afterwards.
> I'm a bit confused here.
>
> You started out with 8 disk RAID-5? And final result is to be what?
> RAID-6 with 8 disks?
Correct.
>
>> Therefore I issued the following commands:
>>
>> mdadm --grow -n7 /dev/md1  <-- just to get the size for the next command.
>>
>> mdadm --grow /dev/md1 --array-size 23441292288
>>
>> mdadm --grow -n7 /dev/md1 --backup-file /var/backups/mdadm.backup
> I'm not sure, but you might not have needed any of these intermediate steps.
Hmmm. I had a RAID5 with 8 disks. To convert to RAID6 you need a hot 
spare, but my drive cage is completely full.
> But now you started it you might have to finish them.
> That said there is a reshape revert (which is not yet in the manpage?)
> which might help if the regular reshape just won't resume.
Yes. That info would help.
>
>> root@s10:~# mdadm -A --scan --verbose
>> mdadm: looking for devices for further assembly
>> mdadm: /dev/sdg1 is busy - skipping
> It says busy because already assembled (or in use by something else).
> So you have to stop first before assembeling again.
>
> I didn't see anything out of place in your --examine.
> It seems to be in sync, event count identical, ....
>
> Good luck
> Andreas Klauer




* Re: RAID5 grow interrupted.
  2016-09-22 18:41   ` Axel Spallek
@ 2016-09-22 19:45     ` Andreas Klauer
  2016-09-23  5:23       ` Axel Spallek
  0 siblings, 1 reply; 8+ messages in thread
From: Andreas Klauer @ 2016-09-22 19:45 UTC (permalink / raw)
  To: Axel Spallek; +Cc: Adam Goryachev, linux-raid

On Thu, Sep 22, 2016 at 08:41:59PM +0200, Axel Spallek wrote:
> root@s10:~# mdadm --assemble /dev/md1 /dev/sd[a-h]1
> mdadm: /dev/md1: Need a backup file to complete reshape of this array.
> mdadm: Please provided one with "--backup-file=..."

I assume it was assembled anyway at this point so --stop again...

> root@s10:~# mdadm --assemble /dev/md1 /dev/sd[a-h]1 --backup-file=/var/backups/mdadm.backup
> mdadm: /dev/sda1 is busy - skipping
> mdadm: /dev/sdb1 is busy - skipping
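
So the full retry would be something like:

mdadm --stop /dev/md1
mdadm --assemble /dev/md1 /dev/sd[a-h]1 --backup-file=/var/backups/mdadm.backup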

regards
Andreas Klauer


* Re: RAID5 grow interrupted.
  2016-09-22 19:45     ` Andreas Klauer
@ 2016-09-23  5:23       ` Axel Spallek
  0 siblings, 0 replies; 8+ messages in thread
From: Axel Spallek @ 2016-09-23  5:23 UTC (permalink / raw)
  To: Andreas Klauer, linux-raid

This did it, though not the first time I issued the command. Maybe the 
--assemble --scan attempt was needed first.

Thanks to all!


root@s10:~# mdadm --assemble /dev/md1 /dev/sd[a-h]1 --backup-file=/var/backups/mdadm.backup
mdadm: Need to backup 21504K of critical section..
mdadm: /dev/md1 has been started with 8 drives.
root@s10:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md1 : active raid5 sdh1[0] sda1[5] sdb1[6] sdc1[7] sdd1[8] sdf1[4] 
sde1[2] sdg1[1]
       23441292288 blocks super 1.2 level 5, 512k chunk, algorithm 2 
[7/7] [UUUUUUU]
       [>....................]  reshape =  4.0% (159293440/3906882048) 
finish=50748595.7min speed=0K/sec

unused devices: <none>
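
(The 0K/sec and the absurd finish estimate are presumably just an
artifact of sampling /proc/mdstat right after the restart; if the
reshape really stayed at 0K/sec, the md rebuild speed floor could be
raised, e.g.:

echo 50000 > /proc/sys/dev/raid/speed_limit_min
)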

Axel




On 22.09.2016 at 21:45, Andreas Klauer wrote:
> On Thu, Sep 22, 2016 at 08:41:59PM +0200, Axel Spallek wrote:
>> root@s10:~# mdadm --assemble /dev/md1 /dev/sd[a-h]1
>> mdadm: /dev/md1: Need a backup file to complete reshape of this array.
>> mdadm: Please provided one with "--backup-file=..."
> I assume it was assembled anyway at this point so --stop again...
>
>> root@s10:~# mdadm --assemble /dev/md1 /dev/sd[a-h]1 --backup-file=/var/backups/mdadm.backup
>> mdadm: /dev/sda1 is busy - skipping
>> mdadm: /dev/sdb1 is busy - skipping
> regards
> Andreas Klauer


