* Brocken Raid & LUKS
@ 2013-02-19 16:01 stone
  2013-02-19 17:57 ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: stone @ 2013-02-19 16:01 UTC (permalink / raw)
  To: linux-raid

Hi guys,

Yesterday my RAID5 of 4 disks broke, and I did not find a better way than 
to re-create it from scratch:
mdadm --create /dev/md2 --assume-clean --verbose --level=5 
--raid-devices=4 /dev/sdc1 /dev/sdd1 missing /dev/sdf1

My problem now is that I cannot open the LUKS container on the md2 device:
cryptsetup luksOpen /dev/md2 md2_nas
Device /dev/md2 is not a valid LUKS device.

With a hexdump I found the LUKS header on the disks sdc1 and sdf1:
hexdump -C /dev/sdc1 | head -40
.....
00100000  4c 55 4b 53 ba be 00 01  61 65 73 00 00 00 00 00 
|LUKS....aes.....|
....

So I think the header must also be on the md2 device, just not at the 
beginning.
Is my RAID construction wrong? Do I have to re-create my array again?

How can I bring my RAID up so that I can open the LUKS container and save 
all my data?

thank you.


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-19 16:01 Brocken Raid & LUKS stone
@ 2013-02-19 17:57 ` Phil Turmel
       [not found]   ` <5123E4E9.3020609@heisl.org>
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-19 17:57 UTC (permalink / raw)
  To: stone; +Cc: linux-raid

On 02/19/2013 11:01 AM, stone@heisl.org wrote:
> hi guys.
> 
> yesterday my raid5 of 4 disk is broken and i dont find a better way as
> to re-create it new.
> mdadm --create /dev/md2 --assume-clean --verbose --level=5
> --raid-devices=4 /dev/sdc1 /dev/sdd1 missing /dev/sdf1
> 
> my problem is now that i cannot open die LUKS on the device md2
> cryptsetup luksOpen /dev/md2 md2_nas
> Device /dev/md2 is not a valid LUKS device.
> 
> i found with an hexdump on the disk sdc1 and sdf1 the LUKS header
> hexdump -C /dev/sdc1 | head -40
> .....
> 00100000  4c 55 4b 53 ba be 00 01  61 65 73 00 00 00 00 00
> |LUKS....aes.....|
> ....
> 
> so i think the header must be also on the md2 device but not on the
> beginning.
> is my raid constrct false? must i reconstruct my array new?
> 
> who can i bring my raid so up that i can open the LUKS and save all my
> data?

Please post your "mdadm -E" reports for your disks from *before* you did
"mdadm --create".  If you do not have these reports, some guessing may
be required.  (And why did you choose mdadm --create?  That's a terrible
step to take without good advice first.)

Also post "mdadm -E" reports for /dev/sdc1, sdd1, and sdf1 as they are
now, so we can compare.

If you still have dmesg from before and after the breakage, please post
it too.

The hexdump above is very valuable, and will certainly help make
educated guesses (if necessary).
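
As a reference point, the header offset can also be located directly on a
member (the same hexdump approach as above, just stopping at the first
match of the LUKS magic):

hexdump -C /dev/sdc1 | grep -m1 LUKS

The hex offset in the first column of the matching line (00100000 above)
is where the old array's data region started on that member.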

Phil


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
       [not found]   ` <5123E4E9.3020609@heisl.org>
@ 2013-02-19 21:16     ` Phil Turmel
       [not found]       ` <5123EF45.6080405@heisl.org>
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-19 21:16 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

Hi Stone,

You dropped the linux-raid list.  Please use "Reply-to-all" for any list
on vger.kernel.org.

[trim /]

>>> i found with an hexdump on the disk sdc1 and sdf1 the LUKS header
>>> hexdump -C /dev/sdc1 | head -40
>>> .....
>>> 00100000  4c 55 4b 53 ba be 00 01  61 65 73 00 00 00 00 00
>>> |LUKS....aes.....|

Note that the location is 100000 hex.  That is 1MB, or 2048 512-byte
sectors.

> I don't have a report from my disks from before I re-created it. Why did
> I do this? I found many postings that say this is a good way... :/

Many people get in trouble and *have* to do it, but it is a *last*
resort, as it destroys the original configuration data.  Most people who
blog about these things report the command that fixed *their* problem,
without thinking about what *should* be done.

> mdadm -E /dev/sdc1
> /dev/sdc1:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x0
>      Array UUID : 87345225:b5aea7dc:3f3569ba:4804f177
>            Name : bender:2  (local to host bender)
>   Creation Time : Tue Feb 19 10:20:40 2013
>      Raid Level : raid5
>    Raid Devices : 4
> 
>  Avail Dev Size : 3906766941 (1862.89 GiB 2000.26 GB)
>      Array Size : 5860145664 (5588.67 GiB 6000.79 GB)
>   Used Dev Size : 3906763776 (1862.89 GiB 2000.26 GB)
>     Data Offset : 262144 sectors

When you recreated the array, the newer version of mdadm used a
different data offset.
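
The difference is easy to see with a little arithmetic (512-byte sectors;
262144 is from the report above, 2048 is the offset implied by the LUKS
header found at the 1MB mark):

echo $((262144 * 512 / 1024 / 1024))   # 128 -> new array: data starts 128 MiB in
echo $((2048 * 512 / 1024 / 1024))     # 1   -> old array: data started 1 MiB in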

>    Super Offset : 8 sectors
>           State : clean
>     Device UUID : 4353f38f:8adbd4fb:a80abaff:a08a784f
> 
>     Update Time : Tue Feb 19 10:33:58 2013
>        Checksum : c2ed9b46 - correct
>          Events : 4
> 
>          Layout : left-symmetric
>      Chunk Size : 512K

This chunk size is the default for recent versions of mdadm, but not
older ones.  But the 1MB data offset is also somewhat recent, so there's
a good chance this will work.

[trim /]

> I also have a hexdump running on the md2 device, but on 6 TB that takes
> a very long time...

This won't be needed.

> The crash was on Feb 18. I have the syslog from that date; I attached it
> to this mail and hope that is ok.
> At the end of syslog.2 you can see the first errors, and then comes the
> log rotation.

I was hoping for the last successful boot-up from before the drive
failure, so I could see the device order for sure.  But I did find a
recovery event on the 17th that shows it:

> Feb 17 13:49:34 bender kernel: [5286525.603601] RAID conf printout:
> Feb 17 13:49:34 bender kernel: [5286525.603609]  --- level:5 rd:4 wd:3
> Feb 17 13:49:34 bender kernel: [5286525.603615]  disk 0, o:1, dev:sdc1
> Feb 17 13:49:34 bender kernel: [5286525.603620]  disk 1, o:1, dev:sdd1
> Feb 17 13:49:34 bender kernel: [5286525.603624]  disk 2, o:1, dev:sde1
> Feb 17 13:49:34 bender kernel: [5286525.603628]  disk 3, o:1, dev:sdf1

So your next step is to find an older copy of mdadm that will create an
array with Data Offset of 2048 sectors (logical 512-byte sectors).
Something from about six months ago should do.  (The new 128MB offset
default is to support Bad Block logging, a fairly new feature.)

Then, with the older mdadm version, you must use "mdadm --create
--assume-clean" just like you already did.  If luksOpen works, do *not*
mount it until you've used "fsck -n" to see if the array properties are
correct.

If that reports many errors, you will need to try other chunk sizes
until you find the size the array was created with.  If you had saved
the "mdadm -E" reports from the original array, we would not have to guess.

Meanwhile, you need to investigate why you lost one disk, and then
another during rebuild.  This is often a side effect of using cheap
desktop drives in your array.  It is possible to do, but doesn't work
"out-of-the-box".

Please share "smartctl -x" from each of your drives, and the output of:

for x in /sys/block/sd*/device/timeout ; do echo $x ; cat $x ; done

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
       [not found]           ` <5123FB71.3060509@heisl.org>
@ 2013-02-20  0:31             ` Phil Turmel
  2013-02-20 18:32               ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-20  0:31 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

You forgot to include linux-raid again.  I'm adding them back to the
CC:.  Please always use "reply to all" in your email client.

I will look for your detailed reply tomorrow.

Phil

On 02/19/2013 05:23 PM, Stone wrote:
> Am 19.02.2013 23:08, schrieb Phil Turmel:
>> On 02/19/2013 04:31 PM, Stone wrote:
>>
>> [trim /]
>>
>>>> [trim /]
>>> ok. my system is a ubuntu 12.04
>>> i can install a older mdadm or a install a old ubuntu like 11.04. there
>>> is a older mdadm on board.
>> Using the older ubuntu as a LiveCD should be fine--you don't have to
>> uninistall your current system.
>>
>> [trim /]
>>
>>> ok. here my next steps
>>> i find a older mdadm or i install a older ubunt with an older mdadm on
>>> board.
>>> then i stop my md2 device and recreate it with: mdadm --create /dev/md2
>>> --assume-clean --verbose --level=5 --raid-devices=4 /dev/sdc1 /dev/sdd1
>>> missing /dev/sdf1
>> Yes.  But read all the way through first....
>>
>>> with a little bit of hope i can open the device.
>> But *don't* mount it!  Use "fsck -n" after you open it to verify it is
>> Ok.  If you mount it, and the chunk size is wrong, it will damage your
>> encrypted filesystem.
>>
>>> if not. i stop the md2 and recreate it with? with the parameter chunk?
>>> and with what value? do you have a range for me?
>> The current default is 512.  The old default was 64.  I'd try that if
>> 512 doesn't work.  After that you'll have to guess.
> Ok i will test this tomorrow.
>>> here the timeout infos:
>>> for x in /sys/block/sd*/device/timeout ; do echo $x ; cat $x ; done
>>> /sys/block/sda/device/timeout
>>> 30
>>> /sys/block/sdb/device/timeout
>>> 30
>>> /sys/block/sdc/device/timeout
>>> 30
>>> /sys/block/sdd/device/timeout
>>> 30
>>> /sys/block/sde/device/timeout
>>> 30
>>> /sys/block/sdf/device/timeout
>>> 30
>> Ok, these are all Linux default.  30 seconds.
>>
>>> here the smart infos:
>> Uh oh.  Two serious issues:
>>
>>> smartctl -x /dev/sdc1
>>> smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-23-generic] (local
>>> build)
>>> Copyright (C) 2002-11 by Bruce Allen,
>>> http://smartmontools.sourceforge.net
>> [trim /]
>>
>>>    5 Reallocated_Sector_Ct   PO--CK   200   200   140    -    0
>>>    7 Seek_Error_Rate         -OSR-K   200   200   000    -    0
>>>    9 Power_On_Hours          -O--CK   078   078   000    -    16219
>>>   10 Spin_Retry_Count        -O--CK   100   100   000    -    0
>>>   11 Calibration_Retry_Count -O--CK   100   253   000    -    0
>>>   12 Power_Cycle_Count       -O--CK   100   100   000    -    84
>>> 192 Power-Off_Retract_Count -O--CK   200   200   000    -    82
>>> 193 Load_Cycle_Count        -O--CK   169   169   000    -    94419
>>> 194 Temperature_Celsius     -O---K   114   106   000    -    36
>>> 196 Reallocated_Event_Count -O--CK   200   200   000    -    0
>>> 197 Current_Pending_Sector  -O--CK   200   200   000    -    2
>> Serious issue #1:
>>
>> You have unreadable sectors on sdc.  When you hit them during rebuild,
>> sdc will be kicked out (again).  They might not be permanent errors, but
>> you can't tell until the drive is given fresh data to write over them.
>>
>> You have two choices:
>>
>> 1) use ddrescue to copy sdc onto a new drive, then use it in place of
>> sdc when you re-create the array, or
>>
>> 2) use badblocks to find the exact locations of the bad sectors, then
>> write zeros to those sectors using dd.
>>
>> Either way, you have lost whatever those sectors used to hold.
>>
>> [trim /]
> yes this cheep WD Green drives. i have 4 new better drives here the i
> will use instead. this means i will get the raid running and than i copy
> all the data on the new drives.
>>> SCT Status Version:                  3
>>> SCT Version (vendor specific):       258 (0x0102)
>>> SCT Support Level:                   1
>>> Device State:                        Active (0)
>>> Current Temperature:                    36 Celsius
>>> Power Cycle Min/Max Temperature:     33/37 Celsius
>>> Lifetime    Min/Max Temperature:     33/44 Celsius
>>> Under/Over Temperature Limit Count:   0/0
>>> SCT Temperature History Version:     2
>>> Temperature Sampling Period:         1 minute
>>> Temperature Logging Interval:        1 minute
>>> Min/Max recommended Temperature:      0/60 Celsius
>>> Min/Max Temperature Limit:           -41/85 Celsius
>>> Temperature History Size (Index):    478 (314)
>>>
>>> Index    Estimated Time   Temperature Celsius
>>>   315    2013-02-19 14:26    36  *****************
>>>   ...    ..(476 skipped).    ..  *****************
>>>   314    2013-02-19 22:23    36  *****************
>>>
>>> Warning: device does not support SCT Error Recovery Control command
>> Serious issue #2:
>>
>> Error timeout mismatch.  Your cheap drives do not support Error Recovery
>> Control.  That means when they run into unreadable sectors, they will
>> spend a couple minutes trying "extra hard" to get the data.
>>
>> But linux is only going to wait 30 seconds.  Then it will reset the SATA
>> link and try again.  But the drive will *not* give up its error recovery
>> effort, and will not even *talk* to the linux driver in the meantime, so
>> the linux driver will disconnect the drive and report errors for all
>> remaining requests.  This will cause MD to kick the drive out.
>>
>> You only have one choice:
>>
>> 1) Set a long timeout in the linux drivers for the drives in your array,
>> on every boot.  Something like:
>>
>> for x in /sys/block/sd[cdef]/device/timeout ; do echo 180 >$x ; done
>>
>> If you had slightly better drives, SCTERC would be supported.  On
>> desktop drives at power up, it is disabled.  But you would be able to
>> enable a normal 7.0 second timeout in the drives using smartctl.  (In a
>> script, on every boot up.)  Enterprise "raid" drives do this by default.
>>
>> [trim /]
>>
>>> smartctl -x /dev/sdd1
>>> smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-23-generic] (local
>>> build)
>>> Copyright (C) 2002-11 by Bruce Allen,
>>> http://smartmontools.sourceforge.net
>> [trim /]
>>
>>> SMART Attributes Data Structure revision number: 16
>>> Vendor Specific SMART Attributes with Thresholds:
>>> ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
>>>    1 Raw_Read_Error_Rate     POSR-K   200   200   051    -    534
>>>    3 Spin_Up_Time            POS--K   172   171   021    -    6383
>>>    4 Start_Stop_Count        -O--CK   100   100   000    -    586
>>>    5 Reallocated_Sector_Ct   PO--CK   200   200   140    -    2
>> You already have two relocations on this drive.
>>
>>>    7 Seek_Error_Rate         -OSR-K   100   253   000    -    0
>>>    9 Power_On_Hours          -O--CK   085   085   000    -    11487
>> In less than two years.  You should pay close attention to this.
>>
>> Phil
> i think i must learn to interpret the smart values better.
> thank you.
> i will send you tomorrow my new info with the older mdadm version.


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-20  0:31             ` Phil Turmel
@ 2013-02-20 18:32               ` Stone
  2013-02-20 18:39                 ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-20 18:32 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 20.02.2013 01:31, schrieb Phil Turmel:
> You forgot to include linux-raid again.  I'm adding them back to the
> CC:.  Please always use "reply to all" in your email client.
Sorry.
> I will look for your detailed reply tomorrow.
>
> Phil
>
> On 02/19/2013 05:23 PM, Stone wrote:
>> Am 19.02.2013 23:08, schrieb Phil Turmel:
>>> On 02/19/2013 04:31 PM, Stone wrote:
>>>
>>> [trim /]
>>>
>>>>> [trim /]
>>>> ok. my system is a ubuntu 12.04
>>>> i can install a older mdadm or a install a old ubuntu like 11.04. there
>>>> is a older mdadm on board.
>>> Using the older ubuntu as a LiveCD should be fine--you don't have to
>>> uninistall your current system.
>>>
>>> [trim /]
>>>
>>>> ok. here my next steps
>>>> i find a older mdadm or i install a older ubunt with an older mdadm on
>>>> board.
>>>> then i stop my md2 device and recreate it with: mdadm --create /dev/md2
>>>> --assume-clean --verbose --level=5 --raid-devices=4 /dev/sdc1 /dev/sdd1
>>>> missing /dev/sdf1
>>> Yes.  But read all the way through first....
>>>
>>>> with a little bit of hope i can open the device.
>>> But *don't* mount it!  Use "fsck -n" after you open it to verify it is
>>> Ok.  If you mount it, and the chunk size is wrong, it will damage your
>>> encrypted filesystem.
>>>
>>>> if not. i stop the md2 and recreate it with? with the parameter chunk?
>>>> and with what value? do you have a range for me?
>>> The current default is 512.  The old default was 64.  I'd try that if
>>> 512 doesn't work.  After that you'll have to guess.
>> Ok i will test this tomorrow.
>>>> here the timeout infos:
>>>> for x in /sys/block/sd*/device/timeout ; do echo $x ; cat $x ; done
>>>> /sys/block/sda/device/timeout
>>>> 30
>>>> /sys/block/sdb/device/timeout
>>>> 30
>>>> /sys/block/sdc/device/timeout
>>>> 30
>>>> /sys/block/sdd/device/timeout
>>>> 30
>>>> /sys/block/sde/device/timeout
>>>> 30
>>>> /sys/block/sdf/device/timeout
>>>> 30
>>> Ok, these are all Linux default.  30 seconds.
>>>
>>>> here the smart infos:
>>> Uh oh.  Two serious issues:
>>>
>>>> smartctl -x /dev/sdc1
>>>> smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-23-generic] (local
>>>> build)
>>>> Copyright (C) 2002-11 by Bruce Allen,
>>>> http://smartmontools.sourceforge.net
>>> [trim /]
>>>
>>>>     5 Reallocated_Sector_Ct   PO--CK   200   200   140    -    0
>>>>     7 Seek_Error_Rate         -OSR-K   200   200   000    -    0
>>>>     9 Power_On_Hours          -O--CK   078   078   000    -    16219
>>>>    10 Spin_Retry_Count        -O--CK   100   100   000    -    0
>>>>    11 Calibration_Retry_Count -O--CK   100   253   000    -    0
>>>>    12 Power_Cycle_Count       -O--CK   100   100   000    -    84
>>>> 192 Power-Off_Retract_Count -O--CK   200   200   000    -    82
>>>> 193 Load_Cycle_Count        -O--CK   169   169   000    -    94419
>>>> 194 Temperature_Celsius     -O---K   114   106   000    -    36
>>>> 196 Reallocated_Event_Count -O--CK   200   200   000    -    0
>>>> 197 Current_Pending_Sector  -O--CK   200   200   000    -    2
>>> Serious issue #1:
>>>
>>> You have unreadable sectors on sdc.  When you hit them during rebuild,
>>> sdc will be kicked out (again).  They might not be permanent errors, but
>>> you can't tell until the drive is given fresh data to write over them.
>>>
>>> You have two choices:
>>>
>>> 1) use ddrescue to copy sdc onto a new drive, then use it in place of
>>> sdc when you re-create the array, or
>>>
>>> 2) use badblocks to find the exact locations of the bad sectors, then
>>> write zeros to those sectors using dd.
>>>
>>> Either way, you have lost whatever those sectors used to hold.
Before I re-create the RAID with an older mdadm I want to search for the 
bad blocks. Is this right?
I have checked all drives, and the sdc device had bad blocks:
Pass completed, 48 bad blocks found. (48/0/0 errors)
But the binary doesn't tell me where they are...
I used this command in a screen session: badblocks -v /dev/sdc1
>>> [trim /]
>> yes this cheep WD Green drives. i have 4 new better drives here the i
>> will use instead. this means i will get the raid running and than i copy
>> all the data on the new drives.
>>>> SCT Status Version:                  3
>>>> SCT Version (vendor specific):       258 (0x0102)
>>>> SCT Support Level:                   1
>>>> Device State:                        Active (0)
>>>> Current Temperature:                    36 Celsius
>>>> Power Cycle Min/Max Temperature:     33/37 Celsius
>>>> Lifetime    Min/Max Temperature:     33/44 Celsius
>>>> Under/Over Temperature Limit Count:   0/0
>>>> SCT Temperature History Version:     2
>>>> Temperature Sampling Period:         1 minute
>>>> Temperature Logging Interval:        1 minute
>>>> Min/Max recommended Temperature:      0/60 Celsius
>>>> Min/Max Temperature Limit:           -41/85 Celsius
>>>> Temperature History Size (Index):    478 (314)
>>>>
>>>> Index    Estimated Time   Temperature Celsius
>>>>    315    2013-02-19 14:26    36  *****************
>>>>    ...    ..(476 skipped).    ..  *****************
>>>>    314    2013-02-19 22:23    36  *****************
>>>>
>>>> Warning: device does not support SCT Error Recovery Control command
>>> Serious issue #2:
>>>
>>> Error timeout mismatch.  Your cheap drives do not support Error Recovery
>>> Control.  That means when they run into unreadable sectors, they will
>>> spend a couple minutes trying "extra hard" to get the data.
>>>
>>> But linux is only going to wait 30 seconds.  Then it will reset the SATA
>>> link and try again.  But the drive will *not* give up its error recovery
>>> effort, and will not even *talk* to the linux driver in the meantime, so
>>> the linux driver will disconnect the drive and report errors for all
>>> remaining requests.  This will cause MD to kick the drive out.
>>>
>>> You only have one choice:
>>>
>>> 1) Set a long timeout in the linux drivers for the drives in your array,
>>> on every boot.  Something like:
>>>
>>> for x in /sys/block/sd[cdef]/device/timeout ; do echo 180 >$x ; done
>>>
>>> If you had slightly better drives, SCTERC would be supported.  On
>>> desktop drives at power up, it is disabled.  But you would be able to
>>> enable a normal 7.0 second timeout in the drives using smartctl.  (In a
>>> script, on every boot up.)  Enterprise "raid" drives do this by default.
>>>
>>> [trim /]
>>>
>>>> smartctl -x /dev/sdd1
>>>> smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-23-generic] (local
>>>> build)
>>>> Copyright (C) 2002-11 by Bruce Allen,
>>>> http://smartmontools.sourceforge.net
>>> [trim /]
>>>
>>>> SMART Attributes Data Structure revision number: 16
>>>> Vendor Specific SMART Attributes with Thresholds:
>>>> ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
>>>>     1 Raw_Read_Error_Rate     POSR-K   200   200   051    -    534
>>>>     3 Spin_Up_Time            POS--K   172   171   021    -    6383
>>>>     4 Start_Stop_Count        -O--CK   100   100   000    -    586
>>>>     5 Reallocated_Sector_Ct   PO--CK   200   200   140    -    2
>>> You already have two relocations on this drive.
>>>
>>>>     7 Seek_Error_Rate         -OSR-K   100   253   000    -    0
>>>>     9 Power_On_Hours          -O--CK   085   085   000    -    11487
>>> In less than two years.  You should pay close attention to this.
>>>
>>> Phil
>> i think i must learn to interpret the smart values better.
>> thank you.
>> i will send you tomorrow my new info with the older mdadm version.


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-20 18:32               ` Stone
@ 2013-02-20 18:39                 ` Phil Turmel
  2013-02-21  7:04                   ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-20 18:39 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/20/2013 01:32 PM, Stone wrote:
>>> Am 19.02.2013 23:08, schrieb Phil Turmel:
>>>> Serious issue #1:
>>>>
>>>> You have unreadable sectors on sdc.  When you hit them during rebuild,
>>>> sdc will be kicked out (again).  They might not be permanent errors,
>>>> but
>>>> you can't tell until the drive is given fresh data to write over them.
>>>>
>>>> You have two choices:
>>>>
>>>> 1) use ddrescue to copy sdc onto a new drive, then use it in place of
>>>> sdc when you re-create the array, or
>>>>
>>>> 2) use badblocks to find the exact locations of the bad sectors, then
>>>> write zeros to those sectors using dd.
>>>>
>>>> Either way, you have lost whatever those sectors used to hold.

> befor i will recreate the raid with an older mdadm i would search the
> badblocks. is this right?

Yes, and write zeros to those blocks to either fix them or relocate them.

> i have check all drives and the sdc device had badblock:
> Pass completed, 48 bad blocks found. (48/0/0 errors)
> but die binary dont give me the info where they are..
> i have used this command in a screen badblocks -v /dev/sdc1

"man badblocks"

You should use the "-o" option to save the list.
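
For example (read-only by default; the -b 4096 matches these drives' 4k
physical sectors, and the output path is just an example):

badblocks -b 4096 -v -o /root/badblocks_sdc1.txt /dev/sdc1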

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-20 18:39                 ` Phil Turmel
@ 2013-02-21  7:04                   ` Stone
  2013-02-21  9:42                     ` stone
  2013-02-21 13:15                     ` Phil Turmel
  0 siblings, 2 replies; 79+ messages in thread
From: Stone @ 2013-02-21  7:04 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 20.02.2013 19:39, schrieb Phil Turmel:
> On 02/20/2013 01:32 PM, Stone wrote:
>>>> Am 19.02.2013 23:08, schrieb Phil Turmel:
>>>>> Serious issue #1:
>>>>>
>>>>> You have unreadable sectors on sdc.  When you hit them during rebuild,
>>>>> sdc will be kicked out (again).  They might not be permanent errors,
>>>>> but
>>>>> you can't tell until the drive is given fresh data to write over them.
>>>>>
>>>>> You have two choices:
>>>>>
>>>>> 1) use ddrescue to copy sdc onto a new drive, then use it in place of
>>>>> sdc when you re-create the array, or
>>>>>
>>>>> 2) use badblocks to find the exact locations of the bad sectors, then
>>>>> write zeros to those sectors using dd.
>>>>>
>>>>> Either way, you have lost whatever those sectors used to hold.
>> befor i will recreate the raid with an older mdadm i would search the
>> badblocks. is this right?
> Yes, and write zeros to those blocks to either fix them or relocate them.
Ok, I now have a list of my bad blocks.
Now I fix them with dd:
dd if=/dev/zero of=/dev/sdc1 bs=1073006628 cout=1
And this for all bad blocks?

With this command I fill the bad blocks with zeros, but does it overwrite 
data? I cannot damage my data with this, can I?

thank you.
>
>> i have check all drives and the sdc device had badblock:
>> Pass completed, 48 bad blocks found. (48/0/0 errors)
>> but die binary dont give me the info where they are..
>> i have used this command in a screen badblocks -v /dev/sdc1
> "man badblocks"
>
> You should use the "-o" option to save the list.
>
> Phil


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21  7:04                   ` Stone
@ 2013-02-21  9:42                     ` stone
  2013-02-21 13:29                       ` Phil Turmel
  2013-02-21 13:15                     ` Phil Turmel
  1 sibling, 1 reply; 79+ messages in thread
From: stone @ 2013-02-21  9:42 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 21.02.2013 08:04, schrieb Stone:
> Am 20.02.2013 19:39, schrieb Phil Turmel:
>> On 02/20/2013 01:32 PM, Stone wrote:
>>>>> Am 19.02.2013 23:08, schrieb Phil Turmel:
>>>>>> Serious issue #1:
>>>>>>
>>>>>> You have unreadable sectors on sdc.  When you hit them during 
>>>>>> rebuild,
>>>>>> sdc will be kicked out (again).  They might not be permanent errors,
>>>>>> but
>>>>>> you can't tell until the drive is given fresh data to write over 
>>>>>> them.
>>>>>>
>>>>>> You have two choices:
>>>>>>
>>>>>> 1) use ddrescue to copy sdc onto a new drive, then use it in 
>>>>>> place of
>>>>>> sdc when you re-create the array, or
>>>>>>
>>>>>> 2) use badblocks to find the exact locations of the bad sectors, 
>>>>>> then
>>>>>> write zeros to those sectors using dd.
>>>>>>
>>>>>> Either way, you have lost whatever those sectors used to hold.
>>> befor i will recreate the raid with an older mdadm i would search the
>>> badblocks. is this right?
>> Yes, and write zeros to those blocks to either fix them or relocate 
>> them.
> Ok i have now a list of my badblocks.
> Now i fix them with dd
> dd if=/dev/zero of=/dev/sdc1 bs=1073006628 cout=1
I think this is the right way -> dd if=/dev/zero of=/dev/sdc1 bs=4096 
count=1 seek=1073006628 (result of badblocks, 48 pieces in my case)?

> and this for all badblocks?
>
> with this command i fill the badblocks with a null but override data? 
> i cannot damage my data with this or?
>
> thank you.
>>
>>> i have check all drives and the sdc device had badblock:
>>> Pass completed, 48 bad blocks found. (48/0/0 errors)
>>> but die binary dont give me the info where they are..
>>> i have used this command in a screen badblocks -v /dev/sdc1
>> "man badblocks"
>>
>> You should use the "-o" option to save the list.
>>
>> Phil
>


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21  7:04                   ` Stone
  2013-02-21  9:42                     ` stone
@ 2013-02-21 13:15                     ` Phil Turmel
  1 sibling, 0 replies; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 13:15 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/21/2013 02:04 AM, Stone wrote:

> Ok i have now a list of my badblocks.
> Now i fix them with dd
> dd if=/dev/zero of=/dev/sdc1 bs=1073006628 cout=1
> and this for all badblocks?
> 
> with this command i fill the badblocks with a null but override data? i
> cannot damage my data with this or?

This will destroy a large part of your disk.  :-(

But you've already figured that out...

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21  9:42                     ` stone
@ 2013-02-21 13:29                       ` Phil Turmel
  2013-02-21 14:19                         ` stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 13:29 UTC (permalink / raw)
  To: stone; +Cc: linux-raid

On 02/21/2013 04:42 AM, stone@heisl.org wrote:

> i think this is the right way -> dd if=/dev/zero of=/dev/sdc1 bs=4096
> count=1 seek=1073006628 (result of badblocks in my case 48 piece's)?

Yes, but for safety when typing a command line, I always put of= last.
Just in case I hit the <enter> key accidentally:

> dd if=/dev/zero bs=4096 count=1 seek=1073006628 of=/dev/sdc1

>> and this for all badblocks?

Yes.

You should double-check the filesystem blocksize--it is usually 4096 but
ext4 allows you to change it.  "fsck -n" will report the total size of
the filesystem in its blocks.  Divide that into the total size of the
device to get the block size.

Phil


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 13:29                       ` Phil Turmel
@ 2013-02-21 14:19                         ` stone
  2013-02-21 15:04                           ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: stone @ 2013-02-21 14:19 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 21.02.2013 14:29, schrieb Phil Turmel:
> On 02/21/2013 04:42 AM, stone@heisl.org wrote:
>
>> i think this is the right way -> dd if=/dev/zero of=/dev/sdc1 bs=4096
>> count=1 seek=1073006628 (result of badblocks in my case 48 piece's)?
> Yes, but for safety when typing a command line, I always put of= last.
> Just in case I hit the <enter> key accidentally:
Thx for the hint ;-)
>> dd if=/dev/zero bs=4096 count=1 seek=1073006628 of=/dev/sdc1
>>> and this for all badblocks?
> Yes.
>
> You should double-check the filesystem blocksize--it is usually 4096 but
> ext4 allows you to change it.  "fsck -n" will report the total size of
> the filesystem in its blocks.  Divide that into the total size of the
> device to get the block size.
>
> Phil
>
Oh, great idea :)
But I don't get a good result:
fsck -n /dev/sdc1
fsck from util-linux 2.20.1
fsck: fsck.linux_raid_member: not found
fsck: error 2 while executing fsck.linux_raid_member for /dev/sdc1

fsck -n /dev/md2
fsck from util-linux 2.20.1
e2fsck 1.42 (29-Nov-2011)
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/md2

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
     e2fsck -b 8193 <device>

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 14:19                         ` stone
@ 2013-02-21 15:04                           ` Phil Turmel
  2013-02-21 15:30                             ` stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 15:04 UTC (permalink / raw)
  To: stone; +Cc: linux-raid

On 02/21/2013 09:19 AM, stone@heisl.org wrote:

> o greate idea :)

Whoops!  Not a great idea.  This is a member device.
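
As an aside, blkid shows what each device actually holds (it only reads
the signatures; nothing is modified):

blkid /dev/sdc1    # TYPE="linux_raid_member" -- md metadata, nothing to fsck
blkid /dev/md2     # TYPE="crypto_LUKS" once the array is assembled correctly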

> but i dont get a good result
> fsck -n /dev/sdc1
> fsck from util-linux 2.20.1
> fsck: fsck.linux_raid_member: not found
> fsck: error 2 while executing fsck.linux_raid_member for /dev/sdc1
> 
> fsck -n /dev/md2
> fsck from util-linux 2.20.1
> e2fsck 1.42 (29-Nov-2011)
> fsck.ext2: Superblock invalid, trying backup blocks...
> fsck.ext2: Bad magic number in super-block while trying to open /dev/md2
> 
> The superblock could not be read or does not describe a correct ext2
> filesystem.  If the device is valid and it really contains an ext2
> filesystem (and not swap or ufs or something else), then the superblock
> is corrupt, and you might try running e2fsck with an alternate superblock:
>     e2fsck -b 8193 <device>

Ignore this.  As long as badblocks was using 4096, then the dd command
is correct.

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 15:04                           ` Phil Turmel
@ 2013-02-21 15:30                             ` stone
  2013-02-21 15:38                               ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: stone @ 2013-02-21 15:30 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 21.02.2013 16:04, schrieb Phil Turmel:
> On 02/21/2013 09:19 AM, stone@heisl.org wrote:
>
>> o greate idea :)
> Whoops!  Not a great idea.  This is a member device.
>
>> but i dont get a good result
>> fsck -n /dev/sdc1
>> fsck from util-linux 2.20.1
>> fsck: fsck.linux_raid_member: not found
>> fsck: error 2 while executing fsck.linux_raid_member for /dev/sdc1
>>
>> fsck -n /dev/md2
>> fsck from util-linux 2.20.1
>> e2fsck 1.42 (29-Nov-2011)
>> fsck.ext2: Superblock invalid, trying backup blocks...
>> fsck.ext2: Bad magic number in super-block while trying to open /dev/md2
>>
>> The superblock could not be read or does not describe a correct ext2
>> filesystem.  If the device is valid and it really contains an ext2
>> filesystem (and not swap or ufs or something else), then the superblock
>> is corrupt, and you might try running e2fsck with an alternate superblock:
>>      e2fsck -b 8193 <device>
> Ignore this.  As long as badblocks was using 4096, then the dd command
> is correct.
>
> Phil
dd if=/dev/zero bs=4096 count=1 seek=1073006628 of=/dev/sdc1
dd: `/dev/sdc1': cannot seek: Invalid argument
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0,000493485 s, 0,0 kB/s

Is there a problem with the bs parameter?
Should I try dd if=/dev/zero bs=512 count=8 seek=1073006628 of=/dev/sdc1
instead?

thx.

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 15:30                             ` stone
@ 2013-02-21 15:38                               ` Phil Turmel
  2013-02-21 15:49                                 ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 15:38 UTC (permalink / raw)
  To: stone; +Cc: linux-raid

On 02/21/2013 10:30 AM, stone@heisl.org wrote:

> dd if=/dev/zero bs=4096 count=1 seek=1073006628 of=/dev/sdc1
> dd: `/dev/sdc1': cannot seek: Invalid argument
> 0+0 records in
> 0+0 records out
> 0 bytes (0 B) copied, 0,000493485 s, 0,0 kB/s
> 
> is there a problem with the bs parameter?
> shoud i try dd if=/dev/zero bs=512 count=8 seek=1073006628 of=/dev/sdc1
> ?

How did you get 1073006628?  That is around the 4T mark?

Please show the badblocks output file.

Phil


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 15:38                               ` Phil Turmel
@ 2013-02-21 15:49                                 ` Phil Turmel
  2013-02-21 16:32                                   ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 15:49 UTC (permalink / raw)
  To: stone; +Cc: linux-raid

On 02/21/2013 10:38 AM, Phil Turmel wrote:
> On 02/21/2013 10:30 AM, stone@heisl.org wrote:
> 
>> dd if=/dev/zero bs=4096 count=1 seek=1073006628 of=/dev/sdc1
>> dd: `/dev/sdc1': cannot seek: Invalid argument
>> 0+0 records in
>> 0+0 records out
>> 0 bytes (0 B) copied, 0,000493485 s, 0,0 kB/s
>>
>> is there a problem with the bs parameter?
>> shoud i try dd if=/dev/zero bs=512 count=8 seek=1073006628 of=/dev/sdc1
>> ?
> 
> How did you get 1073006628?  That is around the 4T mark?
> 
> Please show the badblocks output file.

I'm going to guess you didn't specify the block size when you used
badblocks.  It defaults to 1024.  If so, dd needs "bs=1024"
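
A quick bit of arithmetic supports that guess (the partition is roughly
2 TB according to the mdadm -E report earlier in the thread):

echo $((1073006628 * 4096))   # ~4.4 TB: past the end of the partition,
                              # which is why the seek failed
echo $((1073006628 * 1024))   # ~1.1 TB: fits, so bs=1024 is plausible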

It is likely that your 48 errors are really 12 errors, four sequential
"blocks" for each.  Your drives are advanced format, so they really have
4k sectors, and that should have been specified to badblocks.

If so, you need to fix the sequential blocks together, or the drive will
fail when it tries to read-modify-write the rest of the 4k sector.

You probably need:

dd if=/dev/zero bs=1024 count=4 seek=1073006628 of=/dev/sdc1

But recheck everything carefully.  You can't undo whatever dd does.

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 15:49                                 ` Phil Turmel
@ 2013-02-21 16:32                                   ` Stone
  2013-02-21 16:41                                     ` Phil Turmel
  2013-02-21 22:20                                     ` Chris Murphy
  0 siblings, 2 replies; 79+ messages in thread
From: Stone @ 2013-02-21 16:32 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 21.02.2013 16:49, schrieb Phil Turmel:
> On 02/21/2013 10:38 AM, Phil Turmel wrote:
>> On 02/21/2013 10:30 AM, stone@heisl.org wrote:
>>
>>> dd if=/dev/zero bs=4096 count=1 seek=1073006628 of=/dev/sdc1
>>> dd: `/dev/sdc1': cannot seek: Invalid argument
>>> 0+0 records in
>>> 0+0 records out
>>> 0 bytes (0 B) copied, 0,000493485 s, 0,0 kB/s
>>>
>>> is there a problem with the bs parameter?
>>> shoud i try dd if=/dev/zero bs=512 count=8 seek=1073006628 of=/dev/sdc1
>>> ?
>> How did you get 1073006628?  That is around the 4T mark?
>>
>> Please show the badblocks output file.
This is my output from badblocks:
1073006628
1073006629
1073006630
1073006631
1073006632
1073006633
1073006634
1073006635
1073006636
1073006637
1073006638
1073006639
1073101016
1073101017
1073101018
1073101019
1073101020
1073101021
1073101022
1073101023
1073101024
1073101025
1073101026
1073101027
1335739456
1335739457
1335739458
1335739459
1335739460
1335739461
1335739462
1335739463
1346771164
1346771165
1346771166
1346771167
1346771168
1346771169
1346771170
1346771171
1348581732
1348581733
1348581734
1348581735
1348581736
1348581737
1348581738
1348581739
> I'm going to guess you didn't specify the block size when you used
> badblocks.  It defaults to 1024.  If so, dd needs "bs=1024"
>
> It is likely that your 48 errors are really 12 errors, four sequential
> "blocks" for each.  Your drives are advanced format, so they really have
> 4k sectors, and that should have been specified to badblocks.
>
> If so, you need to fix the sequential blocks together, or the drive will
> fail to perform read-modify-write.
>
> You probably need:
>
> dd if=/dev/zero bs=1024 count=4 seek=1073006628 of=/dev/sdc1
>
> But recheck everything carefully.  You can't undo whatever dd does.
>
> Phil
I will do this carefully. This is why I check each command with you before 
I press the destructive return key.
Yes, I think I have 4k sectors.
This means only 12 blocks are damaged, and I do the dd only for every 
fourth block.

For example:
dd if=/dev/zero bs=1024 count=4 seek=1073006628 of=/dev/sdc1
dd if=/dev/zero bs=1024 count=4 seek=1073006632 of=/dev/sdc1
dd if=/dev/zero bs=1024 count=4 seek=1073006636 of=/dev/sdc1

I think this must work, but what should I do if I get the same error? 
Try the next block in the seek?

Thanks



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 16:32                                   ` Stone
@ 2013-02-21 16:41                                     ` Phil Turmel
  2013-02-21 16:43                                       ` Stone
  2013-02-21 22:29                                       ` Chris Murphy
  2013-02-21 22:20                                     ` Chris Murphy
  1 sibling, 2 replies; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 16:41 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/21/2013 11:32 AM, Stone wrote:

> This is my ouput from the badblocks

> 1073006628
> 1073006629
> 1073006630
> 1073006631
> 1073006632
> 1073006633
> 1073006634
> 1073006635
> 1073006636
> 1073006637
> 1073006638
> 1073006639

These 12 are together.  (Three real sectors.)

> 1073101016
> 1073101017
> 1073101018
> 1073101019
> 1073101020
> 1073101021
> 1073101022
> 1073101023
> 1073101024
> 1073101025
> 1073101026
> 1073101027

And these twelve are together.

> 1335739456
> 1335739457
> 1335739458
> 1335739459
> 1335739460
> 1335739461
> 1335739462
> 1335739463

These eight.

> 1346771164
> 1346771165
> 1346771166
> 1346771167
> 1346771168
> 1346771169
> 1346771170
> 1346771171

And these eight.

> 1348581732
> 1348581733
> 1348581734
> 1348581735
> 1348581736
> 1348581737
> 1348581738
> 1348581739

And these eight.

So you actually have five bad spots, two or three sectors apiece.

dd if=/dev/zero bs=1024 count=12 seek=1073006628 of=/dev/sdc1
dd if=/dev/zero bs=1024 count=12 seek=1073101016 of=/dev/sdc1
dd if=/dev/zero bs=1024 count=8 seek=1335739456 of=/dev/sdc1
dd if=/dev/zero bs=1024 count=8 seek=1346771164 of=/dev/sdc1
dd if=/dev/zero bs=1024 count=8 seek=1348581732 of=/dev/sdc1
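
A further optional check once those writes complete (a small sketch:
re-read one repaired range and watch the SMART pending-sector counter):

dd if=/dev/sdc1 of=/dev/null bs=1024 count=12 skip=1073006628
# Current_Pending_Sector should drop back to 0
smartctl -A /dev/sdc | grep -i pending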

One last check:  Did you run "badblocks /dev/sdc" or "badblocks /dev/sdc1" ?

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 16:41                                     ` Phil Turmel
@ 2013-02-21 16:43                                       ` Stone
  2013-02-21 16:46                                         ` Phil Turmel
  2013-02-21 22:29                                       ` Chris Murphy
  1 sibling, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-21 16:43 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 21.02.2013 17:41, schrieb Phil Turmel:
> On 02/21/2013 11:32 AM, Stone wrote:
>
>> This is my ouput from the badblocks
>> 1073006628
>> 1073006629
>> 1073006630
>> 1073006631
>> 1073006632
>> 1073006633
>> 1073006634
>> 1073006635
>> 1073006636
>> 1073006637
>> 1073006638
>> 1073006639
> These 12 are together.  (Three real sectors.)
>
>> 1073101016
>> 1073101017
>> 1073101018
>> 1073101019
>> 1073101020
>> 1073101021
>> 1073101022
>> 1073101023
>> 1073101024
>> 1073101025
>> 1073101026
>> 1073101027
> And these twelve are together.
>
>> 1335739456
>> 1335739457
>> 1335739458
>> 1335739459
>> 1335739460
>> 1335739461
>> 1335739462
>> 1335739463
> These eight.
>
>> 1346771164
>> 1346771165
>> 1346771166
>> 1346771167
>> 1346771168
>> 1346771169
>> 1346771170
>> 1346771171
> And these eight.
>
>> 1348581732
>> 1348581733
>> 1348581734
>> 1348581735
>> 1348581736
>> 1348581737
>> 1348581738
>> 1348581739
> And these eight.
>
> So you actually have five bad spots, two or three sectors apiece.
>
> dd if=/dev/zero bs=1024 count=12 seek=1073006628 of=/dev/sdc1
> dd if=/dev/zero bs=1024 count=12 seek=1073101016 of=/dev/sdc1
> dd if=/dev/zero bs=1024 count=8 seek=1335739456 of=/dev/sdc1
> dd if=/dev/zero bs=1024 count=8 seek=1346771164 of=/dev/sdc1
> dd if=/dev/zero bs=1024 count=8 seek=1348581732 of=/dev/sdc1
>
> One last check:  Did you run "badblocks /dev/sdc" or "badblocks /dev/sdc1" ?
>
> Phil
"/dev/sdc1"
history: "badblocks -v /dev/sdc1 -o /root/badblocks_sdc1.txt"

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 16:43                                       ` Stone
@ 2013-02-21 16:46                                         ` Phil Turmel
  2013-02-21 16:51                                           ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 16:46 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/21/2013 11:43 AM, Stone wrote:
> Am 21.02.2013 17:41, schrieb Phil Turmel:

>> So you actually have five bad spots, two or three sectors apiece.
>>
>> dd if=/dev/zero bs=1024 count=12 seek=1073006628 of=/dev/sdc1
>> dd if=/dev/zero bs=1024 count=12 seek=1073101016 of=/dev/sdc1
>> dd if=/dev/zero bs=1024 count=8 seek=1335739456 of=/dev/sdc1
>> dd if=/dev/zero bs=1024 count=8 seek=1346771164 of=/dev/sdc1
>> dd if=/dev/zero bs=1024 count=8 seek=1348581732 of=/dev/sdc1
>>
>> One last check:  Did you run "badblocks /dev/sdc" or "badblocks
>> /dev/sdc1" ?
>>
>> Phil
> "/dev/sdc1"
> history: "badblocks -v /dev/sdc1 -o /root/badblocks_sdc1.txt"

Very good.  So have you learned enough to be confident when you hit the
<enter> key?  :-)

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 16:46                                         ` Phil Turmel
@ 2013-02-21 16:51                                           ` Stone
  2013-02-21 16:54                                             ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-21 16:51 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 21.02.2013 17:46, schrieb Phil Turmel:
> On 02/21/2013 11:43 AM, Stone wrote:
>> Am 21.02.2013 17:41, schrieb Phil Turmel:
>>> So you actually have five bad spots, two or three sectors apiece.
>>>
>>> dd if=/dev/zero bs=1024 count=12 seek=1073006628 of=/dev/sdc1
>>> dd if=/dev/zero bs=1024 count=12 seek=1073101016 of=/dev/sdc1
>>> dd if=/dev/zero bs=1024 count=8 seek=1335739456 of=/dev/sdc1
>>> dd if=/dev/zero bs=1024 count=8 seek=1346771164 of=/dev/sdc1
>>> dd if=/dev/zero bs=1024 count=8 seek=1348581732 of=/dev/sdc1
>>>
>>> One last check:  Did you run "badblocks /dev/sdc" or "badblocks
>>> /dev/sdc1" ?
>>>
>>> Phil
>> "/dev/sdc1"
>> history: "badblocks -v /dev/sdc1 -o /root/badblocks_sdc1.txt"
> Very good.  So have you learned enough to be confident when you hit the
> <enter> key?  :-)
>
> Phil
;-)

The deadly key was pressed.
Here is the output:
root@bender:~# dd if=/dev/zero bs=1024 count=12 seek=1073006628 of=/dev/sdc1
12+0 records in
12+0 records out
12288 bytes (12 kB) copied, 0,00019109 s, 64,3 MB/s
root@bender:~# dd if=/dev/zero bs=1024 count=12 seek=1073101016 of=/dev/sdc1
12+0 records in
12+0 records out
12288 bytes (12 kB) copied, 0,00017799 s, 69,0 MB/s
root@bender:~# dd if=/dev/zero bs=1024 count=8 seek=1335739456 of=/dev/sdc1
8+0 records in
8+0 records out
8192 bytes (8,2 kB) copied, 0,000159338 s, 51,4 MB/s
root@bender:~# dd if=/dev/zero bs=1024 count=8 seek=1346771164 of=/dev/sdc1
8+0 records in
8+0 records out
8192 bytes (8,2 kB) copied, 0,000161977 s, 50,6 MB/s
root@bender:~# dd if=/dev/zero bs=1024 count=8 seek=1348581732 of=/dev/sdc1
8+0 records in
8+0 records out
8192 bytes (8,2 kB) copied, 0,000157825 s, 51,9 MB/s

Now I boot my server with a live CD and re-create my RAID.
If that is successful, I open the LUKS container and check it.
Here are my commands for this step:
mdadm --create /dev/md2 --assume-clean --verbose --level=5 
--raid-devices=4 /dev/sdc1 /dev/sdd1 missing /dev/sdf1
cryptsetup luksOpen /dev/md2 md2_nas
fsck -n /dev/md2


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 16:51                                           ` Stone
@ 2013-02-21 16:54                                             ` Phil Turmel
  2013-02-21 17:17                                               ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 16:54 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/21/2013 11:51 AM, Stone wrote:

> the dead-key was pressed
> here the output:
> root@bender:~# dd if=/dev/zero bs=1024 count=12 seek=1073006628
> of=/dev/sdc1
> 12+0 records in
> 12+0 records out
> 12288 bytes (12 kB) copied, 0,00019109 s, 64,3 MB/s
> root@bender:~# dd if=/dev/zero bs=1024 count=12 seek=1073101016
> of=/dev/sdc1
> 12+0 records in
> 12+0 records out
> 12288 bytes (12 kB) copied, 0,00017799 s, 69,0 MB/s
> root@bender:~# dd if=/dev/zero bs=1024 count=8 seek=1335739456 of=/dev/sdc1
> 8+0 records in
> 8+0 records out
> 8192 bytes (8,2 kB) copied, 0,000159338 s, 51,4 MB/s
> root@bender:~# dd if=/dev/zero bs=1024 count=8 seek=1346771164 of=/dev/sdc1
> 8+0 records in
> 8+0 records out
> 8192 bytes (8,2 kB) copied, 0,000161977 s, 50,6 MB/s
> root@bender:~# dd if=/dev/zero bs=1024 count=8 seek=1348581732 of=/dev/sdc1
> 8+0 records in
> 8+0 records out
> 8192 bytes (8,2 kB) copied, 0,000157825 s, 51,9 MB/s

Very good.

> now i boot my server with a live cd and recreate my raid.
> if this was successfully i open the LUKS and check it.
> here my commands for this step:
> mdadm --create /dev/md2 --assume-clean --verbose --level=5
> --raid-devices=4 /dev/sdc1 /dev/sdd1 missing /dev/sdf1
> cryptsetup luksOpen /dev/md2 md2_nas
> fsck -n /dev/md2

Looks good.

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 16:54                                             ` Phil Turmel
@ 2013-02-21 17:17                                               ` Stone
  2013-02-21 17:23                                                 ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-21 17:17 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 21.02.2013 17:54, schrieb Phil Turmel:
> On 02/21/2013 11:51 AM, Stone wrote:
>
>> the dead-key was pressed
>> here the output:
>> root@bender:~# dd if=/dev/zero bs=1024 count=12 seek=1073006628
>> of=/dev/sdc1
>> 12+0 records in
>> 12+0 records out
>> 12288 bytes (12 kB) copied, 0,00019109 s, 64,3 MB/s
>> root@bender:~# dd if=/dev/zero bs=1024 count=12 seek=1073101016
>> of=/dev/sdc1
>> 12+0 records in
>> 12+0 records out
>> 12288 bytes (12 kB) copied, 0,00017799 s, 69,0 MB/s
>> root@bender:~# dd if=/dev/zero bs=1024 count=8 seek=1335739456 of=/dev/sdc1
>> 8+0 records in
>> 8+0 records out
>> 8192 bytes (8,2 kB) copied, 0,000159338 s, 51,4 MB/s
>> root@bender:~# dd if=/dev/zero bs=1024 count=8 seek=1346771164 of=/dev/sdc1
>> 8+0 records in
>> 8+0 records out
>> 8192 bytes (8,2 kB) copied, 0,000161977 s, 50,6 MB/s
>> root@bender:~# dd if=/dev/zero bs=1024 count=8 seek=1348581732 of=/dev/sdc1
>> 8+0 records in
>> 8+0 records out
>> 8192 bytes (8,2 kB) copied, 0,000157825 s, 51,9 MB/s
> Very good.
>
>> now i boot my server with a live cd and recreate my raid.
>> if this was successfully i open the LUKS and check it.
>> here my commands for this step:
>> mdadm --create /dev/md2 --assume-clean --verbose --level=5
>> --raid-devices=4 /dev/sdc1 /dev/sdd1 missing /dev/sdf1
>> cryptsetup luksOpen /dev/md2 md2_nas
>> fsck -n /dev/md2
> Looks good.
>
> Phil
I created the RAID successfully, and the LUKS container is open!
This is the output of my fsck:
fsck -n /dev/mapper/md2_nas
fsck from util-linux 2.19.1
e2fsck 1.41.14 (22-Dec-2010)
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open 
/dev/mapper/md2_nas

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
     e2fsck -b 8193 <device>

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 17:17                                               ` Stone
@ 2013-02-21 17:23                                                 ` Stone
  2013-02-21 17:36                                                   ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-21 17:23 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 21.02.2013 18:17, schrieb Stone:
> Am 21.02.2013 17:54, schrieb Phil Turmel:
>> On 02/21/2013 11:51 AM, Stone wrote:
>>
>>> the dead-key was pressed
>>> here the output:
>>> root@bender:~# dd if=/dev/zero bs=1024 count=12 seek=1073006628
>>> of=/dev/sdc1
>>> 12+0 records in
>>> 12+0 records out
>>> 12288 bytes (12 kB) copied, 0,00019109 s, 64,3 MB/s
>>> root@bender:~# dd if=/dev/zero bs=1024 count=12 seek=1073101016
>>> of=/dev/sdc1
>>> 12+0 records in
>>> 12+0 records out
>>> 12288 bytes (12 kB) copied, 0,00017799 s, 69,0 MB/s
>>> root@bender:~# dd if=/dev/zero bs=1024 count=8 seek=1335739456 
>>> of=/dev/sdc1
>>> 8+0 records in
>>> 8+0 records out
>>> 8192 bytes (8,2 kB) copied, 0,000159338 s, 51,4 MB/s
>>> root@bender:~# dd if=/dev/zero bs=1024 count=8 seek=1346771164 
>>> of=/dev/sdc1
>>> 8+0 records in
>>> 8+0 records out
>>> 8192 bytes (8,2 kB) copied, 0,000161977 s, 50,6 MB/s
>>> root@bender:~# dd if=/dev/zero bs=1024 count=8 seek=1348581732 
>>> of=/dev/sdc1
>>> 8+0 records in
>>> 8+0 records out
>>> 8192 bytes (8,2 kB) copied, 0,000157825 s, 51,9 MB/s
>> Very good.
>>
>>> now i boot my server with a live cd and recreate my raid.
>>> if this was successfully i open the LUKS and check it.
>>> here my commands for this step:
>>> mdadm --create /dev/md2 --assume-clean --verbose --level=5
>>> --raid-devices=4 /dev/sdc1 /dev/sdd1 missing /dev/sdf1
>>> cryptsetup luksOpen /dev/md2 md2_nas
>>> fsck -n /dev/md2
>> Looks good.
>>
>> Phil
> i created the raid successfulle and the LUKS is open!
> this is the output of my fsck
> fsck -n /dev/mapper/md2_nas
> fsck from util-linux 2.19.1
> e2fsck 1.41.14 (22-Dec-2010)
> fsck.ext2: Superblock invalid, trying backup blocks...
> fsck.ext2: Bad magic number in super-block while trying to open 
> /dev/mapper/md2_nas
>
> The superblock could not be read or does not describe a correct ext2
> filesystem.  If the device is valid and it really contains an ext2
> filesystem (and not swap or ufs or something else), then the superblock
> is corrupt, and you might try running e2fsck with an alternate 
> superblock:
>     e2fsck -b 8193 <device>
I could try to restore the superblock. The filesystem is ext4...
Is this the right way?
mke2fs -n -j /dev/mapper/md2_nas
mke2fs 1.41.14 (22-Dec-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
366288896 inodes, 1465133568 blocks
73256678 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
44713 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 
2654208,
         4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
         102400000, 214990848, 512000000, 550731776, 644972544


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 17:23                                                 ` Stone
@ 2013-02-21 17:36                                                   ` Phil Turmel
  2013-02-21 17:47                                                     ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 17:36 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/21/2013 12:23 PM, Stone wrote:

> i coud try to restore the superblock. the filesystem is ext4....
> is this the right way?
> mke2fs -n -j /dev/mapper/md2_nas

Partly. (scary) Without the "-n", that will destroy everything!

> mke2fs 1.41.14 (22-Dec-2010)
> Filesystem label=
> OS type: Linux
> Block size=4096 (log=2)
> Fragment size=4096 (log=2)
> Stride=128 blocks, Stripe width=384 blocks
> 366288896 inodes, 1465133568 blocks
> 73256678 blocks (5.00%) reserved for the super user
> First data block=0
> Maximum filesystem blocks=0
> 44713 block groups
> 32768 blocks per group, 32768 fragments per group
> 8192 inodes per group
> Superblock backups stored on blocks:
>         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
> 2654208,
>         4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
>         102400000, 214990848, 512000000, 550731776, 644972544

But it does give you the locations of the backup superblocks.  Use these
numbers, starting with 32768, as "xxxx" in:

fsck.ext4 -n -b xxxx /dev

Once you give it a superblock that hasn't been corrupted, it should be
able to check the rest of the filesystem.  There will be damage near the
beginning, and probably more damage where you had to put zeros.

If it looks like that, do it again without "-n" to actually fix it.
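
For instance, with the backup locations listed above (still read-only;
nothing is written as long as "-n" is given):

fsck.ext4 -n -b 32768 /dev/mapper/md2_nas
fsck.ext4 -n -b 98304 /dev/mapper/md2_nas   # next candidate if 32768 is bad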

Phil


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 17:36                                                   ` Phil Turmel
@ 2013-02-21 17:47                                                     ` Stone
  2013-02-21 18:00                                                       ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-21 17:47 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 21.02.2013 18:36, Phil Turmel wrote:
> On 02/21/2013 12:23 PM, Stone wrote:
>
>> i coud try to restore the superblock. the filesystem is ext4....
>> is this the right way?
>> mke2fs -n -j /dev/mapper/md2_nas
> Partly. (scary) Without the "-n", that will destroy everything!
>
>> mke2fs 1.41.14 (22-Dec-2010)
>> Filesystem label=
>> OS type: Linux
>> Block size=4096 (log=2)
>> Fragment size=4096 (log=2)
>> Stride=128 blocks, Stripe width=384 blocks
>> 366288896 inodes, 1465133568 blocks
>> 73256678 blocks (5.00%) reserved for the super user
>> First data block=0
>> Maximum filesystem blocks=0
>> 44713 block groups
>> 32768 blocks per group, 32768 fragments per group
>> 8192 inodes per group
>> Superblock backups stored on blocks:
>>          32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
>> 2654208,
>>          4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
>>          102400000, 214990848, 512000000, 550731776, 644972544
> But it does give you the locations of the backup superblocks.  Use these
> numbers, starting with 32768, as "xxxx" in:
>
> fsck.ext4 -n -b xxxx /dev
>
> Once you give it a superblock that hasn't been corrupted, it should be
> able to check the rest of the filesystem.  There will be damage near the
> beginning, and probably more damage where you had to put zeros.
>
> If it looks like that, do it again without "-n" to actually fix it.
>
> Phil
>
ok, I think I don't completely understand you.
I restore the superblock now with --> fsck.ext4 -bv 4096000
/dev/mapper/md2_nas

When this is done I run a full filesystem check --> fsck.ext4 /dev/md2_nas
and answer the questions.

right?
thx.

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 17:47                                                     ` Stone
@ 2013-02-21 18:00                                                       ` Phil Turmel
  2013-02-21 18:08                                                         ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 18:00 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/21/2013 12:47 PM, Stone wrote:
> Am 21.02.2013 18:36, schrieb Phil Turmel:

>> But it does give you the locations of the backup superblocks.  Use these
>> numbers, starting with 32768, as "xxxx" in:
>>
>> fsck.ext4 -n -b xxxx /dev
>>
>> Once you give it a superblock that hasn't been corrupted, it should be
>> able to check the rest of the filesystem.  There will be damage near the
>> beginning, and probably more damage where you had to put zeros.
>>
>> If it looks like that, do it again without "-n" to actually fix it.
>>
>> Phil
>>
> ok. i think i dont understand you not complete.
> i restore now the superblock with --> fsck.ext4 -bv 4096000
> /dev/mapper/md2_nas

No!  You must keep using "-n" until you have seen a mostly-clean report!
 We don't know yet that the chunk size is right.

Leaving off "-n" will simultaneously fix the superblock (and all other
backup copies) and continue to fix the rest of the filesystem.  You
mustn't do that with a wrong chunk size--it will damage much more.

> when this is done i make full filesystemcheck --> fsck.ext4 /dev/md2_nas
> and answer the quest questions.

> right?

No.

Just do the "fsck -n -b xxxx" combinations until you find a good
superblock.  That report will also show if there are many other errors.
 Expect scattered damage in the region < 384MB due to the wrong data
offset.  After that, there should only be errors where the new zeros
are, and maybe a few scattered errors from the original crash.  If there are
many more errors, you have the wrong chunk size.

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 18:00                                                       ` Phil Turmel
@ 2013-02-21 18:08                                                         ` Stone
  2013-02-21 18:11                                                           ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-21 18:08 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 21.02.2013 19:00, Phil Turmel wrote:
> On 02/21/2013 12:47 PM, Stone wrote:
>> Am 21.02.2013 18:36, schrieb Phil Turmel:
>>> But it does give you the locations of the backup superblocks.  Use these
>>> numbers, starting with 32768, as "xxxx" in:
>>>
>>> fsck.ext4 -n -b xxxx /dev
>>>
>>> Once you give it a superblock that hasn't been corrupted, it should be
>>> able to check the rest of the filesystem.  There will be damage near the
>>> beginning, and probably more damage where you had to put zeros.
>>>
>>> If it looks like that, do it again without "-n" to actually fix it.
>>>
>>> Phil
>>>
>> ok. i think i dont understand you not complete.
>> i restore now the superblock with --> fsck.ext4 -bv 4096000
>> /dev/mapper/md2_nas
> No!  You must keep using "-n" until you have seen a mostly-clean report!
>   We don't know yet that the chunk size is right.
>
> Leaving off "-n" will simultaneously fix the superblock (and all other
> backup copies) and continue to fix the rest of the filesystem.  You
> mustn't do that with a wrong chunk size--it will damage much more.
ok
>> when this is done i make full filesystemcheck --> fsck.ext4 /dev/md2_nas
>> and answer the quest questions.
>> right?
> No.
>
> Just do the "fsck -n -b xxxx" combinations until you find a good
> superblock.  That report will also show if there are many other errors.
>   Expect scattered damage in the region < 384MB due to the wrong data
> offset.  After that, there should only be errors where the new zeros
> are, and maybe a few scattered errors from the original crash.  If there
> many more errors, you have the wrong chunk size.
>
> Phil
ok, I have checked all the superblocks but I always get the same message.
Here is one sample:
fsck.ext4 -n -b 644972544 /dev/mapper/md2_nas
e2fsck 1.41.14 (22-Dec-2010)
fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
     e2fsck -b 8193 <device>

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 18:08                                                         ` Stone
@ 2013-02-21 18:11                                                           ` Phil Turmel
  2013-02-21 18:29                                                             ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 18:11 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/21/2013 01:08 PM, Stone wrote:

> ok i have checked all superblocks but i get always the same message.
> here one sample:
> fsck.ext4 -n -b 644972544 /dev/mapper/md2_nas
> e2fsck 1.41.14 (22-Dec-2010)
> fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas
> 
> The superblock could not be read or does not describe a correct ext2
> filesystem.  If the device is valid and it really contains an ext2
> filesystem (and not swap or ufs or something else), then the superblock
> is corrupt, and you might try running e2fsck with an alternate superblock:
>     e2fsck -b 8193 <device>

You very likely have the wrong chunk size.

Close luks, stop md2, and re-create with --chunk=64

And then try to fsck again.  (Without -b at first.)
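
(A sketch of that sequence; /dev/sdX1, /dev/sdY1, /dev/sdZ1 are placeholders,
use exactly the member devices and order from your previous --create:)

cryptsetup luksClose md2_nas
mdadm --stop /dev/md2
mdadm --create /dev/md2 --assume-clean --verbose --level=5 \
      --raid-devices=4 --chunk=64 /dev/sdX1 /dev/sdY1 missing /dev/sdZ1
# re-check afterwards with "mdadm -E" that the data offset is still 2048 sectors
cryptsetup luksOpen /dev/md2 md2_nas
fsck.ext4 -n /dev/mapper/md2_nas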

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 18:11                                                           ` Phil Turmel
@ 2013-02-21 18:29                                                             ` Stone
  2013-02-21 18:54                                                               ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-21 18:29 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 21.02.2013 19:11, Phil Turmel wrote:
> On 02/21/2013 01:08 PM, Stone wrote:
>
>> ok i have checked all superblocks but i get always the same message.
>> here one sample:
>> fsck.ext4 -n -b 644972544 /dev/mapper/md2_nas
>> e2fsck 1.41.14 (22-Dec-2010)
>> fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas
>>
>> The superblock could not be read or does not describe a correct ext2
>> filesystem.  If the device is valid and it really contains an ext2
>> filesystem (and not swap or ufs or something else), then the superblock
>> is corrupt, and you might try running e2fsck with an alternate superblock:
>>      e2fsck -b 8193 <device>
> You very likely have the wrong chunk size.
>
> Close luks, stop md2, and re-create with --chunk=64
>
> And then try to fsck again.  (Without -b at first.)
>
> Phil
ok, with --chunk=64 I cannot open the luks.
In which chunk-size steps should I continue?

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 18:29                                                             ` Stone
@ 2013-02-21 18:54                                                               ` Phil Turmel
  2013-02-21 19:12                                                                 ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 18:54 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/21/2013 01:29 PM, Stone wrote:
> Am 21.02.2013 19:11, schrieb Phil Turmel:
>> On 02/21/2013 01:08 PM, Stone wrote:
>>
>>> ok i have checked all superblocks but i get always the same message.
>>> here one sample:
>>> fsck.ext4 -n -b 644972544 /dev/mapper/md2_nas
>>> e2fsck 1.41.14 (22-Dec-2010)
>>> fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas
>>>
>>> The superblock could not be read or does not describe a correct ext2
>>> filesystem.  If the device is valid and it really contains an ext2
>>> filesystem (and not swap or ufs or something else), then the superblock
>>> is corrupt, and you might try running e2fsck with an alternate
>>> superblock:
>>>      e2fsck -b 8193 <device>
>> You very likely have the wrong chunk size.
>>
>> Close luks, stop md2, and re-create with --chunk=64
>>
>> And then try to fsck again.  (Without -b at first.)
>>
>> Phil
> ok. with --chunk=64 i cannot open the luks.
> in witch steps (chunk) should i continue?

That is a big surprise.  The luks signature should not move with chunk
size.  Please use "mdadm -E /dev/sdc1" to recheck your data offset.

If that wasn't it, please show the hexdump for the entire luks
signature.  I'd like to see its payload offset.
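
(While the array is assembled and /dev/md2 exists, cryptsetup can also report
it directly; a sketch:)

cryptsetup luksDump /dev/md2 | grep -i 'payload offset'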

Also, if you go back to --chunk=512, open the luks, you could run the
following command to find possible superblock locations:

hexdump -C /dev/mapper/md2_nas |egrep '^[0-9a-f]+30  .+  53 ef' >sb.lst

(May take a long time to read the whole array)

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 18:54                                                               ` Phil Turmel
@ 2013-02-21 19:12                                                                 ` Stone
  2013-02-21 19:17                                                                   ` Stone
  2013-02-21 19:24                                                                   ` Phil Turmel
  0 siblings, 2 replies; 79+ messages in thread
From: Stone @ 2013-02-21 19:12 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 21.02.2013 19:54, Phil Turmel wrote:
> On 02/21/2013 01:29 PM, Stone wrote:
>> Am 21.02.2013 19:11, schrieb Phil Turmel:
>>> On 02/21/2013 01:08 PM, Stone wrote:
>>>
>>>> ok i have checked all superblocks but i get always the same message.
>>>> here one sample:
>>>> fsck.ext4 -n -b 644972544 /dev/mapper/md2_nas
>>>> e2fsck 1.41.14 (22-Dec-2010)
>>>> fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas
>>>>
>>>> The superblock could not be read or does not describe a correct ext2
>>>> filesystem.  If the device is valid and it really contains an ext2
>>>> filesystem (and not swap or ufs or something else), then the superblock
>>>> is corrupt, and you might try running e2fsck with an alternate
>>>> superblock:
>>>>       e2fsck -b 8193 <device>
>>> You very likely have the wrong chunk size.
>>>
>>> Close luks, stop md2, and re-create with --chunk=64
>>>
>>> And then try to fsck again.  (Without -b at first.)
>>>
>>> Phil
>> ok. with --chunk=64 i cannot open the luks.
>> in witch steps (chunk) should i continue?
> That is a big surprise.  The luks signature should not move with chunk
> size.  Please use "mdadm -E /dev/sdc1" to recheck your data offset.
>
> If that wasn't it, please show the hexdump for the entire luks
> signature.  I'd like to see its payload offset.
>
> Also, if you go back to --chunk=512, open the luks, you could run the
> following command to find possible superblock locations:
>
> hexdump -C /dev/mapper/md2_nas |egrep '^[0-9a-f]+30  .+  53 ef' >sb.lst
>
> (May take a long time to read the whole array)
>
> Phil
With --chunk=512 I can open the luks but I cannot find a good superblock.
Yes, I can run the hexdump, but I think it will run for 8 hours or longer.
Should I start the hexdump?
Can I try more today?

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 19:12                                                                 ` Stone
@ 2013-02-21 19:17                                                                   ` Stone
  2013-02-21 19:24                                                                   ` Phil Turmel
  1 sibling, 0 replies; 79+ messages in thread
From: Stone @ 2013-02-21 19:17 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 21.02.2013 20:12, Stone wrote:
> Am 21.02.2013 19:54, schrieb Phil Turmel:
>> On 02/21/2013 01:29 PM, Stone wrote:
>>> Am 21.02.2013 19:11, schrieb Phil Turmel:
>>>> On 02/21/2013 01:08 PM, Stone wrote:
>>>>
>>>>> ok i have checked all superblocks but i get always the same message.
>>>>> here one sample:
>>>>> fsck.ext4 -n -b 644972544 /dev/mapper/md2_nas
>>>>> e2fsck 1.41.14 (22-Dec-2010)
>>>>> fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas
>>>>>
>>>>> The superblock could not be read or does not describe a correct ext2
>>>>> filesystem.  If the device is valid and it really contains an ext2
>>>>> filesystem (and not swap or ufs or something else), then the 
>>>>> superblock
>>>>> is corrupt, and you might try running e2fsck with an alternate
>>>>> superblock:
>>>>>       e2fsck -b 8193 <device>
>>>> You very likely have the wrong chunk size.
>>>>
>>>> Close luks, stop md2, and re-create with --chunk=64
>>>>
>>>> And then try to fsck again.  (Without -b at first.)
>>>>
>>>> Phil
>>> ok. with --chunk=64 i cannot open the luks.
>>> in witch steps (chunk) should i continue?
>> That is a big surprise.  The luks signature should not move with chunk
>> size.  Please use "mdadm -E /dev/sdc1" to recheck your data offset.
>>
Sorry, I forgot:
  mdadm -E /dev/sdc1
/dev/sdc1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x0
      Array UUID : e5ace834:7cbb3655:4fd2e7b8:3e07b6d3
            Name : ubuntu:2  (local to host ubuntu)
   Creation Time : Thu Feb 21 18:57:51 2013
      Raid Level : raid5
    Raid Devices : 4

  Avail Dev Size : 3907027037 (1863.02 GiB 2000.40 GB)
      Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
   Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)
     Data Offset : 2048 sectors
    Super Offset : 8 sectors
           State : clean
     Device UUID : 368e4744:35adf66a:826f9d1b:11c606b6

     Update Time : Thu Feb 21 18:57:51 2013
        Checksum : 558db826 - correct
          Events : 0

          Layout : left-symmetric
      Chunk Size : 512K

    Device Role : Active device 0
    Array State : AA.A ('A' == active, '.' == missing)
>> If that wasn't it, please show the hexdump for the entire luks
>> signature.  I'd like to see its payload offset.
>>
>> Also, if you go back to --chunk=512, open the luks, you could run the
>> following command to find possible superblock locations:
>>
>> hexdump -C /dev/mapper/md2_nas |egrep '^[0-9a-f]+30  .+  53 ef' >sb.lst
>>
>> (May take a long time to read the whole array)
>>
>> Phil
> with --chunk=512 i can open the luks but i cannot found a good 
> superblock.
> yes i can run the hexdump but i think this runs 8 hours or longer.
> start the hexdump?
> can i try more today?


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 19:12                                                                 ` Stone
  2013-02-21 19:17                                                                   ` Stone
@ 2013-02-21 19:24                                                                   ` Phil Turmel
  2013-02-21 19:29                                                                     ` Stone
  1 sibling, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 19:24 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/21/2013 02:12 PM, Stone wrote:

>>> ok. with --chunk=64 i cannot open the luks.
>>> in witch steps (chunk) should i continue?
>> That is a big surprise.  The luks signature should not move with chunk
>> size.  Please use "mdadm -E /dev/sdc1" to recheck your data offset.
>>
>> If that wasn't it, please show the hexdump for the entire luks
>> signature.  I'd like to see its payload offset.
>>
>> Also, if you go back to --chunk=512, open the luks, you could run the
>> following command to find possible superblock locations:
>>
>> hexdump -C /dev/mapper/md2_nas |egrep '^[0-9a-f]+30  .+  53 ef' >sb.lst
>>
>> (May take a long time to read the whole array)
>>
>> Phil
> with --chunk=512 i can open the luks but i cannot found a good superblock.
> yes i can run the hexdump but i think this runs 8 hours or longer.
> start the hexdump?
> can i try more today?

Run the hexdump for half an hour or so.  If it doesn't find some
candidates in that timeframe, it probably won't.

I've never crashed a luks partition like this, so I'm feeling around a
bit.  You should understand that luks normally uses "cipher block
chaining" salted with the sector number.  If you get the blocks out of
order (wrong chunk size or offset or layout), those sectors won't
decrypt correctly.  luksOpen won't detect this.

You may have to try many chunk sizes, verifying the data offset every
time, trying to find the one that will work.

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 19:24                                                                   ` Phil Turmel
@ 2013-02-21 19:29                                                                     ` Stone
  2013-02-21 19:45                                                                       ` Phil Turmel
  2013-02-21 19:46                                                                       ` Stone
  0 siblings, 2 replies; 79+ messages in thread
From: Stone @ 2013-02-21 19:29 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 21.02.2013 20:24, Phil Turmel wrote:
> On 02/21/2013 02:12 PM, Stone wrote:
>
>>>> ok. with --chunk=64 i cannot open the luks.
>>>> in witch steps (chunk) should i continue?
>>> That is a big surprise.  The luks signature should not move with chunk
>>> size.  Please use "mdadm -E /dev/sdc1" to recheck your data offset.
>>>
>>> If that wasn't it, please show the hexdump for the entire luks
>>> signature.  I'd like to see its payload offset.
>>>
>>> Also, if you go back to --chunk=512, open the luks, you could run the
>>> following command to find possible superblock locations:
>>>
>>> hexdump -C /dev/mapper/md2_nas |egrep '^[0-9a-f]+30  .+  53 ef' >sb.lst
>>>
>>> (May take a long time to read the whole array)
>>>
>>> Phil
>> with --chunk=512 i can open the luks but i cannot found a good superblock.
>> yes i can run the hexdump but i think this runs 8 hours or longer.
>> start the hexdump?
>> can i try more today?
> Run the hexdump for half an hour or so.  If it doesn't find some
> candidates in that timeframe, it probably won't.
>
> I've never crashed a luks partition like this, so I'm feeling around a
> bit.  You should understand that luks normally uses "cipher block
> chaining" salted with the sector number.  If you get the blocks out of
> order (wrong chunk size or offset or layout), those sectors won't
> decrypt correctly.  luksOpen won't detect this.
>
> You may have to try many chunk sizes, verifying the data offset every
> time, trying to find the one that will work.
>
> Phil
ok, I will let the hexdump run for 30-60 min and then I will check more chunk sizes.
If I can open the luks, should I check the superblocks again?
fsck.ext4 -n -b <superblock> /dev/mapper/md2_nas ?

But I already see a result from the hexdump:

cat sb.lst
011e0830  17 0d bb 6e b2 37 9f f8  53 ef 5b 62 6d ab 0f b8 
|...n.7..S.[bm...|
02a1c830  91 31 32 1a 35 c9 96 ab  53 ef 02 93 05 f2 b7 65 
|.12.5...S......e|
03c48e30  dc 46 07 a9 2d ac 96 36  53 ef 61 48 d1 c7 63 05 
|.F..-..6S.aH..c.|
04c28830  7f 36 db 0a 5a 65 6c 78  53 ef 3a 31 41 83 da c2 
|.6..ZelxS.:1A...|
04c60830  9f 32 a6 e1 1a cc ef dc  53 ef 59 bd 51 ac d0 01 
|.2......S.Y.Q...|
055be030  4b ee f1 d0 8e 36 15 67  53 ef 45 75 a9 cd 3c b5 
|K....6.gS.Eu..<.|
058aa530  82 91 1f 13 6f fa 60 2f  53 ef 1a 68 80 bc a5 0c 
|....o.`/S..h....|
05def830  db fc 8d a1 f3 49 c9 a6  53 ef cf 03 f9 e3 18 00 
|.....I..S.......|
06016630  5c 9b 31 ed 40 74 ad a5  53 ef c3 a8 b5 74 e2 25 
|\.1.@t..S....t.%|
060caf30  b5 aa cd 06 57 d3 22 6c  53 ef 04 54 d6 2a 74 3f 
|....W."lS..T.*t?|
071f0530  cb 6b 07 74 60 37 e9 34  53 ef ba fa cf 2d 69 58 
|.k.t`7.4S....-iX|
087f8c30  95 57 29 2c ca d8 02 0b  53 ef 05 c4 44 17 50 1c 
|.W),....S...D.P.|
0977ac30  32 61 7d 49 fc dc 61 a0  53 ef 53 95 96 88 25 65 
|2a}I..a.S.S...%e|
0be5a330  3f 05 57 2e 8e fd 55 44  53 ef 20 0f f1 a0 5b a0 |?.W...UDS. 
...[.|
0c5de030  65 2e 3a 18 44 b3 65 37  53 ef 24 f2 96 f9 99 dc 
|e.:.D.e7S.$.....|
0d3b8230  2f eb a0 08 d3 3e 32 70  53 ef 3b 5a 0c 7f c3 51 
|/....>2pS.;Z...Q|
0d46b930  d6 e2 c5 23 61 8c b5 26  53 ef 71 3f f3 36 df f6 
|...#a..&S.q?.6..|
0da68330  2a f2 24 fd df 4d 10 71  53 ef 8c b7 da fe 28 b3 
|*.$..M.qS.....(.|
0dfb9730  cc 4a a3 7f 99 d0 9c 59  53 ef f6 78 c4 5e 7c fd 
|.J.....YS..x.^|.|
0e6be030  ae 13 5f 1a 79 e6 a1 33  53 ef 7f e3 07 ef 6e 38 
|.._.y..3S.....n8|
0ee9e130  78 2a b9 8c 17 e8 0b b8  53 ef ad c8 d8 4b 3a 1d 
|x*......S....K:.|
0fe93130  f3 1f 02 69 6f f8 a9 f7  53 ef ee 2a 31 7f da f8 
|...io...S..*1...|
128c7130  45 5b 93 6e 5f 64 26 3c  53 ef 6c 1e be d0 f1 9b 
|E[.n_d&<S.l.....|
12ad4430  94 c0 9b e8 ff 6a 6e 63  53 ef 2d d3 95 f9 3a 64 
|.....jncS.-...:d|
13c76d30  ac 31 88 2b 08 ba 34 6b  53 ef 46 c3 3e 9b c5 05 
|.1.+..4kS.F.>...|
150ef030  1e 63 6d 93 c4 57 50 ac  53 ef 72 a4 62 98 9f f7 
|.cm..WP.S.r.b...|
166ec830  ef 91 29 77 c3 a6 8a 61  53 ef 86 15 8f 2d a9 38 
|..)w...aS....-.8|
16f96d30  1f 05 08 5a 64 df fa 09  53 ef b9 bc 21 85 c4 ff 
|...Zd...S...!...|
19c0b430  e4 a4 fd 26 10 90 ae 00  53 ef 3d b2 06 44 77 2a 
|...&....S.=..Dw*|
19c11d30  83 fd 70 6e 84 5e e9 42  53 ef b5 f2 34 02 e6 76 
|..pn.^.BS...4..v|
1b13d030  9c 1d 5e 72 ff 07 04 0c  53 ef 80 6f eb 9d 68 33 
|..^r....S..o..h3|
1b39f330  bf 1a 86 a0 43 fb c7 fd  53 ef 8f 55 7f 9f bf 46 
|....C...S..U...F|
1c2a9c30  a0 8d 51 34 d0 f6 21 6f  53 ef f4 5a 1f a1 03 9f 
|..Q4..!oS..Z....|
1c8d1630  e9 f3 f3 4b 3e dc 13 cc  53 ef 4b 75 a9 1e fb 74 
|...K>...S.Ku...t|
1e361d30  b8 c9 1a 14 16 c5 ba da  53 ef 22 fd 79 bd da 13 
|........S.".y...|
1f628330  22 b7 78 3b 71 94 67 38  53 ef a7 1a 5d 1c 0c 11 
|".x;q.g8S...]...|
21e76230  40 b9 f7 74 93 7a db 99  53 ef 67 fd 9a f2 38 4d 
|@..t.z..S.g...8M|
237eaf30  20 81 19 51 bc 80 2a 92  53 ef 02 c9 c0 6e a4 39  | 
..Q..*.S....n.9|
27f80030  80 91 df 4d 0d 00 22 00  53 ef 00 00 01 00 00 00 
|...M..".S.......|
2aa58330  c7 ba 01 74 ae 83 a9 5c  53 ef 33 50 73 b4 80 67 
|...t...\S.3Ps..g|
2fe5fa30  65 e8 32 40 7d 63 ce d2  53 ef e3 14 29 4b bf eb 
|e.2@}c..S...)K..|
3021a330  c8 52 86 6a c4 83 4b 4f  53 ef f0 48 27 8c e4 6b 
|.R.j..KOS..H'..k|
30bf6e30  66 64 fb 7c 6d 67 e1 ef  53 ef 9d e4 88 ab 7c 28 
|fd.|mg..S.....|(|
32d58f30  ce 79 84 2c 41 54 b7 c7  53 ef c7 c9 b8 7a a9 55 
|.y.,AT..S....z.U|
3456cc30  63 55 26 a5 8d 2d e7 b5  53 ef 6e c2 15 ea 2c de 
|cU&..-..S.n...,.|
36108630  ba 36 3b 45 89 e4 f8 dd  53 ef fc 17 8f 8c 7d f6 
|.6;E....S.....}.|
36c1fe30  36 0a 62 79 11 78 cd a3  53 ef 93 a3 ac 71 fe 2f 
|6.by.x..S....q./|
37601c30  f8 51 e8 5b 4b a9 33 df  53 ef e7 59 b7 15 d4 8e 
|.Q.[K.3.S..Y....|
37f80030  80 91 df 4d 0d 00 22 00  53 ef 00 00 01 00 00 00 
|...M..".S.......|
3a0be530  c5 2a 5a 11 34 a3 5e 3d  53 ef d2 98 85 cb 9c 60 
|.*Z.4.^=S......`|
3c7bcb30  a8 32 d6 84 10 24 a9 92  53 ef 53 ce c5 17 46 a1 
|.2...$..S.S...F.|
3e763830  d0 2e e5 e4 5d 12 50 ff  53 ef 2a 3f 54 89 66 b9 
|....].P.S.*?T.f.|
40eb1830  82 45 f5 29 50 11 40 27  53 ef 81 97 c0 d2 40 09 
|.E.)P.@'S.....@.|
42d6bb30  9f 97 47 d4 27 d1 0a 5b  53 ef 5d 98 12 20 2e 79 
|..G.'..[S.].. .y|
44dc8b30  50 57 55 89 43 04 e8 95  53 ef ff 8b 43 04 8d 14 
|PWU.C...S...C...|
4af7d430  54 67 6d 65 f2 54 4c e4  53 ef 42 8a 57 73 f2 7b 
|Tgme.TL.S.B.Ws.{|
4b49b130  f4 d6 92 45 61 fa f4 c5  53 ef be 9d 89 f5 cd 99 
|...Ea...S.......|
4b4d6230  ae 93 05 cc 3c 7d fe 6e  53 ef 66 10 ff 30 b9 24 
|....<}.nS.f..0.$|
4bd02830  35 5c 44 a7 4e 15 b5 87  53 ef 7b 3f 9b 75 6d ab 
|5\D.N...S.{?.um.|
4cf5ad30  08 fc 34 b1 00 1e e0 af  53 ef 8b 7b 18 99 73 1a 
|..4.....S..{..s.|
4d31e130  e8 d9 ef 6a a2 ba d6 d5  53 ef f5 92 f2 55 4c ca 
|...j....S....UL.|
4e887930  7b a0 ce fa fe 6d ad a1  53 ef 3a b5 5f f0 07 da 
|{....m..S.:._...|
50687b30  7b 98 1c d3 49 59 e4 ca  53 ef 46 69 0b 3e ee c3 
|{...IY..S.Fi.>..|
5083e630  53 8c 5e 60 0e 93 e3 94  53 ef d6 e0 85 06 26 d1 
|S.^`....S.....&.|
50dd0b30  f4 8d 43 b9 91 74 a6 de  53 ef c9 a7 d6 79 d6 e5 
|..C..t..S....y..|
50efa530  2d af 75 74 96 f0 29 f7  53 ef 5f 40 94 9d d7 31 
|-.ut..).S._@...1|
519ed830  4b 5b 76 50 fb 5a e7 f8  53 ef 79 ad 17 0c 41 d7 
|K[vP.Z..S.y...A.|
52968e30  16 27 4a af 10 70 b9 a7  53 ef fc 76 4d e1 a5 d3 
|.'J..p..S..vM...|
53df7430  4a 34 16 33 78 40 16 07  53 ef 87 80 14 ad 03 be 
|J4.3x@..S.......|
545b3e30  38 d4 7c f7 52 62 0d a4  53 ef 9e 0d f4 f8 3e 4c 
|8.|.Rb..S.....>L|
550dda30  4e 80 ee c9 75 5a bd 8d  53 ef c5 1d 9e 51 ad 70 
|N...uZ..S....Q.p|
55afb530  a9 de 66 d9 a6 ee 35 a8  53 ef 36 61 69 c1 6a c4 
|..f...5.S.6ai.j.|
55baf430  ad 20 92 a6 f7 f1 fb 43  53 ef 10 03 07 9e ba 38  |. 
.....CS......8|
55ca7c30  9c 00 6c 6a 69 29 ae cc  53 ef a0 6e 2e 2c 48 67 
|..lji)..S..n.,Hg|
56203730  6d ef f0 ae 12 61 9e 7f  53 ef e6 cf 79 a9 e6 a2 
|m....a..S...y...|
567d3030  37 83 4d f6 fa 79 2d f1  53 ef b4 19 dc 31 41 ee 
|7.M..y-.S....1A.|
5939bf30  91 7f f7 ed 44 4b f1 f9  53 ef fc f3 44 fd 7f 65 
|....DK..S...D..e|
5a600030  39 3b 64 91 2a 62 4f 62  53 ef 54 e2 0b 9f 77 04 
|9;d.*bObS.T...w.|
5bffaa30  54 6d 0f 30 16 20 71 88  53 ef 72 2a 89 bb e9 d8  |Tm.0. 
q.S.r*....|
5cd40e30  60 dc 99 6c d2 f7 4e da  53 ef 27 7f 15 e1 6a 6b 
|`..l..N.S.'...jk|
5ce11c30  5d 97 d8 49 a9 c8 ea bc  53 ef c1 ac b2 5c 3c 58 
|]..I....S....\<X|
5d8f6d30  c3 bc 16 0b 85 f8 89 37  53 ef 60 be 22 23 95 01 
|.......7S.`."#..|
5e3b9630  4b 49 18 a9 32 b5 c7 2b  53 ef 0e 43 e3 d3 7f 1c 
|KI..2..+S..C....|
5e950430  f2 f7 b9 3b 9c 1f 8b 69  53 ef af d6 c2 31 6b 0e 
|...;...iS....1k.|
5ea3e030  66 2e f7 40 61 41 50 de  53 ef 56 01 9a 21 a0 33 
|f..@aAP.S.V..!.3|
5f638d30  42 b1 ec 8a cc 57 8b c9  53 ef df 41 d5 9d 9d 6d 
|B....W..S..A...m|
6a241830  16 e8 c4 39 bb 50 96 f8  53 ef 60 62 90 e6 4e b3 
|...9.P..S.`b..N.|
6a249f30  17 d6 77 d3 62 b2 c3 83  53 ef b4 9d ff d7 07 15 
|..w.b...S.......|
6a8e6230  3e 78 55 da 76 e1 55 51  53 ef 2b 21 79 fc 75 cd 
|>xU.v.UQS.+!y.u.|
6b166730  07 db 8d 18 b8 69 cc 22  53 ef 2b 4f dd df b8 56 
|.....i."S.+O...V|
6b82f830  61 45 5f 5a bf cb 25 10  53 ef b9 e3 11 2a e4 a5 
|aE_Z..%.S....*..|
6c3f3d30  d3 a4 cc a5 fd bd d4 dd  53 ef 5e eb 26 ff 39 6e 
|........S.^.&.9n|
6cd42f30  4f c5 1f 2b 13 3a 19 b8  53 ef 67 59 70 1b 11 65 
|O..+.:..S.gYp..e|
6d40f130  76 99 1f 14 1a 8a 33 58  53 ef c0 3b de 8e 97 0f 
|v.....3XS..;....|
6d6cbb30  82 6a 37 4b 34 59 64 1c  53 ef 64 c2 29 88 50 31 
|.j7K4Yd.S.d.).P1|
6e606730  dc 4d 43 71 ff 2b 71 68  53 ef 9d 41 16 0f 91 10 
|.MCq.+qhS..A....|
6e909e30  96 0c 7c be 2a f3 a4 e9  53 ef fc 00 20 2f 32 32 |..|.*...S... 
/22|
72d0df30  1b b5 d0 79 e3 33 76 22  53 ef 4e d9 14 3b 17 93 
|...y.3v"S.N..;..|
74a44b30  03 87 a4 b2 3a 73 22 03  53 ef 54 6b 4d ed 9d a8 
|....:s".S.TkM...|
7582bc30  2d 50 cb 49 10 c8 c8 4e  53 ef 04 79 4a 8a d1 9c 
|-P.I...NS..yJ...|
7599fa30  1a 25 29 c3 32 6c 30 55  53 ef 99 cf 29 20 e5 81 
|.%).2l0US...) ..|
76a0ed30  86 4f 8d 18 da fc af f1  53 ef c3 c1 93 c4 a5 fa 
|.O......S.......|
76b63f30  e1 9d e8 e4 0b 1e 14 80  53 ef 11 99 e7 58 37 b9 
|........S....X7.|
79eec330  a2 5e 93 f9 db f3 f1 85  53 ef bf fd 36 13 b1

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 19:29                                                                     ` Stone
@ 2013-02-21 19:45                                                                       ` Phil Turmel
  2013-02-21 19:46                                                                       ` Stone
  1 sibling, 0 replies; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 19:45 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/21/2013 02:29 PM, Stone wrote:

> ok i let the hexdump running for 30-60 min and then i check mehr
> chunk. if i can open the luks i should check again the superblocks? 
> fsck.ext4 -n -b <superblock> /dev/mapper/md2_nas ?
> 
> but i see there is a result from hex:

[trim /]

None of those look like real superblocks.  Here's the beginning of that
report for a simple fs here:

00000430  1d 22 26 51 3b 00 ff ff  53 ef 01 00 01 00 00 00  |."&Q;...S.......|
0371f730  98 73 ed 8f 60 6a f0 5f  53 ef 43 bd 58 f0 b0 11  |.s..`j._S.C.X...|
0377dc30  24 18 c0 32 ef f2 c0 4a  53 ef ac 90 d1 30 9c 01  |$..2...JS....0..|
03951c30  26 0d 5f ef 3a a9 10 07  53 ef aa 87 3c 17 d1 32  |&._.:...S...<..2|
03a7ec30  77 6b fb 23 cf 1a d6 7f  53 ef ce 73 70 b4 8c ce  |wk.#....S..sp...|
05816930  22 00 f6 22 e8 a7 89 49  53 ef 56 00 e1 8d 5c b7  |".."...IS.V...\.|
06265630  e9 c3 a0 10 29 cc 70 b2  53 ef 8c 07 0d aa 30 90  |....).p.S.....0.|
066dae30  c1 c0 aa 32 6c 56 33 43  53 ef d4 bd 5f 03 fc 06  |...2lV3CS..._...|
06ee0730  88 8a 99 09 0e e8 af ba  53 ef db 77 83 62 6e a0  |........S..w.bn.|
07842630  c6 ff d7 79 f8 5d 20 87  53 ef cf df 67 90 91 19  |...y.] .S...g...|
08000030  42 5b f8 4f 00 00 ff ff  53 ef 00 00 01 00 00 00  |B[.O....S.......|
08826630  e2 ac 97 4a 7e 2e 17 26  53 ef 9f 59 22 5d 3a 28  |...J~..&S..Y"]:(|
0d2d6330  95 f6 5e c6 7c 1d 04 8e  53 ef d7 69 e1 0a d3 73  |..^.|...S..i...s|
0e077d30  c7 18 fa 82 4c e3 9f 78  53 ef 57 49 fb 1b 64 f9  |....L..xS.WI..d.|
0f213830  6e 1d 0b c1 85 d4 07 e5  53 ef f1 2c d7 2e b1 19  |n.......S..,....|
11ff3230  4c 22 10 55 c5 95 fa 7d  53 ef 57 22 6a 25 46 3d  |L".U...}S.W"j%F=|

The "candidates" at 00000430 and 08000030 are really part of a superblock.
The others are not.
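
A quick way to turn a real candidate into the number for "fsck -b" (a sketch,
assuming the 4096-byte blocks your mke2fs -n output reported):

# the 53 ef magic sits 0x38 bytes into a superblock, so a real backup's
# hexdump line offset is (block * 4096) + 0x30
offset=0x08000030
echo $(( (offset - 0x30) / 4096 ))    # prints 32768, a known backup location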

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 19:29                                                                     ` Stone
  2013-02-21 19:45                                                                       ` Phil Turmel
@ 2013-02-21 19:46                                                                       ` Stone
       [not found]                                                                         ` <51269DE0.5070905@heisl.org>
  1 sibling, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-21 19:46 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 21.02.2013 20:29, Stone wrote:
> Am 21.02.2013 20:24, schrieb Phil Turmel:
>> On 02/21/2013 02:12 PM, Stone wrote:
>>
>>>>> ok. with --chunk=64 i cannot open the luks.
>>>>> in witch steps (chunk) should i continue?
>>>> That is a big surprise.  The luks signature should not move with chunk
>>>> size.  Please use "mdadm -E /dev/sdc1" to recheck your data offset.
>>>>
>>>> If that wasn't it, please show the hexdump for the entire luks
>>>> signature.  I'd like to see its payload offset.
>>>>
>>>> Also, if you go back to --chunk=512, open the luks, you could run the
>>>> following command to find possible superblock locations:
>>>>
>>>> hexdump -C /dev/mapper/md2_nas |egrep '^[0-9a-f]+30  .+  53 ef' 
>>>> >sb.lst
>>>>
>>>> (May take a long time to read the whole array)
>>>>
>>>> Phil
>>> with --chunk=512 i can open the luks but i cannot found a good 
>>> superblock.
>>> yes i can run the hexdump but i think this runs 8 hours or longer.
>>> start the hexdump?
>>> can i try more today?
>> Run the hexdump for half an hour or so.  If it doesn't find some
>> candidates in that timeframe, it probably won't.
>>
>> I've never crashed a luks partition like this, so I'm feeling around a
>> bit.  You should understand that luks normally uses "cipher block
>> chaining" salted with the sector number.  If you get the blocks out of
>> order (wrong chunk size or offset or layout), those sectors won't
>> decrypt correctly.  luksOpen won't detect this.
>>
>> You may have to try many chunk sizes, verifying the data offset every
>> time, trying to find the one that will work.
>>
>> Phil
> ok i let the hexdump running for 30-60 min and then i check mehr chunk.
> if i can open the luks i should check again the superblocks?
> fsck.ext4 -n -b <superblock> /dev/mapper/md2_nas ?
>
> but i see there is a result from hex:
>
[trim /]


more results for you:
cat sb.lst
011e0830  17 0d bb 6e b2 37 9f f8  53 ef 5b 62 6d ab 0f b8 
|...n.7..S.[bm...|
02a1c830  91 31 32 1a 35 c9 96 ab  53 ef 02 93 05 f2 b7 65 
|.12.5...S......e|
03c48e30  dc 46 07 a9 2d ac 96 36  53 ef 61 48 d1 c7 63 05 
|.F..-..6S.aH..c.|
04c28830  7f 36 db 0a 5a 65 6c 78  53 ef 3a 31 41 83 da c2 
|.6..ZelxS.:1A...|
04c60830  9f 32 a6 e1 1a cc ef dc  53 ef 59 bd 51 ac d0 01 
|.2......S.Y.Q...|
055be030  4b ee f1 d0 8e 36 15 67  53 ef 45 75 a9 cd 3c b5 
|K....6.gS.Eu..<.|
058aa530  82 91 1f 13 6f fa 60 2f  53 ef 1a 68 80 bc a5 0c 
|....o.`/S..h....|
05def830  db fc 8d a1 f3 49 c9 a6  53 ef cf 03 f9 e3 18 00 
|.....I..S.......|
06016630  5c 9b 31 ed 40 74 ad a5  53 ef c3 a8 b5 74 e2 25 
|\.1.@t..S....t.%|
060caf30  b5 aa cd 06 57 d3 22 6c  53 ef 04 54 d6 2a 74 3f 
|....W."lS..T.*t?|
071f0530  cb 6b 07 74 60 37 e9 34  53 ef ba fa cf 2d 69 58 
|.k.t`7.4S....-iX|
087f8c30  95 57 29 2c ca d8 02 0b  53 ef 05 c4 44 17 50 1c 
|.W),....S...D.P.|
0977ac30  32 61 7d 49 fc dc 61 a0  53 ef 53 95 96 88 25 65 
|2a}I..a.S.S...%e|
0be5a330  3f 05 57 2e 8e fd 55 44  53 ef 20 0f f1 a0 5b a0 |?.W...UDS. 
...[.|
0c5de030  65 2e 3a 18 44 b3 65 37  53 ef 24 f2 96 f9 99 dc 
|e.:.D.e7S.$.....|
0d3b8230  2f eb a0 08 d3 3e 32 70  53 ef 3b 5a 0c 7f c3 51 
|/....>2pS.;Z...Q|
0d46b930  d6 e2 c5 23 61 8c b5 26  53 ef 71 3f f3 36 df f6 
|...#a..&S.q?.6..|
0da68330  2a f2 24 fd df 4d 10 71  53 ef 8c b7 da fe 28 b3 
|*.$..M.qS.....(.|
0dfb9730  cc 4a a3 7f 99 d0 9c 59  53 ef f6 78 c4 5e 7c fd 
|.J.....YS..x.^|.|
0e6be030  ae 13 5f 1a 79 e6 a1 33  53 ef 7f e3 07 ef 6e 38 
|.._.y..3S.....n8|
0ee9e130  78 2a b9 8c 17 e8 0b b8  53 ef ad c8 d8 4b 3a 1d 
|x*......S....K:.|
0fe93130  f3 1f 02 69 6f f8 a9 f7  53 ef ee 2a 31 7f da f8 
|...io...S..*1...|
128c7130  45 5b 93 6e 5f 64 26 3c  53 ef 6c 1e be d0 f1 9b 
|E[.n_d&<S.l.....|
12ad4430  94 c0 9b e8 ff 6a 6e 63  53 ef 2d d3 95 f9 3a 64 
|.....jncS.-...:d|
13c76d30  ac 31 88 2b 08 ba 34 6b  53 ef 46 c3 3e 9b c5 05 
|.1.+..4kS.F.>...|
150ef030  1e 63 6d 93 c4 57 50 ac  53 ef 72 a4 62 98 9f f7 
|.cm..WP.S.r.b...|
166ec830  ef 91 29 77 c3 a6 8a 61  53 ef 86 15 8f 2d a9 38 
|..)w...aS....-.8|
16f96d30  1f 05 08 5a 64 df fa 09  53 ef b9 bc 21 85 c4 ff 
|...Zd...S...!...|
19c0b430  e4 a4 fd 26 10 90 ae 00  53 ef 3d b2 06 44 77 2a 
|...&....S.=..Dw*|
19c11d30  83 fd 70 6e 84 5e e9 42  53 ef b5 f2 34 02 e6 76 
|..pn.^.BS...4..v|
1b13d030  9c 1d 5e 72 ff 07 04 0c  53 ef 80 6f eb 9d 68 33 
|..^r....S..o..h3|
1b39f330  bf 1a 86 a0 43 fb c7 fd  53 ef 8f 55 7f 9f bf 46 
|....C...S..U...F|
1c2a9c30  a0 8d 51 34 d0 f6 21 6f  53 ef f4 5a 1f a1 03 9f 
|..Q4..!oS..Z....|
1c8d1630  e9 f3 f3 4b 3e dc 13 cc  53 ef 4b 75 a9 1e fb 74 
|...K>...S.Ku...t|
1e361d30  b8 c9 1a 14 16 c5 ba da  53 ef 22 fd 79 bd da 13 
|........S.".y...|
1f628330  22 b7 78 3b 71 94 67 38  53 ef a7 1a 5d 1c 0c 11 
|".x;q.g8S...]...|
21e76230  40 b9 f7 74 93 7a db 99  53 ef 67 fd 9a f2 38 4d 
|@..t.z..S.g...8M|
237eaf30  20 81 19 51 bc 80 2a 92  53 ef 02 c9 c0 6e a4 39  | 
..Q..*.S....n.9|
27f80030  80 91 df 4d 0d 00 22 00  53 ef 00 00 01 00 00 00 
|...M..".S.......|
2aa58330  c7 ba 01 74 ae 83 a9 5c  53 ef 33 50 73 b4 80 67 
|...t...\S.3Ps..g|
2fe5fa30  65 e8 32 40 7d 63 ce d2  53 ef e3 14 29 4b bf eb 
|e.2@}c..S...)K..|
3021a330  c8 52 86 6a c4 83 4b 4f  53 ef f0 48 27 8c e4 6b 
|.R.j..KOS..H'..k|
30bf6e30  66 64 fb 7c 6d 67 e1 ef  53 ef 9d e4 88 ab 7c 28 
|fd.|mg..S.....|(|
32d58f30  ce 79 84 2c 41 54 b7 c7  53 ef c7 c9 b8 7a a9 55 
|.y.,AT..S....z.U|
3456cc30  63 55 26 a5 8d 2d e7 b5  53 ef 6e c2 15 ea 2c de 
|cU&..-..S.n...,.|
36108630  ba 36 3b 45 89 e4 f8 dd  53 ef fc 17 8f 8c 7d f6 
|.6;E....S.....}.|
36c1fe30  36 0a 62 79 11 78 cd a3  53 ef 93 a3 ac 71 fe 2f 
|6.by.x..S....q./|
37601c30  f8 51 e8 5b 4b a9 33 df  53 ef e7 59 b7 15 d4 8e 
|.Q.[K.3.S..Y....|
37f80030  80 91 df 4d 0d 00 22 00  53 ef 00 00 01 00 00 00 
|...M..".S.......|
3a0be530  c5 2a 5a 11 34 a3 5e 3d  53 ef d2 98 85 cb 9c 60 
|.*Z.4.^=S......`|
3c7bcb30  a8 32 d6 84 10 24 a9 92  53 ef 53 ce c5 17 46 a1 
|.2...$..S.S...F.|
3e763830  d0 2e e5 e4 5d 12 50 ff  53 ef 2a 3f 54 89 66 b9 
|....].P.S.*?T.f.|
40eb1830  82 45 f5 29 50 11 40 27  53 ef 81 97 c0 d2 40 09 
|.E.)P.@'S.....@.|
42d6bb30  9f 97 47 d4 27 d1 0a 5b  53 ef 5d 98 12 20 2e 79 
|..G.'..[S.].. .y|
44dc8b30  50 57 55 89 43 04 e8 95  53 ef ff 8b 43 04 8d 14 
|PWU.C...S...C...|
4af7d430  54 67 6d 65 f2 54 4c e4  53 ef 42 8a 57 73 f2 7b 
|Tgme.TL.S.B.Ws.{|
4b49b130  f4 d6 92 45 61 fa f4 c5  53 ef be 9d 89 f5 cd 99 
|...Ea...S.......|
4b4d6230  ae 93 05 cc 3c 7d fe 6e  53 ef 66 10 ff 30 b9 24 
|....<}.nS.f..0.$|
4bd02830  35 5c 44 a7 4e 15 b5 87  53 ef 7b 3f 9b 75 6d ab 
|5\D.N...S.{?.um.|
4cf5ad30  08 fc 34 b1 00 1e e0 af  53 ef 8b 7b 18 99 73 1a 
|..4.....S..{..s.|
4d31e130  e8 d9 ef 6a a2 ba d6 d5  53 ef f5 92 f2 55 4c ca 
|...j....S....UL.|
4e887930  7b a0 ce fa fe 6d ad a1  53 ef 3a b5 5f f0 07 da 
|{....m..S.:._...|
50687b30  7b 98 1c d3 49 59 e4 ca  53 ef 46 69 0b 3e ee c3 
|{...IY..S.Fi.>..|
5083e630  53 8c 5e 60 0e 93 e3 94  53 ef d6 e0 85 06 26 d1 
|S.^`....S.....&.|
50dd0b30  f4 8d 43 b9 91 74 a6 de  53 ef c9 a7 d6 79 d6 e5 
|..C..t..S....y..|
50efa530  2d af 75 74 96 f0 29 f7  53 ef 5f 40 94 9d d7 31 
|-.ut..).S._@...1|
519ed830  4b 5b 76 50 fb 5a e7 f8  53 ef 79 ad 17 0c 41 d7 
|K[vP.Z..S.y...A.|
52968e30  16 27 4a af 10 70 b9 a7  53 ef fc 76 4d e1 a5 d3 
|.'J..p..S..vM...|
53df7430  4a 34 16 33 78 40 16 07  53 ef 87 80 14 ad 03 be 
|J4.3x@..S.......|
545b3e30  38 d4 7c f7 52 62 0d a4  53 ef 9e 0d f4 f8 3e 4c 
|8.|.Rb..S.....>L|
550dda30  4e 80 ee c9 75 5a bd 8d  53 ef c5 1d 9e 51 ad 70 
|N...uZ..S....Q.p|
55afb530  a9 de 66 d9 a6 ee 35 a8  53 ef 36 61 69 c1 6a c4 
|..f...5.S.6ai.j.|
55baf430  ad 20 92 a6 f7 f1 fb 43  53 ef 10 03 07 9e ba 38  |. 
.....CS......8|
55ca7c30  9c 00 6c 6a 69 29 ae cc  53 ef a0 6e 2e 2c 48 67 
|..lji)..S..n.,Hg|
56203730  6d ef f0 ae 12 61 9e 7f  53 ef e6 cf 79 a9 e6 a2 
|m....a..S...y...|
567d3030  37 83 4d f6 fa 79 2d f1  53 ef b4 19 dc 31 41 ee 
|7.M..y-.S....1A.|
5939bf30  91 7f f7 ed 44 4b f1 f9  53 ef fc f3 44 fd 7f 65 
|....DK..S...D..e|
5a600030  39 3b 64 91 2a 62 4f 62  53 ef 54 e2 0b 9f 77 04 
|9;d.*bObS.T...w.|
5bffaa30  54 6d 0f 30 16 20 71 88  53 ef 72 2a 89 bb e9 d8  |Tm.0. 
q.S.r*....|
5cd40e30  60 dc 99 6c d2 f7 4e da  53 ef 27 7f 15 e1 6a 6b 
|`..l..N.S.'...jk|
5ce11c30  5d 97 d8 49 a9 c8 ea bc  53 ef c1 ac b2 5c 3c 58 
|]..I....S....\<X|
5d8f6d30  c3 bc 16 0b 85 f8 89 37  53 ef 60 be 22 23 95 01 
|.......7S.`."#..|
5e3b9630  4b 49 18 a9 32 b5 c7 2b  53 ef 0e 43 e3 d3 7f 1c 
|KI..2..+S..C....|
5e950430  f2 f7 b9 3b 9c 1f 8b 69  53 ef af d6 c2 31 6b 0e 
|...;...iS....1k.|
5ea3e030  66 2e f7 40 61 41 50 de  53 ef 56 01 9a 21 a0 33 
|f..@aAP.S.V..!.3|
5f638d30  42 b1 ec 8a cc 57 8b c9  53 ef df 41 d5 9d 9d 6d 
|B....W..S..A...m|
6a241830  16 e8 c4 39 bb 50 96 f8  53 ef 60 62 90 e6 4e b3 
|...9.P..S.`b..N.|
6a249f30  17 d6 77 d3 62 b2 c3 83  53 ef b4 9d ff d7 07 15 
|..w.b...S.......|
6a8e6230  3e 78 55 da 76 e1 55 51  53 ef 2b 21 79 fc 75 cd 
|>xU.v.UQS.+!y.u.|
6b166730  07 db 8d 18 b8 69 cc 22  53 ef 2b 4f dd df b8 56 
|.....i."S.+O...V|
6b82f830  61 45 5f 5a bf cb 25 10  53 ef b9 e3 11 2a e4 a5 
|aE_Z..%.S....*..|
6c3f3d30  d3 a4 cc a5 fd bd d4 dd  53 ef 5e eb 26 ff 39 6e 
|........S.^.&.9n|
6cd42f30  4f c5 1f 2b 13 3a 19 b8  53 ef 67 59 70 1b 11 65 
|O..+.:..S.gYp..e|
6d40f130  76 99 1f 14 1a 8a 33 58  53 ef c0 3b de 8e 97 0f 
|v.....3XS..;....|
6d6cbb30  82 6a 37 4b 34 59 64 1c  53 ef 64 c2 29 88 50 31 
|.j7K4Yd.S.d.).P1|
6e606730  dc 4d 43 71 ff 2b 71 68  53 ef 9d 41 16 0f 91 10 
|.MCq.+qhS..A....|
6e909e30  96 0c 7c be 2a f3 a4 e9  53 ef fc 00 20 2f 32 32 |..|.*...S... 
/22|
72d0df30  1b b5 d0 79 e3 33 76 22  53 ef 4e d9 14 3b 17 93 
|...y.3v"S.N..;..|
74a44b30  03 87 a4 b2 3a 73 22 03  53 ef 54 6b 4d ed 9d a8 
|....:s".S.TkM...|
7582bc30  2d 50 cb 49 10 c8 c8 4e  53 ef 04 79 4a 8a d1 9c 
|-P.I...NS..yJ...|
7599fa30  1a 25 29 c3 32 6c 30 55  53 ef 99 cf 29 20 e5 81 
|.%).2l0US...) ..|
76a0ed30  86 4f 8d 18 da fc af f1  53 ef c3 c1 93 c4 a5 fa 
|.O......S.......|
76b63f30  e1 9d e8 e4 0b 1e 14 80  53 ef 11 99 e7 58 37 b9 
|........S....X7.|
79eec330  a2 5e 93 f9 db f3 f1 85  53 ef bf fd 36 13 b1 95 
|.^......S...6...|
7abefc30  27 17 0b c0 75 49 05 4e  53 ef 04 da 53 8c f4 37 
|'...uI.NS...S..7|
7b577e30  81 e7 c1 79 bf 07 af d9  53 ef 23 ea 2d 53 cc 16 
|...y....S.#.-S..|
7c305930  62 a2 b8 3c 5d 73 e7 a7  53 ef b9 18 7f 69 ff 5a 
|b..<]s..S....i.Z|
7d8fae30  69 eb 56 75 90 f4 2b cb  53 ef 82 dc bb 78 71 a2 
|i.Vu..+.S....xq.|
7dc68f30  f0 3b ef 35 e4 0a 98 05  53 ef 36 e4 34 89 67 3e 
|.;.5....S.6.4.g>|
7e410930  37 a4 5e f6 f7 25 56 22  53 ef 10 b3 0c 4a a5 74 
|7.^..%V"S....J.t|
7ee45330  03 1c 5a ce 92 8a 2b 1a  53 ef a1 59 95 6e 21 bf 
|..Z...+.S..Y.n!.|
88374d30  f3 df 14 71 dc b8 56 8d  53 ef ff 45 32 27 18 cc 
|...q..V.S..E2'..|
8d2c6030  93 e8 83 49 db ab 1d 99  53 ef a8 d3 f5 29 3e 26 
|...I....S....)>&|
8f347130  49 83 05 84 55 83 1e 41  53 ef 4b ca 99 99 80 d8 
|I...U..AS.K.....|
8ffb7a30  8e 2a 3b f9 42 71 cb c4  53 ef 07 dd 03 bb 9f ee 
|.*;.Bq..S.......|
90766730  47 f5 39 0c 1b 65 e6 5a  53 ef c0 42 41 66 65 43 
|G.9..e.ZS..BAfeC|
91f75b30  2f 58 d0 27 71 6d c2 ae  53 ef 90 f1 94 ad 65 f9 
|/X.'qm..S.....e.|
9235ff30  9e fc 72 a4 af 56 2a 03  53 ef 9c aa 4d a1 99 a7 
|..r..V*.S...M...|
93deff30  b7 4a f2 7f 51 0e 0c 91  53 ef 46 80 18 dc 08 22 
|.J..Q...S.F...."|
943ed730  f2 ef 37 8d f6 ac 34 c0  53 ef 32 11 83 6b 97 36 
|..7...4.S.2..k.6|
94a02630  06 4f ce 49 12 23 fc 44  53 ef 84 60 e2 39 8e 40 
|.O.I.#.DS..`.9.@|
9579e330  c0 67 4d 93 05 88 03 32  53 ef eb 0a a8 3f f1 77 
|.gM....2S....?.w|
95ad5f30  81 31 73 e5 83 38 c4 fe  53 ef e7 dd 40 24 d3 bd 
|.1s..8..S...@$..|
95f39e30  7d d2 5d 06 23 a3 23 aa  53 ef 31 05 4e 68 85 5b 
|}.].#.#.S.1.Nh.[|
9742c030  6d 9c 14 d4 c5 50 d1 71  53 ef 88 e3 14 80 74 66 
|m....P.qS.....tf|
98fd2030  fb db 10 a8 59 48 23 30  53 ef 6b 9d 6c bc 71 02 
|....YH#0S.k.l.q.|
99642330  16 ad d4 36 f7 6d e4 be  53 ef 83 93 4a 95 06 69 
|...6.m..S...J..i|
99c28130  44 f8 f1 f0 82 a4 c2 3d  53 ef 19 65 0c 1f 25 b6 
|D......=S..e..%.|
9ab08730  67 ba 40 3c d8 9e 6f 97  53 ef 26 87 5c fe cd 38 
|g.@<..o.S.&.\..8|
9aeb8130  62 64 7e 4c ba ba 87 f3  53 ef 35 99 d9 d5 53 37 
|bd~L....S.5...S7|
9b4d2e30  19 ad fa 0c e9 3f fe 1f  53 ef 18 1e d9 f6 fc f1 
|.....?..S.......|
9beeba30  3e b0 a0 53 8d e5 16 c8  53 ef 7a 0e ba 44 ac 4a 
|>..S....S.z..D.J|
9c041b30  4b 27 8a bf 1a 34 34 16  53 ef 31 b5 12 61 b0 0c 
|K'...44.S.1..a..|
9d3ee730  68 ad a4 26 80 ed e5 dd  53 ef 4f 98 57 f0 25 6a 
|h..&....S.O.W.%j|
9d7e4430  5e 4c c3 75 17 cf 01 72  53 ef 72 3f 6c b2 63 2d 
|^L.u...rS.r?l.c-|
9e0dc430  f2 8b db 49 e3 b1 4c 76  53 ef ae 76 7b 36 d8 f4 
|...I..LvS..v{6..|
9efa9a30  d3 89 6a 50 d2 64 c2 9b  53 ef d1 36 d4 eb 89 0b 
|..jP.d..S..6....|
a0aeda30  62 ea 4e 94 e8 f3 3f 65  53 ef 5a e6 32 47 b9 b9  |b.N...?eS.Z.2G..|
a0d9b330  45 bb 87 5c d5 e5 b8 aa  53 ef 0f 13 fc 40 a0 37  |E..\....S....@.7|
a0dd7230  dc 66 d1 f3 69 21 c5 f0  53 ef d1 5b a1 80 6a 15  |.f..i!..S..[..j.|
a184e930  8a 2e 6a d1 18 59 62 aa  53 ef 83 fa 2e 1f f0 08  |..j..Yb.S.......|
a1a78830  6b 7f de 6c b5 ea 5b 2a  53 ef 53 53 1d 8f 02 ca  |k..l..[*S.SS....|
a20c8630  d0 e5 3e 7d ed b2 38 eb  53 ef b4 1e 9f f5 58 56  |..>}..8.S.....XV|
a2520430  9e d0 c8 13 c0 ad 57 f3  53 ef 95 cb 73 c9 bf 3c  |......W.S...s..<|
a279f230  5c 92 0b 13 37 25 c9 43  53 ef ce 12 4b 2c e6 09  |\...7%.CS...K,..|
a29b6930  fd c1 60 3f f0 bd 40 b6  53 ef af 1c a0 74 1a 93  |..`?..@.S....t..|
a2a47e30  44 b3 e8 4f 46 30 1e d7  53 ef 86 3d 87 f9 8f cb  |D..OF0..S..=....|
a31a7030  65 5b 40 9c fd 3e 87 5a  53 ef 07 49 ee 8f 76 84  |e[@..>.ZS..I..v.|
a4b94130  57 d4 4a fb a7 8d d3 6e  53 ef 1c 15 0a 96 57 7d  |W.J....nS.....W}|
a5d74730  a5 e9 a3 5c ca 06 e0 f4  53 ef ea b5 86 8e 85 81  |...\....S.......|
a5e5be30  81 33 1e e9 36 bf d3 7e  53 ef ee 33 2e de 16 e5  |.3..6..~S..3....|
a6a5f930  4e ff b4 4c 0d b8 43 26  53 ef 43 1f 9b 1b 65 b2  |N..L..C&S.C...e.|
a8240c30  b3 e1 8d 98 de 92 dd b9  53 ef 6b 66 b3 5b 7a 9f  |........S.kf.[z.|
a86e3d30  40 3c d6 fa cb 3a a1 f9  53 ef 36 d2 74 80 f0 e6  |@<...:..S.6.t...|
aa3bf130  d8 d0 91 39 64 ce 52 e8  53 ef 22 24 fa 0d 6a af  |...9d.R.S."$..j.|
aa73bd30  c3 25 aa a1 a2 49 95 95  53 ef b0 2f 28 3e 7c 73  |.%...I..S../(>|s|
ab485730  3a c0 03 be 90 b2 f1 db  53 ef a5 2e 81 81 cf 2b  |:.......S......+|
ab868830  67 05 12 2a e6 df 34 f8  53 ef 74 b2 3a 81 30 47  |g..*..4.S.t.:.0G|
abb86130  83 38 19 e3 8c a4 73 a4  53 ef b1 b1 c3 7a f4 f9  |.8....s.S....z..|
acbbc930  08 b9 77 60 dc d9 97 f4  53 ef a9 ce 0e c0 01 19  |..w`....S.......|
acfa7c30  3e b3 43 52 a9 56 e5 06  53 ef ed ee e2 ad 3d 1f  |>.CR.V..S.....=.|
b0209830  22 65 5b f4 a2 aa 24 3b  53 ef 68 bf 1a ca ca 7d  |"e[...$;S.h....}|
b0a47930  a7 66 91 4b b9 7e 1a 7a  53 ef 85 dd d8 2a f3 a6  |.f.K.~.zS....*..|
b34b7d30  f9 62 8a f8 21 76 f5 39  53 ef 36 bd 1b dd f5 22  |.b..!v.9S.6...."|
b3909a30  c6 95 cb a9 19 95 de 47  53 ef 70 c7 e3 82 7f ee  |.......GS.p.....|
b4a87f30  3e 90 55 7d 54 92 77 d6  53 ef ac 7b 32 39 24 89  |>.U}T.w.S..{29$.|
b4b71c30  09 9f 15 fa ff e0 90 16  53 ef 1c 22 91 b4 17 6d  |........S.."...m|
b63bc330  15 47 d0 bd b8 96 95 b5  53 ef 78 74 62 15 34 fe  |.G......S.xtb.4.|
b745f730  fd e5 98 49 b2 3d dd 6d  53 ef d5 3f d5 4b ea d3  |...I.=.mS..?.K..|
b9a57030  13 b3 b2 96 92 73 22 ce  53 ef 0c 69 26 a2 2a 15  |.....s".S..i&.*.|
ba500930  a8 7b 65 24 6d 7b 81 f6  53 ef 0e ad a0 e4 1a 17  |.{e$m{..S.......|
bbe98330  ba 86 c2 77 19 48 44 00  53 ef ac 40 53 78 1e 72  |...w.HD.S..@Sx.r|
bfe6a430  d3 3e 3d 66 79 ab 35 6b  53 ef 6f b7 dc 2d b3 fc  |.>=fy.5kS.o..-..|
c1824c30  81 c3 b3 c1 75 ab 23 04  53 ef 6f 81 17 ca 5b 82  |....u.#.S.o...[.|
c325a130  10 36 f1 ad 93 fd ee 1e  53 ef 01 19 35 94 4b d5  |.6......S...5.K.|
c3705e30  78 6d d9 68 67 ab 53 57  53 ef ab ae 9b 82 97 ca  |xm.hg.SWS.......|
c4cea430  c6 3a 25 c2 3a e5 36 ac  53 ef 1b eb d4 08 eb d4  |.:%.:.6.S.......|
c5431d30  a7 89 0d 3a fd 9d 66 3a  53 ef 72 82 23 d9 8d 8b  |...:..f:S.r.#...|
c7255e30  7f 2b ec 92 a8 07 5a 2c  53 ef 97 63 c4 80 49 98  |.+....Z,S..c..I.|
c7619530  6a a5 26 33 06 34 33 e4  53 ef 3d b2 3a 4a 7f 48  |j.&3.43.S.=.:J.H|
c7bc1f30  11 5d 5f c9 fe c1 5f 17  53 ef ee c0 ba dd 3e 3d  |.]_..._.S.....>=|
c7f80030  80 91 df 4d 0d 00 22 00  53 ef 00 00 01 00 00 00  |...M..".S.......|
c8869c30  4c fa ca af fd 51 e5 69  53 ef 04 2f cb 82 71 61  |L....Q.iS../..qa|
c94c1230  77 64 99 88 b4 84 26 31  53 ef 94 fc 3f b5 a6 3f  |wd....&1S...?..?|
c97f7030  b5 be 30 f0 a1 fe 61 65  53 ef f9 a8 3b bc c8 3f  |..0...aeS...;..?|
ca4fe330  c5 6f ab 07 b5 9b d6 11  53 ef ab 07 71 6a bd a1  |.o......S...qj..|
cbd31930  c7 be bf de 65 51 be b8  53 ef 8e 85 52 48 d3 d5  |....eQ..S...RH..|
cbebb530  9c 11 35 f5 ce c5 bf 7d  53 ef 1c 1c a9 6d ac 9a  |..5....}S....m..|
cc0de830  08 59 49 8f 9c c1 e9 b7  53 ef 1c 23 d2 0d ce 32  |.YI.....S..#...2|
cc1e3430  7a 94 26 3e c3 a7 10 f5  53 ef f3 3c d6 3d 3d 91  |z.&>....S..<.==.|
cc3be130  3e a4 2c 3a ad 8f 5a f7  53 ef c0 d7 9b 99 8f 8f  |>.,:..Z.S.......|
cc4db430  3b fb a9 d6 b8 63 a7 5d  53 ef 31 92 de 66 ba a0  |;....c.]S.1..f..|
cc4f6c30  a4 c2 df 3e f2 be 93 73  53 ef d9 31 b8 a6 06 8c  |...>...sS..1....|
ccf28130  20 57 6f 7e bd fc 98 49  53 ef ef a0 dd be 29 0f  | Wo~...IS.....).|
d2a85630  af f3 73 10 32 6c f0 80  53 ef f7 1c 29 50 10 54  |..s.2l..S...)P.T|
d39ccb30  b5 4f 92 3b fd 5c c0 2b  53 ef 66 bd c1 74 54 ba  |.O.;.\.+S.f..tT.|
d4caaa30  b1 e6 0a 05 00 28 b2 29  53 ef 96 de 96 7d 91 a2  |.....(.)S....}..|
d626ad30  68 9d 34 9d 36 ad d7 89  53 ef 4a 55 e6 98 e3 7c  |h.4.6...S.JU...||
d83b5830  50 22 f9 de d8 70 46 3b  53 ef 19 ff ad 1e 54 5c  |P"...pF;S.....T\|
d8f74030  8b 03 45 51 25 42 81 8f  53 ef 3c 6d fb c3 62 3e  |..EQ%B..S.<m..b>|
db58ab30  6c 72 61 ae 1d 1a be 66  53 ef a2 3e 48 c8 13 5e  |lra....fS..>H..^|
dc59b030  7e 13 9e 8f 4c 3a f7 3b  53 ef f2 f6 6c 2f fb 08  |~...L:.;S...l/..|
dfc68330  78 74 25 46 c9 e5 79 46  53 ef bc 62 f9 71 75 ea  |xt%F..yFS..b.qu.|
dfe9c530  77 f7 0f f8 cc 40 e7 22  53 ef a3 a9 92 71 c4 1c  |w....@."S....q..|
e0644b30  b1 56 a8 48 36 09 f3 73  53 ef f9 29 f2 05 f6 fb  |.V.H6..sS..)....|
e16f7130  be 95 6d e6 52 ef 06 9a  53 ef 06 5c f9 9c fc 41  |..m.R...S..\...A|
e22c3530  66 14 f3 b2 43 88 be 61  53 ef f6 14 10 ec 6c d2  |f...C..aS.....l.|
e2939830  a0 00 4b 86 9e a8 3f

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 16:32                                   ` Stone
  2013-02-21 16:41                                     ` Phil Turmel
@ 2013-02-21 22:20                                     ` Chris Murphy
  2013-02-21 22:26                                       ` Phil Turmel
  1 sibling, 1 reply; 79+ messages in thread
From: Chris Murphy @ 2013-02-21 22:20 UTC (permalink / raw)
  To: Stone; +Cc: Phil Turmel, linux-raid


On Feb 21, 2013, at 9:32 AM, Stone <stone@heisl.org> wrote:
>>> 
> This is my ouput from the badblocks
> 1073006628
> 1073006629
> 1073006630
> 1073006631
> 1073006632
> 1073006633
> 1073006634
> 1073006635
> 1073006636
> 1073006637
> 1073006638
> 1073006639

It's consistently reporting 12. This can't be LBA values if it's an AF disk, or you'd get multiples of 8 (8*512=4096). I actually don't recall off hand how to convert from ext block numbers to LBA. But dd wants LBA.

I haven't read this whole thread; is there a backup? I did see more than one disk with non-zero current pending sector values. So in my opinion, I'd ATA Secure Erase all of these drives and start from scratch if you have a backup. Actually, I'd ATA Secure Erase them and then do an extended SMART test to confirm. Or, if they're under warranty, RMA them. You shouldn't have so many bad sectors on a disk.

If you keep them, you need to keep an eye on them with an extended smart test every week or two. It sounds like there may be loose material bouncing around in the disks causing these bad sectors, and if that's true, more will go bad. And if more do show up in an extended smart test, and the drives are under warranty, I'd bail out on them. Get them replaced.
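
(For reference, a rough sketch of the commands involved, with placeholder
device names; an ATA Secure Erase destroys everything on the drive, so only
run it once the data is safe:)

    smartctl -t long /dev/sdX        # start an extended self-test
    smartctl -a /dev/sdX             # later: check the result plus the
                                     # Current_Pending_Sector / Reallocated counts
    hdparm --user-master u --security-set-pass pw /dev/sdX
    hdparm --user-master u --security-erase pw /dev/sdX   # WIPES the disk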

Chris Murphy

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 22:20                                     ` Chris Murphy
@ 2013-02-21 22:26                                       ` Phil Turmel
  0 siblings, 0 replies; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 22:26 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Stone, linux-raid

On 02/21/2013 05:20 PM, Chris Murphy wrote:
> 
> On Feb 21, 2013, at 9:32 AM, Stone <stone@heisl.org> wrote:
>>>> 
>> This is my ouput from the badblocks
>> 1073006628
>> 1073006629
>> 1073006630
>> 1073006631
>> 1073006632
>> 1073006633
>> 1073006634
>> 1073006635
>> 1073006636
>> 1073006637
>> 1073006638
>> 1073006639

> It's consistently reporting 12. This can't be LBA values if it's an 
> AF disk, or you'd get multiples of 8 (8*512=4096). I actually don't 
> recall off hand how to convert from ext block numbers to LBA. But dd
>  wants LBA.

These are default 1k block addresses returned by badblocks.  dd does not
want LBA.  It wants block addresses, with a default block size of 512.
If you specify a different block size with bs=, you must use that scale
for seek= or skip= or count=.
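
As a small illustration of that scaling, using the block numbers above and a
placeholder device (untested): the same region can be addressed with bs=1024
or bs=512 as long as seek= and count= are scaled to match.

    dd if=/dev/zero of=/dev/sdX1 bs=1024 seek=1073006628 count=12
    dd if=/dev/zero of=/dev/sdX1 bs=512  seek=2146013256 count=24   # same bytes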

> I haven't read this whole thread, is there a backup? I did see more 
> than one disk with non-zero current pending sector values. So in my 
> opinion, I'd ATA secure erase all of these drives and start from 
> scratch if you have a backup. Actually, I'd ATA Secure Erase them, 
> and then do an extended SMART test to confirm. Or if they're under 
> warranty, RMA them. You shouldn't have so much bad sectors on a 
> disk.

No backup.

> If you keep them, you need to keep an eye on them with an extended 
> smart test every week or two. It sounds like there may be loose 
> material bouncing around in the disks causing these bad sectors, and 
> if that's true, more will go bad. And if more do show up in an 
> extended smart test, and the drives are under warranty, I'd bail out 
> on them. Get them replaced.

Read the whole thread.  A followup smartctl report will be useful, but
Stone's hands are full at the moment.

Phil


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 16:41                                     ` Phil Turmel
  2013-02-21 16:43                                       ` Stone
@ 2013-02-21 22:29                                       ` Chris Murphy
  2013-02-21 22:34                                         ` Phil Turmel
  1 sibling, 1 reply; 79+ messages in thread
From: Chris Murphy @ 2013-02-21 22:29 UTC (permalink / raw)
  To: Phil Turmel; +Cc: Stone, linux-raid


On Feb 21, 2013, at 9:41 AM, Phil Turmel <philip@turmel.org> wrote:

>> This is my ouput from the badblocks
> 
>> 1073006628
>> 1073006629
>> 1073006630
>> 1073006631
>> 1073006632
>> 1073006633
>> 1073006634
>> 1073006635
>> 1073006636
>> 1073006637
>> 1073006638
>> 1073006639
> 
> These 12 are together.  (Three real sectors.)

> dd if=/dev/zero bs=1024 count=12 seek=1073006628 of=/dev/sdc1

Oh I get it. Yeah that works. In this case seek= isn't LBA but is a multiple of 2, same as the block size. NEVERMIND! Ignore the man behind the curtain….

Nevertheless if there's a backup, I'd nuke these drives with ATA Secure Erase. And definitely schedule regular smart tests to keep an eye on them. It's a lot of bad sectors on one disk.

Chris Murphy

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-21 22:29                                       ` Chris Murphy
@ 2013-02-21 22:34                                         ` Phil Turmel
  0 siblings, 0 replies; 79+ messages in thread
From: Phil Turmel @ 2013-02-21 22:34 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Stone, linux-raid

On 02/21/2013 05:29 PM, Chris Murphy wrote:

> Nevertheless if there's a backup, I'd nuke these drives with ATA
> Secure Erase. And definitely schedule regular smart tests to keep an
> eye on them. It's a lot of bad sectors on one disk.

It's five URE flaws spanning two years of use without scrubbing.  Hardly
surprising or unusual.  If the *relocations* start climbing, then
there's a problem.
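
For the lurkers, a rough sketch of how an array can be scrubbed and the
counters watched afterwards (array and device names are placeholders):

    echo check > /sys/block/md2/md/sync_action   # start a scrub
    cat /proc/mdstat                             # watch progress
    cat /sys/block/md2/md/mismatch_cnt           # inspect after it finishes
    smartctl -A /dev/sdX | egrep 'Reallocated|Pending'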

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
       [not found]                                                                         ` <51269DE0.5070905@heisl.org>
@ 2013-02-22 10:31                                                                           ` stone
  2013-02-22 13:53                                                                             ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: stone @ 2013-02-22 10:31 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

i have mdadm tested with this --chunk sizes: 64, 128 (no luks opned) 256,
> with 256 i get here this error ->
> fsck.ext4 -n -b 644972544 /dev/mapper/md2_nas
> e2fsck 1.41.14 (22-Dec-2010)
> Superblock has an invalid journal (inode 8).
> Clear? no
>
> fsck.ext4: Illegal inode number while checking ext3 journal for 
> /dev/mapper/md2_nas
>
> is this good? ;-)
>
> in the attachments is my hexdump for you...
>
>
Working from the live CD is very slow.
I will pull out my two system drives, put in a new one and install an old
system on it (Ubuntu 11.04; I think that is the system I created the raid
with the first time).

Do you have any new information from the hexdump, or anything else to try
out, to get the raid and the LUKS running again?

thank you.

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-22 10:31                                                                           ` stone
@ 2013-02-22 13:53                                                                             ` Phil Turmel
  2013-02-22 14:58                                                                               ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-22 13:53 UTC (permalink / raw)
  To: stone; +Cc: linux-raid

On 02/22/2013 05:31 AM, stone@heisl.org wrote:
> to work on the live cd is very slow.
> i will kick out my two system drives and take one new and install a old
> system (ubuntu 11.04, i think on this system i have created the first
> time the raid) to it.
> 
> do you have new infos from the hexdump or other news to try out some
> things the get the raid and the luks running?

Unfortunately, no.  The hexdump had no real superblock candidates that I
could see.  That strongly suggests that there remain some ordering
issues.  I would try chunk sizes down to 8k.  If that still doesn't
work, consider re-creating with a different drive order--it's a slim
possibility that "sdc1 sdd1 missing sdf1" isn't correct.

Meanwhile, you haven't supplied the complete hexdump of your luks
signature sector.  It may not help, but it would show the payload offset.
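
Something along these lines would capture it (a sketch; the -s offset is
wherever the LUKS magic was found on the member, and assuming a LUKS1
header, whose 4-byte big-endian payload-offset field sits at byte 104):

    hexdump -C -s $((0x100000)) -n 512 /dev/sdc1
    # or, once a candidate array assembles far enough for cryptsetup to see it:
    cryptsetup luksDump /dev/md2   # prints "Payload offset" in 512-byte sectors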

> thank you.

You're welcome, for what it's worth.  The encryption layer has stymied
my normal tricks for figuring out how an array was put together.

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-22 13:53                                                                             ` Phil Turmel
@ 2013-02-22 14:58                                                                               ` Stone
  2013-02-22 15:37                                                                                 ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-22 14:58 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 22.02.2013 14:53, schrieb Phil Turmel:
> On 02/22/2013 05:31 AM, stone@heisl.org wrote:
>> to work on the live cd is very slow.
>> i will kick out my two system drives and take one new and install a old
>> system (ubuntu 11.04, i think on this system i have created the first
>> time the raid) to it.
>>
>> do you have new infos from the hexdump or other news to try out some
>> things the get the raid and the luks running?
> Unfortunately, no.  The hexdump had no real superblock candidates that I
> could see.  That strongly suggests that there remain some ordering
> issues.  I would try chunk sizes down to 8k.  If that still doesn't
> work, consider re-creating with a different drive order--it's a slim
> possibility that "sdc1 sdd1 missing sdf1" isn't correct.
>
> Meanwhile, you haven't supplied the complete hexdump of your luks
> signature sector.  It may not help, but it would show the payload offset.
I have now installed the system with one system drive.
The raid devices are now: sdb1 sdc1 sdd1 (broken, not in sync) sde1

I have now tested all chunk sizes from 512k down to 8k:
512 Open Luks but no superblock
256 Open Luks but no superblock
128 No key available with this passphrase
64 No key available with this passphrase
32 No key available with this passphrase
16 No key available with this passphrase
8 No key available with this passphrase

512k and 256k working better...
next tests:
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5 
--raid-devices=4 /dev/sde1 /dev/sdb1 missing /dev/sdc1
     No Luks
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5 
--raid-devices=4 /dev/sdc1 /dev/sdb1 missing /dev/sde1
     No Luks
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5 
--raid-devices=4 /dev/sdc1 missing /dev/sdb1 /dev/sde1
     No Luks
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5 
--raid-devices=4 /dev/sdb1 /dev/sde1 /dev/sdc1 missing
     fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas
     fsck.ext4: Bad magic number in super-block while trying to open 
/dev/mapper/md2_nas
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5 
--raid-devices=4 /dev/sde1 /dev/sdc1 /dev/sdb1 missing
     No Luks
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5 
--raid-devices=4 /dev/sdc1 /dev/sde1 /dev/sdb1 missing
     No Luks
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5 
--raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sde1 missing
     fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas
     fsck.ext4: Bad magic number in super-block while trying to open 
/dev/mapper/md2_nas

Do you think I should try to mount the partition read-only? But I think
that will not work because of the damaged filesystem, right?
>> thank you.
> You're welcome, for what it's worth.  The encryption layer has stymied
> my normal tricks for figuring out how an array was put together.
>
> Phil


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-22 14:58                                                                               ` Stone
@ 2013-02-22 15:37                                                                                 ` Phil Turmel
  2013-02-22 18:17                                                                                   ` Stone
       [not found]                                                                                   ` <5127B0AB.5040108@heisl.org>
  0 siblings, 2 replies; 79+ messages in thread
From: Phil Turmel @ 2013-02-22 15:37 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/22/2013 09:58 AM, Stone wrote:
> Am 22.02.2013 14:53, schrieb Phil Turmel:
>> On 02/22/2013 05:31 AM, stone@heisl.org wrote:
>>> to work on the live cd is very slow.
>>> i will kick out my two system drives and take one new and install a old
>>> system (ubuntu 11.04, i think on this system i have created the first
>>> time the raid) to it.
>>>
>>> do you have new infos from the hexdump or other news to try out some
>>> things the get the raid and the luks running?
>> Unfortunately, no.  The hexdump had no real superblock candidates that I
>> could see.  That strongly suggests that there remain some ordering
>> issues.  I would try chunk sizes down to 8k.  If that still doesn't
>> work, consider re-creating with a different drive order--it's a slim
>> possibility that "sdc1 sdd1 missing sdf1" isn't correct.
>>
>> Meanwhile, you haven't supplied the complete hexdump of your luks
>> signature sector.  It may not help, but it would show the payload offset.

What about this part?

> i have installed the system now with one system drive.
> the raid devices are now: sdb1 sdc1 sdd1(brocken not sync) sde1

Ok.

> i have now tested all chunk's from 512k to 8k
> 512 Open Luks but no superblock
> 256 Open Luks but no superblock
> 128 No key available with this passphrase
> 64 No key available with this passphrase
> 32 No key available with this passphrase
> 16 No key available with this passphrase
> 8 No key available with this passphrase

Ok, but on the smaller chunk sizes, the device order could impact
interpretation of the key material.  You should repeat the small chunk
tests with the drive order variations below.

Make a grid with chunk size on one axis, and drive order on the other
axis.  Mark each combination with yes or no if it can open luks.  If it
can, save the output of "fsck -n" in a file.  This would be a good thing
to script.

After the script is done, look at all the saved files to see if any look
like possible solutions.
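
A minimal sketch of such a script (untested; pass.txt, orders.txt and the
results directory are made-up names, and --assume-clean plus the missing
member keeps it from rewriting any data):

    #!/bin/bash
    mkdir -p results
    for chunk in 512 256 128 64 32 16 8; do
      while read -r order; do
        mdadm --stop /dev/md2 2>/dev/null
        mdadm --create /dev/md2 --assume-clean --run --chunk=$chunk \
              --level=5 --raid-devices=4 $order
        tag="${chunk}k_$(echo "$order" | tr ' /' '__')"
        if printf '%s' "$(cat pass.txt)" | \
           cryptsetup luksOpen /dev/md2 md2_nas --key-file=-; then
          fsck.ext4 -n /dev/mapper/md2_nas > "results/$tag.log" 2>&1
          cryptsetup luksClose md2_nas
        else
          echo "no luks" > "results/$tag.log"
        fi
      done < orders.txt   # one drive order per line,
                          # e.g. "/dev/sdb1 /dev/sdc1 missing /dev/sde1"
    done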

> 512k and 256k working better...
> next tests:
> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
> --raid-devices=4 /dev/sde1 /dev/sdb1 missing /dev/sdc1
>     No Luks
> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
> --raid-devices=4 /dev/sdc1 /dev/sdb1 missing /dev/sde1
>     No Luks
> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
> --raid-devices=4 /dev/sdc1 missing /dev/sdb1 /dev/sde1
>     No Luks
> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
> --raid-devices=4 /dev/sdb1 /dev/sde1 /dev/sdc1 missing
>     fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas
>     fsck.ext4: Bad magic number in super-block while trying to open
> /dev/mapper/md2_nas
> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
> --raid-devices=4 /dev/sde1 /dev/sdc1 /dev/sdb1 missing
>     No Luks
> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
> --raid-devices=4 /dev/sdc1 /dev/sde1 /dev/sdb1 missing
>     No Luks
> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
> --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sde1 missing
>     fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas
>     fsck.ext4: Bad magic number in super-block while trying to open
> /dev/mapper/md2_nas
> 
> do you think that i should try do mount the partion as RO? but i think
> this is not working because the damaged filesystem. right?

Do *not* mount at all.  Even a read-only mount isn't really
read-only--it will try to play back the journal, and will try to write
to the superblocks.
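
As extra insurance against accidental writes while experimenting, the array
itself can be forced read-only (just a precautionary sketch, not a
substitute for the advice above):

    mdadm --readonly /dev/md2     # mark the md array read-only
    blockdev --setro /dev/md2     # and/or refuse writes at the block layer
    blockdev --setrw /dev/md2     # undo later, when writes are wanted again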

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-22 15:37                                                                                 ` Phil Turmel
@ 2013-02-22 18:17                                                                                   ` Stone
  2013-02-22 18:23                                                                                     ` Phil Turmel
  2013-02-22 20:43                                                                                     ` Stone
       [not found]                                                                                   ` <5127B0AB.5040108@heisl.org>
  1 sibling, 2 replies; 79+ messages in thread
From: Stone @ 2013-02-22 18:17 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 22.02.2013 16:37, schrieb Phil Turmel:
> On 02/22/2013 09:58 AM, Stone wrote:
>> Am 22.02.2013 14:53, schrieb Phil Turmel:
>>> On 02/22/2013 05:31 AM, stone@heisl.org wrote:
>>>> to work on the live cd is very slow.
>>>> i will kick out my two system drives and take one new and install a old
>>>> system (ubuntu 11.04, i think on this system i have created the first
>>>> time the raid) to it.
>>>>
>>>> do you have new infos from the hexdump or other news to try out some
>>>> things the get the raid and the luks running?
>>> Unfortunately, no.  The hexdump had no real superblock candidates that I
>>> could see.  That strongly suggests that there remain some ordering
>>> issues.  I would try chunk sizes down to 8k.  If that still doesn't
>>> work, consider re-creating with a different drive order--it's a slim
>>> possibility that "sdc1 sdd1 missing sdf1" isn't correct.
>>>
>>> Meanwhile, you haven't supplied the complete hexdump of your luks
>>> signature sector.  It may not help, but it would show the payload offset.
> What about this part?
>
>> i have installed the system now with one system drive.
>> the raid devices are now: sdb1 sdc1 sdd1(brocken not sync) sde1
> Ok.
>
>> i have now tested all chunk's from 512k to 8k
>> 512 Open Luks but no superblock
>> 256 Open Luks but no superblock
>> 128 No key available with this passphrase
>> 64 No key available with this passphrase
>> 32 No key available with this passphrase
>> 16 No key available with this passphrase
>> 8 No key available with this passphrase
> Ok, but on the smaller chunk sizes, the device order could impact
> interpretation of the key material.  You should repeat the small chunk
> tests with the drive order variations below.
>
> Make a grid with chunk size on one axis, and drive order on the other
> axis.  Mark each combination with yes or no if it can open luks.  If it
> can, save the output of "fsck -n" in a file.  This would be a good thing
> to script.
>
> After the script is done, look at all the saved files to see if any look
> like possible solutions.
I will write a script and send my results back, but do you really want a
fsck -n /dev/mapper/md2_nas?
The output is very long, like this:
Illegal block number passed to ext2fs_mark_block_bitmap #2667529020 for 
in-use block map
Illegal block number passed to ext2fs_test_block_bitmap #2667529021 for 
in-use block map
Illegal block number passed to ext2fs_mark_block_bitmap #2667529021 for 
in-use block map
Illegal block number passed to ext2fs_test_block_bitmap #2667529022 for 
in-use block map
Illegal block number passed to ext2fs_mark_block_bitmap #2667529022 for 
in-use block map
Illegal block number passed to ext2fs_test_block_bitmap #2667529023 for 
in-use block map
Illegal block number passed to ext2fs_mark_block_bitmap #2667529023 for 
in-use block map
Illegal block number passed to ext2fs_test_block_bitmap #2667529024 for 
in-use block map
Illegal block number passed to ext2fs_mark_block_bitmap #2667529024 for 
in-use block map
Illegal block number passed to ext2fs_test_block_bitmap #2667529025 for 
in-use block map
Illegal block number passed to ext2fs_mark_block_bitmap #2667529025 for 
in-use block map
>> 512k and 256k working better...
>> next tests:
>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>> --raid-devices=4 /dev/sde1 /dev/sdb1 missing /dev/sdc1
>>      No Luks
>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>> --raid-devices=4 /dev/sdc1 /dev/sdb1 missing /dev/sde1
>>      No Luks
>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>> --raid-devices=4 /dev/sdc1 missing /dev/sdb1 /dev/sde1
>>      No Luks
>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>> --raid-devices=4 /dev/sdb1 /dev/sde1 /dev/sdc1 missing
>>      fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas
>>      fsck.ext4: Bad magic number in super-block while trying to open
>> /dev/mapper/md2_nas
>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>> --raid-devices=4 /dev/sde1 /dev/sdc1 /dev/sdb1 missing
>>      No Luks
>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>> --raid-devices=4 /dev/sdc1 /dev/sde1 /dev/sdb1 missing
>>      No Luks
>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>> --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sde1 missing
>>      fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas
>>      fsck.ext4: Bad magic number in super-block while trying to open
>> /dev/mapper/md2_nas
>>
>> do you think that i should try do mount the partion as RO? but i think
>> this is not working because the damaged filesystem. right?
> Do *not* mount at all.  Even a read-only mount isn't really
> read-only--it will try to play back the journal, and will try to write
> to the superblocks.
>
> Phil


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-22 18:17                                                                                   ` Stone
@ 2013-02-22 18:23                                                                                     ` Phil Turmel
  2013-02-22 20:43                                                                                     ` Stone
  1 sibling, 0 replies; 79+ messages in thread
From: Phil Turmel @ 2013-02-22 18:23 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/22/2013 01:17 PM, Stone wrote:
> Am 22.02.2013 16:37, schrieb Phil Turmel:

>> After the script is done, look at all the saved files to see if any look
>> like possible solutions.

> i write a script and send my results back but you really wont a fsck -n
> /dev/mapper/md2_nas?
> the output i veeeery long like this:
> Illegal block number passed to ext2fs_mark_block_bitmap #2667529020 for
> in-use block map
> Illegal block number passed to ext2fs_test_block_bitmap #2667529021 for
> in-use block map
> Illegal block number passed to ext2fs_mark_block_bitmap #2667529021 for
> in-use block map
> Illegal block number passed to ext2fs_test_block_bitmap #2667529022 for
> in-use block map
> Illegal block number passed to ext2fs_mark_block_bitmap #2667529022 for
> in-use block map
> Illegal block number passed to ext2fs_test_block_bitmap #2667529023 for
> in-use block map
> Illegal block number passed to ext2fs_mark_block_bitmap #2667529023 for
> in-use block map
> Illegal block number passed to ext2fs_test_block_bitmap #2667529024 for
> in-use block map
> Illegal block number passed to ext2fs_mark_block_bitmap #2667529024 for
> in-use block map
> Illegal block number passed to ext2fs_test_block_bitmap #2667529025 for
> in-use block map
> Illegal block number passed to ext2fs_mark_block_bitmap #2667529025 for
> in-use block map

No, I don't need to see such results.  I meant for you to look at them
yourself to see if one appears to have many fewer errors than the
others.  If you find one, you can share that one with us (report file
size < 100k or so).
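
One quick way to rank the saved reports, assuming one log file per
combination as suggested above (names are placeholders):

    wc -l results/*.log | sort -n | head          # smallest reports first
    grep -c 'Illegal block' results/*.log | sort -t: -k2 -n | head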

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
       [not found]                                                                                   ` <5127B0AB.5040108@heisl.org>
@ 2013-02-22 18:30                                                                                     ` Phil Turmel
  0 siblings, 0 replies; 79+ messages in thread
From: Phil Turmel @ 2013-02-22 18:30 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

You forgot linux-raid again.

On 02/22/2013 12:53 PM, Stone wrote:

> a little question.
> i have now created my new raid on my new devices with:
> mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sdb1
> /dev/sdc1 /dev/sdd1 /dev/sde1
> mdadm --detail --scan --verbose > /etc/mdadm/mdadm.conf
> cat /etc/mdadm/mdadm.conf
> ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 spares=1
> name=bender:0 UUID=a3ff1ec9:83c2cc4b:7c5e0550:5655e4d0
>    devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
> 
> md0 is now synced
> after a reboot my md0 is now a md127

You forgot to update your initramfs after you changed mdadm.conf.

> cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md127 : active (auto-read-only) raid5 sde1[4] sdb1[0] sdd1[2] sdc1[1]
>       8788597248 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4]
> [UUUU]
> 
> unused devices: <none>
> 
>  mdadm -D /dev/md127
> /dev/md127:
>         Version : 1.2
>   Creation Time : Fri Feb 22 10:42:16 2013
>      Raid Level : raid5
>      Array Size : 8788597248 (8381.46 GiB 8999.52 GB)
>   Used Dev Size : 2929532416 (2793.82 GiB 2999.84 GB)
>    Raid Devices : 4
>   Total Devices : 4
>     Persistence : Superblock is persistent
> 
>     Update Time : Fri Feb 22 18:09:13 2013
>           State : clean
>  Active Devices : 4
> Working Devices : 4
>  Failed Devices : 0
>   Spare Devices : 0
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>            Name : bender:0  (local to host bender)
>            UUID : a3ff1ec9:83c2cc4b:7c5e0550:5655e4d0
>          Events : 28
> 
>     Number   Major   Minor   RaidDevice State
>        0       8       17        0      active sync   /dev/sdb1
>        1       8       33        1      active sync   /dev/sdc1
>        2       8       49        2      active sync   /dev/sdd1
>        4       8       65        3      active sync   /dev/sde1
> 
> WTF is going on here?
> I have never had this problem :)

Not a problem.  User error.  How raids are put together during boot
happens inside the initramfs.  Your root filesystem isn't even mounted
yet at that point, so mdadm.conf can't be read from it.

If you don't want the array assembled by the initramfs, you need two
mdadm.conf files: one inside the initramfs to disable assembly, and another
in /etc/...  to control later assembly.
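
On Ubuntu/Debian the usual follow-up after editing /etc/mdadm/mdadm.conf is
to regenerate the initramfs so the embedded copy matches (exact packaging
details may vary):

    update-initramfs -u           # current kernel
    update-initramfs -u -k all    # or all installed kernels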

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-22 18:17                                                                                   ` Stone
  2013-02-22 18:23                                                                                     ` Phil Turmel
@ 2013-02-22 20:43                                                                                     ` Stone
  2013-02-22 22:35                                                                                       ` Phil Turmel
  1 sibling, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-22 20:43 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

[-- Attachment #1: Type: text/plain, Size: 5593 bytes --]

Am 22.02.2013 19:17, schrieb Stone:
> Am 22.02.2013 16:37, schrieb Phil Turmel:
>> On 02/22/2013 09:58 AM, Stone wrote:
>>> Am 22.02.2013 14:53, schrieb Phil Turmel:
>>>> On 02/22/2013 05:31 AM, stone@heisl.org wrote:
>>>>> to work on the live cd is very slow.
>>>>> i will kick out my two system drives and take one new and install 
>>>>> a old
>>>>> system (ubuntu 11.04, i think on this system i have created the first
>>>>> time the raid) to it.
>>>>>
>>>>> do you have new infos from the hexdump or other news to try out some
>>>>> things the get the raid and the luks running?
>>>> Unfortunately, no.  The hexdump had no real superblock candidates 
>>>> that I
>>>> could see.  That strongly suggests that there remain some ordering
>>>> issues.  I would try chunk sizes down to 8k.  If that still doesn't
>>>> work, consider re-creating with a different drive order--it's a slim
>>>> possibility that "sdc1 sdd1 missing sdf1" isn't correct.
>>>>
>>>> Meanwhile, you haven't supplied the complete hexdump of your luks
>>>> signature sector.  It may not help, but it would show the payload 
>>>> offset.
>> What about this part?
Yes, I can run a hexdump over the whole device, but with which chunk size
and which device order?
>>
>>> i have installed the system now with one system drive.
>>> the raid devices are now: sdb1 sdc1 sdd1(brocken not sync) sde1
>> Ok.
>>
>>> i have now tested all chunk's from 512k to 8k
>>> 512 Open Luks but no superblock
>>> 256 Open Luks but no superblock
>>> 128 No key available with this passphrase
>>> 64 No key available with this passphrase
>>> 32 No key available with this passphrase
>>> 16 No key available with this passphrase
>>> 8 No key available with this passphrase
>> Ok, but on the smaller chunk sizes, the device order could impact
>> interpretation of the key material.  You should repeat the small chunk
>> tests with the drive order variations below.
>>
>> Make a grid with chunk size on one axis, and drive order on the other
>> axis.  Mark each combination with yes or no if it can open luks.  If it
>> can, save the output of "fsck -n" in a file.  This would be a good thing
>> to script.
>>
>> After the script is done, look at all the saved files to see if any look
>> like possible solutions.
> i write a script and send my results back but you really wont a fsck 
> -n /dev/mapper/md2_nas?
> the output i veeeery long like this:
> Illegal block number passed to ext2fs_mark_block_bitmap #2667529020 
> for in-use block map
> Illegal block number passed to ext2fs_test_block_bitmap #2667529021 
> for in-use block map
> Illegal block number passed to ext2fs_mark_block_bitmap #2667529021 
> for in-use block map
> Illegal block number passed to ext2fs_test_block_bitmap #2667529022 
> for in-use block map
> Illegal block number passed to ext2fs_mark_block_bitmap #2667529022 
> for in-use block map
> Illegal block number passed to ext2fs_test_block_bitmap #2667529023 
> for in-use block map
> Illegal block number passed to ext2fs_mark_block_bitmap #2667529023 
> for in-use block map
> Illegal block number passed to ext2fs_test_block_bitmap #2667529024 
> for in-use block map
> Illegal block number passed to ext2fs_mark_block_bitmap #2667529024 
> for in-use block map
> Illegal block number passed to ext2fs_test_block_bitmap #2667529025 
> for in-use block map
> Illegal block number passed to ext2fs_mark_block_bitmap #2667529025 
> for in-use block map
>>> 512k and 256k working better...
>>> next tests:
>>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>>> --raid-devices=4 /dev/sde1 /dev/sdb1 missing /dev/sdc1
>>>      No Luks
>>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>>> --raid-devices=4 /dev/sdc1 /dev/sdb1 missing /dev/sde1
>>>      No Luks
>>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>>> --raid-devices=4 /dev/sdc1 missing /dev/sdb1 /dev/sde1
>>>      No Luks
>>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>>> --raid-devices=4 /dev/sdb1 /dev/sde1 /dev/sdc1 missing
>>>      fsck.ext4: Invalid argument while trying to open 
>>> /dev/mapper/md2_nas
>>>      fsck.ext4: Bad magic number in super-block while trying to open
>>> /dev/mapper/md2_nas
>>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>>> --raid-devices=4 /dev/sde1 /dev/sdc1 /dev/sdb1 missing
>>>      No Luks
>>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>>> --raid-devices=4 /dev/sdc1 /dev/sde1 /dev/sdb1 missing
>>>      No Luks
>>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>>> --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sde1 missing
>>>      fsck.ext4: Invalid argument while trying to open 
>>> /dev/mapper/md2_nas
>>>      fsck.ext4: Bad magic number in super-block while trying to open
>>> /dev/mapper/md2_nas
>>>
>>> do you think that i should try do mount the partion as RO? but i think
>>> this is not working because the damaged filesystem. right?
>> Do *not* mount at all.  Even a read-only mount isn't really
>> read-only--it will try to play back the journal, and will try to write
>> to the superblocks.
>>
>> Phil
>
So, hello again.
The script has finished running and I have looked at my logs.

These are the results I have:
with chunk sizes 8, 16, 32, 64 and 128 I always get the same error: Enter
passphrase for /dev/md2: No key available with this passphrase.

The first chunk size where opening is possible at all is 256.

I am sending you some logs from 256 and 512.

At 512 I get the first results with checksum errors.
I think the right chunk size is 512?! But the filesystem is broken?!


[-- Attachment #2: results.zip --]
[-- Type: application/octet-stream, Size: 6682 bytes --]

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-22 20:43                                                                                     ` Stone
@ 2013-02-22 22:35                                                                                       ` Phil Turmel
  2013-02-22 22:42                                                                                         ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-22 22:35 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/22/2013 03:43 PM, Stone wrote:
> so hello again.
> the script was running and i looked at my logs.
> 
> this results i have
> with chunk-size 8,16,32,64,128 i have always the same error: Enter
> passphrase for /dev/md2: No key available with this passphrase.
> 
> the first time to open it is possible with chunk 256
> 
> i send you some logs from 256 and 512
> 
> at 512 i get the first results with checksum errors.
> i think the right size is 512?! but the filesystem is brocken?!

Damage near the beginning of your filesystem is expected, remember?
Lots of damage further in is not expected, once we have the right settings.

Very interesting.  Especially this with sdb-sdc-missing-sde:

> The filesystem size (according to the superblock) is 1465134336 blocks
> The physical size of the device is 1465133568 blocks
> Either the superblock or the partition table is likely to be corrupt!
> Abort? no


I think this is the right device order, and the right chunk size.  It
also best matches what we know of your original setup.  But the total
size of the array device is 768 blocks smaller than it should be.  This
should be solved.

Please recreate the array with this combination, then show:

mdadm -D /dev/md2
mdadm -E /dev/sd[bce]1
cat /proc/partitions
for x in /dev/sd[bce] ; do fdisk -l $x ; done

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-22 22:35                                                                                       ` Phil Turmel
@ 2013-02-22 22:42                                                                                         ` Stone
  2013-02-23  2:22                                                                                           ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-22 22:42 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 22.02.2013 23:35, schrieb Phil Turmel:
> On 02/22/2013 03:43 PM, Stone wrote:
>> so hello again.
>> the script was running and i looked at my logs.
>>
>> this results i have
>> with chunk-size 8,16,32,64,128 i have always the same error: Enter
>> passphrase for /dev/md2: No key available with this passphrase.
>>
>> the first time to open it is possible with chunk 256
>>
>> i send you some logs from 256 and 512
>>
>> at 512 i get the first results with checksum errors.
>> i think the right size is 512?! but the filesystem is brocken?!
> Damage near the beginning of your filesystem is expected, remember?
> Lots of damage further in is not expected, once we have the right settings.
>
> Very interesting.  Especially this with sdb-sdc-missing-sde:
>
>> The filesystem size (according to the superblock) is 1465134336 blocks
>> The physical size of the device is 1465133568 blocks
>> Either the superblock or the partition table is likely to be corrupt!
>> Abort? no
>
> I think this is the right device order, and the right chunk size.  It
> also best matches what we know of your original setup.  But the total
> size of the array device is 768 blocks smaller than it should be.  This
> should be solved.
>
> Please recreate the array with this combination, then show:
>
> mdadm -D /dev/md2
> mdadm -E /dev/sd[bce]1
> cat /proc/partitions
> for x in /dev/sd[bce] ; do fdisk -l $x ; done
>
> Phil

I don't know which chunk size you want, but I think you mean 512k.

mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5 
--raid-devices=4 /dev/sdb1 /dev/sdc1 missing /dev/sde1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdb1 appears to be part of a raid array:
     level=raid5 devices=4 ctime=Fri Feb 22 21:30:55 2013
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdc1 appears to be part of a raid array:
     level=raid5 devices=4 ctime=Fri Feb 22 21:30:55 2013
mdadm: layout defaults to left-symmetric
mdadm: /dev/sde1 appears to be part of a raid array:
     level=raid5 devices=4 ctime=Fri Feb 22 21:30:55 2013
mdadm: size set to 1953511936K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started


mdadm -E /dev/sd[bce]1
/dev/sdb1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x0
      Array UUID : 63da1137:ee45cd1f:72935645:398569ae
            Name : ubuntu:2  (local to host ubuntu)
   Creation Time : Fri Feb 22 23:40:12 2013
      Raid Level : raid5
    Raid Devices : 4

  Avail Dev Size : 3907027037 (1863.02 GiB 2000.40 GB)
      Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
   Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)
     Data Offset : 2048 sectors
    Super Offset : 8 sectors
           State : clean
     Device UUID : 76d36328:32020f15:893f4962:f1cf0e7a

     Update Time : Fri Feb 22 23:40:12 2013
        Checksum : 225b2e91 - correct
          Events : 0

          Layout : left-symmetric
      Chunk Size : 512K

    Device Role : Active device 0
    Array State : AA.A ('A' == active, '.' == missing)
/dev/sdc1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x0
      Array UUID : 63da1137:ee45cd1f:72935645:398569ae
            Name : ubuntu:2  (local to host ubuntu)
   Creation Time : Fri Feb 22 23:40:12 2013
      Raid Level : raid5
    Raid Devices : 4

  Avail Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
      Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
   Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)
     Data Offset : 2048 sectors
    Super Offset : 8 sectors
           State : clean
     Device UUID : 5bb7b565:81820027:695dad8b:e3d97352

     Update Time : Fri Feb 22 23:40:12 2013
        Checksum : 7367b23b - correct
          Events : 0

          Layout : left-symmetric
      Chunk Size : 512K

    Device Role : Active device 1
    Array State : AA.A ('A' == active, '.' == missing)
/dev/sde1:
           Magic : a92b4efc
         Version : 1.2
     Feature Map : 0x0
      Array UUID : 63da1137:ee45cd1f:72935645:398569ae
            Name : ubuntu:2  (local to host ubuntu)
   Creation Time : Fri Feb 22 23:40:12 2013
      Raid Level : raid5
    Raid Devices : 4

  Avail Dev Size : 3907027037 (1863.02 GiB 2000.40 GB)
      Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
   Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)
     Data Offset : 2048 sectors
    Super Offset : 8 sectors
           State : clean
     Device UUID : e016452e:6e86b9fa:27b46b87:08484e32

     Update Time : Fri Feb 22 23:40:12 2013
        Checksum : eb48e2ef - correct
          Events : 0

          Layout : left-symmetric
      Chunk Size : 512K

    Device Role : Active device 3
    Array State : AA.A ('A' == active, '.' == missing)


/proc/partitions
major minor  #blocks  name

    8        0  244198584 sda
    8        1     248832 sda1
    8        2          1 sda2
    8        5  243947520 sda5
  252        0  235589632 dm-0
  252        1    8351744 dm-1
    8       16 1953514584 sdb
    8       17 1953514542 sdb1
    8       32 1953514584 sdc
    8       33 1953513472 sdc1
    8       48 1953514584 sdd
    8       49 1953514542 sdd1
    8       64 1953514584 sde
    8       65 1953514542 sde1
    9        2 5860535808 md2



for x in /dev/sd[bce] ; do fdisk -l $x ; done

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util 
fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

    Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1  3907029167  1953514583+  ee  GPT

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util 
fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

    Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1  3907029167  1953514583+  ee  GPT

WARNING: GPT (GUID Partition Table) detected on '/dev/sde'! The util 
fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

    Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1  3907029167  1953514583+  ee  GPT


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-22 22:42                                                                                         ` Stone
@ 2013-02-23  2:22                                                                                           ` Phil Turmel
  2013-02-23  3:11                                                                                             ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-23  2:22 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/22/2013 05:42 PM, Stone wrote:
> Am 22.02.2013 23:35, schrieb Phil Turmel:
>> Please recreate the array with this combination, then show:
>>
>> mdadm -D /dev/md2
>> mdadm -E /dev/sd[bce]1
>> cat /proc/partitions
>> for x in /dev/sd[bce] ; do fdisk -l $x ; done

> i dont know what chunk do you what but i think you mean 512k

Yes.

> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
> --raid-devices=4 /dev/sdb1 /dev/sdc1 missing /dev/sde1

> mdadm -E /dev/sd[bce]1
> /dev/sdb1:

>  Avail Dev Size : 3907027037 (1863.02 GiB 2000.40 GB)
>      Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
>   Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)

> /dev/sdc1:
>  Avail Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
>      Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
>   Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)

See the difference?

> /dev/sde1:
>  Avail Dev Size : 3907027037 (1863.02 GiB 2000.40 GB)
>      Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
>   Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)

> /proc/partitions
> major minor  #blocks  name
> 
>    8        0  244198584 sda
>    8        1     248832 sda1
>    8        2          1 sda2
>    8        5  243947520 sda5
>  252        0  235589632 dm-0
>  252        1    8351744 dm-1
>    8       16 1953514584 sdb
>    8       17 1953514542 sdb1
>    8       32 1953514584 sdc
>    8       33 1953513472 sdc1

And here?

>    8       48 1953514584 sdd
>    8       49 1953514542 sdd1
>    8       64 1953514584 sde
>    8       65 1953514542 sde1
>    9        2 5860535808 md2

> for x in /dev/sd[bce] ; do fdisk -l $x ; done
> 
> WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util
> fdisk doesn't support GPT. Use GNU Parted.

Oops.  But you get the point, I hope.  /dev/sdc has a different
partition table from /dev/sdb and /dev/sde.  That short partition is
causing mdadm to make the array too small for the filesystem in it.

You need to fix the partitions on /dev/sdc to exactly match /dev/sdb and
/dev/sde.  Can you explain how it might have become different?

And use "parted /dev/sdb print" to show the partitions instead of fdisk.

Phil


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-23  2:22                                                                                           ` Phil Turmel
@ 2013-02-23  3:11                                                                                             ` Stone
  2013-02-23  4:36                                                                                               ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-23  3:11 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 23.02.2013 03:22, schrieb Phil Turmel:
> On 02/22/2013 05:42 PM, Stone wrote:
>> Am 22.02.2013 23:35, schrieb Phil Turmel:
>>> Please recreate the array with this combination, then show:
>>>
>>> mdadm -D /dev/md2
>>> mdadm -E /dev/sd[bce]1
>>> cat /proc/partitions
>>> for x in /dev/sd[bce] ; do fdisk -l $x ; done
>> i dont know what chunk do you what but i think you mean 512k
> Yes.
>
>> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
>> --raid-devices=4 /dev/sdb1 /dev/sdc1 missing /dev/sde1
>> mdadm -E /dev/sd[bce]1
>> /dev/sdb1:
>>   Avail Dev Size : 3907027037 (1863.02 GiB 2000.40 GB)
>>       Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
>>    Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)
>> /dev/sdc1:
>>   Avail Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
>>       Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
>>    Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)
> See the difference?
yes ;-(
>> /dev/sde1:
>>   Avail Dev Size : 3907027037 (1863.02 GiB 2000.40 GB)
>>       Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
>>    Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)
>> /proc/partitions
>> major minor  #blocks  name
>>
>>     8        0  244198584 sda
>>     8        1     248832 sda1
>>     8        2          1 sda2
>>     8        5  243947520 sda5
>>   252        0  235589632 dm-0
>>   252        1    8351744 dm-1
>>     8       16 1953514584 sdb
>>     8       17 1953514542 sdb1
>>     8       32 1953514584 sdc
>>     8       33 1953513472 sdc1
> And here?
>
>>     8       48 1953514584 sdd
>>     8       49 1953514542 sdd1
>>     8       64 1953514584 sde
>>     8       65 1953514542 sde1
>>     9        2 5860535808 md2
>> for x in /dev/sd[bce] ; do fdisk -l $x ; done
>>
>> WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util
>> fdisk doesn't support GPT. Use GNU Parted.
> Oops.  But you get the point, I hope.  /dev/sdc has a different
> partition table from /dev/sdb and /dev/sde.  That short partition is
> causing mdadm to make the array too small for the filesystem in it.
>
> You need to fix the partitions on /dev/sdc to exactly match /dev/sdb and
> /dev/sde.  Can you explain how it might have become different?
>
> And use "parted /dev/sdb print" to show the partitions instead of fdisk.
>
> Phil
>
No, I have no idea why the partition table is wrong on /dev/sdc.
I could copy the partition table from sdb to sdc with dd...
example:
dd if=/dev/sdb of=sdb.part bs=512 count=1
dd if=sdb.part of=/dev/sdc bs=512 count=1

infos:
root@ubuntu:~/raid# parted /dev/sdb print
Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sdb: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
  1      17,4kB  2000GB  2000GB                     raid

root@ubuntu:~/raid# parted /dev/sdc print
Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sdc: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
  1      1049kB  2000GB  2000GB

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-23  3:11                                                                                             ` Stone
@ 2013-02-23  4:36                                                                                               ` Phil Turmel
  2013-02-23 10:19                                                                                                 ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-23  4:36 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/22/2013 10:11 PM, Stone wrote:

> no i have no idea why die partion table is wrong on /dev/sdc.
> i can copy with dd the partion table from sdb to sdc...
> example:
> dd if=/dev/sdb of=sdb.part bs=512 count=1
> dd if=sdb.part of=/dev/sdc bs=512 count=1

That won't work for gpt.
http://en.wikipedia.org/wiki/GUID_Partition_Table
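
If copying a GPT between identical disks is ever the right fix, sgdisk
(from gdisk) can do it; shown here only as a hedged sketch, since in this
case sdc's table is the one that differs and moving its partition start
would also move the data it maps:

    sgdisk -R=/dev/sdc /dev/sdb   # replicate sdb's partition table onto sdc
    sgdisk -G /dev/sdc            # then randomize sdc's disk/partition GUIDs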

> infos:
> root@ubuntu:~/raid# parted /dev/sdb print
> Model: ATA WDC WD20EARS-00M (scsi)
> Disk /dev/sdb: 2000GB
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
> 
> Number  Start   End     Size    File system  Name  Flags
>  1      17,4kB  2000GB  2000GB                     raid

Uh oh.  Sector 34.  That start point is very bad for performance.

> root@ubuntu:~/raid# parted /dev/sdc print
> Model: ATA WDC WD20EARS-00M (scsi)
> Disk /dev/sdc: 2000GB
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
> 
> Number  Start   End     Size    File system  Name  Flags
>  1      1049kB  2000GB  2000GB

This one is aligned for proper performance, but it appears to be the
wrong alignment for getting your data back.  Yuck.

Once you fix this partition table to get your data, take a backup of the
array.  Then make a new array with partitions starting at sector 2048.
Or no partitions at all.
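
When that later repartitioning happens, an aligned GPT layout could be
created along these lines (a sketch with a placeholder device; this
destroys whatever is on the disk):

    parted -s /dev/sdX mklabel gpt
    parted -s /dev/sdX mkpart primary 2048s 100%
    parted -s /dev/sdX set 1 raid on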

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-23  4:36                                                                                               ` Phil Turmel
@ 2013-02-23 10:19                                                                                                 ` Stone
  2013-02-23 16:10                                                                                                   ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-23 10:19 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Am 23.02.2013 05:36, schrieb Phil Turmel:
> On 02/22/2013 10:11 PM, Stone wrote:
>
>> no i have no idea why die partion table is wrong on /dev/sdc.
>> i can copy with dd the partion table from sdb to sdc...
>> example:
>> dd if=/dev/sdb of=sdb.part bs=512 count=1
>> dd if=sdb.part of=/dev/sdc bs=512 count=1
> That won't work for gpt.
> http://en.wikipedia.org/wiki/GUID_Partition_Table
>
>> infos:
>> root@ubuntu:~/raid# parted /dev/sdb print
>> Model: ATA WDC WD20EARS-00M (scsi)
>> Disk /dev/sdb: 2000GB
>> Sector size (logical/physical): 512B/512B
>> Partition Table: gpt
>>
>> Number  Start   End     Size    File system  Name  Flags
>>   1      17,4kB  2000GB  2000GB                     raid
> Uh oh.  Sector 34.  That start point is very bad for performance.
I don't know why the partition is laid out like that...
>
>> root@ubuntu:~/raid# parted /dev/sdc print
>> Model: ATA WDC WD20EARS-00M (scsi)
>> Disk /dev/sdc: 2000GB
>> Sector size (logical/physical): 512B/512B
>> Partition Table: gpt
>>
>> Number  Start   End     Size    File system  Name  Flags
>>   1      1049kB  2000GB  2000GB
> This one is aligned for proper performance, but it appears to be the
> wrong alignment for getting your data back.  Yuck.
>
> Once you fix this partition table to get your data, take a backup of the
> array.  Then make a new array with partitions starting at sector 2048.
> Or no partitions at all.
>
> Phil
OK, before I do something wrong I will ask you some more questions :)
What do you mean by "take a backup of the array", and how does that work?
Sorry, I don't know what you mean.
After that, I would create the partition table on all four devices anew
with parted, with the starting sector at 2048.
Should I make a backup copy of all the devices' partition tables? If yes, how?

thx.


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-23 10:19                                                                                                 ` Stone
@ 2013-02-23 16:10                                                                                                   ` Phil Turmel
  2013-02-23 22:26                                                                                                     ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-23 16:10 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/23/2013 05:19 AM, Stone wrote:
> ok befor i doing something worg i will ask you more questions :)
> what do you mean with "take a backup of the array" and how it works?

First priority is to recover your data in the encrypted volume.  You
can't fix the partition misalignment on sdb and sde without destroying
their content.  So *after* we get your data back, you need to save it
somewhere else when you repartition.

> sorry i dont know what you mean
> after this i create on all four devices the partiontable new with parted
> and the starting sector must be 2048.

Not yet.  We have to save your data first.  Start sector 34 is bad for
performance.  But that is where your data is, so you have to use it
until you get your data back and can put the data on some other storage
system.

> should i make a backup copy of all devices partiontables? if yes how?

for x in /dev/sd[bce] ; do parted $x unit s print ; done
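
That captures the current layout in text form.  If you also want a
binary copy of each GPT that you can restore later, sgdisk can do that
(a small sketch, assuming gdisk/sgdisk is installed; the backup file
names are just examples):

for x in b c e ; do sgdisk --backup=/root/sd$x-gpt.bak /dev/sd$x ; done
# restore later with: sgdisk --load-backup=/root/sdb-gpt.bak /dev/sdb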

The partition structure on /dev/sdc is causing the array to be too short
for the filesystem.  There are two possibilities:

1) The partition doesn't go far enough to the end of the disk,

2) The partition starts too far into the disk (move start sector to 34
like sdb and sde).

We can see that the partition on sdc does start further into the disk
than sdb, so that is suspicious.  But you don't remember repartitioning
sdc, so changing it might misalign your existing data.

I don't know if you can fix #1--I need to see the parted report with
"unit s".  If there's room at the end, try that first and see the
results of "fsck -n".  (The size of /dev/sdc1 needs to be at least
3907025920 sectors.)

If that still has many errors, you try fixing #2.

Phil

ps.  I hope this odyssey has emphasized to all lurkers how terrible it
can be to use "mdadm --create" without careful, thorough preparation.

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-23 16:10                                                                                                   ` Phil Turmel
@ 2013-02-23 22:26                                                                                                     ` Stone
  2013-02-23 23:49                                                                                                       ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-23 22:26 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 23.02.2013 17:10, Phil Turmel wrote:
> On 02/23/2013 05:19 AM, Stone wrote:
>> ok befor i doing something worg i will ask you more questions :)
>> what do you mean with "take a backup of the array" and how it works?
> First priority is to recover your data in the encrypted volume.  You
> can't fix the partition misalignment on sdb and sde without destroying
> their content.  So *after* we get your data back, you need to save it
> somewhere else when you repartition.
>
>> sorry i dont know what you mean
>> after this i create on all four devices the partiontable new with parted
>> and the starting sector must be 2048.
> Not yet.  We have to save your data first.  Start sector 34 is bad for
> performance.  But that is where your data is, so you have to use it
> until you get you data back, and can put the data on some other storage
> system.
I have a second storage system with enough space to copy everything there.
This is my plan: mount the device, copy all my data to my second system as
fast as I can, and after that take the cheap drives and drive over them
with my car ;-)


>> should i make a backup copy of all devices partiontables? if yes how?
> for x in /dev/sd[bce] ; do parted $x unit s print ; done

for x in /dev/sd[bce] ; do parted $x unit s print ; done
Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sdb: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name  Flags
  1      34s    3907029118s  3907029085s                     raid

Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sdc: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name  Flags
  1      2048s  3907028991s  3907026944s

Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sde: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name  Flags
  1      34s    3907029118s  3907029085s                     raid
>
> The partition structure on /dev/sdc is causing the array to be too short
> for the filesystem.  There are two possibilities:
>
> 1) The partition doesn't go far enough to the end of the disk,
>
> 2) The partition starts too far into the disk (move start sector to 34
> like sdb and sde).
>
> We can see that the partition on sdc does start further into the disk
> than sdb, so that is suspicious.  But you don't remember repartitioning
> sdc, so changing it might misalign your existing data.
>
> I don't know if you can fix #1--I need to see the parted report with
> "unit s".  If there's room at the end, you try that first and see the
> results of "fsck -n".  (The size of /dev/sdc1 needs be at least
> 3907025920 sectors.)
>
> If that still has many errors, you try fixing #2.
>
> Phil
>
> ps.  I hope this odyssey has emphasized to all lurkers how terrible it
> can be to use "mdadm --create" without careful, thorough preparation.
@ps: Sorry that I did this, and thanks for your help!


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-23 22:26                                                                                                     ` Stone
@ 2013-02-23 23:49                                                                                                       ` Phil Turmel
  2013-02-24  0:13                                                                                                         ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-23 23:49 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/23/2013 05:26 PM, Stone wrote:

> i have a secound storage system with enough space to copy all there.
> this is my plan. to mount the device and copy as fast as i can all my
> data to my secound system and after this i take the cheap drives and
> drive with my car over it ;-)

Good plan for the first part.  But I wouldn't get rid of the cheap
drives.  They may lack features needed for best use in a raid array, but
they are fine for solo duties.  I have some similarly annoying Seagate
drives.  I use them one-by-one for off-site rotating backups.

> for x in /dev/sd[bce] ; do parted $x unit s print ; done
> Model: ATA WDC WD20EARS-00M (scsi)
> Disk /dev/sdb: 3907029168s
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
> 
> Number  Start  End          Size         File system  Name  Flags
>  1      34s    3907029118s  3907029085s                     raid
> 
> Model: ATA WDC WD20EARS-00M (scsi)
> Disk /dev/sdc: 3907029168s
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
> 
> Number  Start  End          Size         File system  Name  Flags
>  1      2048s  3907028991s  3907026944s
> 
> Model: ATA WDC WD20EARS-00M (scsi)
> Disk /dev/sde: 3907029168s
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
> 
> Number  Start  End          Size         File system  Name  Flags
>  1      34s    3907029118s  3907029085s                     raid
>>
>> The partition structure on /dev/sdc is causing the array to be too short
>> for the filesystem.  There are two possibilities:
>>
>> 1) The partition doesn't go far enough to the end of the disk,

For this, repartition /dev/sdc to start at sector 2048 and end at
3907029118.  Then re-create the array, open luks, and do "fsck -n" and
show the results.

>>
>> 2) The partition starts too far into the disk (move start sector to 34
>> like sdb and sde).

For this, repartition /dev/sdc to start at 34 and end at 3907029118.
This makes it match sdb and sde.  Then re-create the array, open luks,
and do "fsck -n" and show the results.

>> ps.  I hope this odyssey has emphasized to all lurkers how terrible it
>> can be to use "mdadm --create" without careful, thorough preparation.

> @ ps: sorry that i do this and thx for your help!

You're welcome.

Phil


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-23 23:49                                                                                                       ` Phil Turmel
@ 2013-02-24  0:13                                                                                                         ` Stone
  2013-02-24  4:04                                                                                                           ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-24  0:13 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 24.02.2013 00:49, Phil Turmel wrote:
> On 02/23/2013 05:26 PM, Stone wrote:
>
>> i have a secound storage system with enough space to copy all there.
>> this is my plan. to mount the device and copy as fast as i can all my
>> data to my secound system and after this i take the cheap drives and
>> drive with my car over it ;-)
> Good plan for the first part.  But I wouldn't get rid of the cheap
> drives.  They may lack features needed for best use in a raid array, but
> they are fine for solo duties.  I have some similarly annoying Seagate
> drives.  I use them one-by-one for off-site rotating backups.
This was my second time buying WD drives.  I bought 5 hard disks from the
Green series and all of them have very worrying SMART values; two of them
I sent back within one year :(
My new server has Seagate drives; I have rarely had problems with Seagate
over the years and the performance is very good.  I hope these drives
stay that way.
>> for x in /dev/sd[bce] ; do parted $x unit s print ; done
>> Model: ATA WDC WD20EARS-00M (scsi)
>> Disk /dev/sdb: 3907029168s
>> Sector size (logical/physical): 512B/512B
>> Partition Table: gpt
>>
>> Number  Start  End          Size         File system  Name  Flags
>>   1      34s    3907029118s  3907029085s                     raid
>>
>> Model: ATA WDC WD20EARS-00M (scsi)
>> Disk /dev/sdc: 3907029168s
>> Sector size (logical/physical): 512B/512B
>> Partition Table: gpt
>>
>> Number  Start  End          Size         File system  Name  Flags
>>   1      2048s  3907028991s  3907026944s
>>
>> Model: ATA WDC WD20EARS-00M (scsi)
>> Disk /dev/sde: 3907029168s
>> Sector size (logical/physical): 512B/512B
>> Partition Table: gpt
>>
>> Number  Start  End          Size         File system  Name  Flags
>>   1      34s    3907029118s  3907029085s                     raid
>>> The partition structure on /dev/sdc is causing the array to be too short
>>> for the filesystem.  There are two possibilities:
>>>
>>> 1) The partition doesn't go far enough to the end of the disk,
> For this, repartition /dev/sdc to start at sector 2048 and end at
> 3907029118.  Then re-create the array, open luks, and do "fsck -n" and
> show the results.
Roger that.
I would do this:
parted /dev/sdc
unit s
resize 1 2048 3907029118
parted /dev/sdc unit s print -> to check my new settings
re-create the md2 device with chunk 512 and the member order we worked out.
open LUKS
check it with fsck -n and report my (error) results to you.
>>> 2) The partition starts too far into the disk (move start sector to 34
>>> like sdb and sde).
>>>
>>> For this, repartition /dev/sdc to start at 34 and end at 3907029118.
>>> This makes it match sdb and sde.  Then re-create the array, open luks,
>>> and do "fsck -n" and show the results.
This is the next step, if you say step one is the wrong one:
parted /dev/sdc
unit s
resize 1 34 3907029118
parted /dev/sdc unit s print -> to check my new settings
re-create the md2 device with chunk 512 and the member order we worked out.
open LUKS
check it with fsck -n and report my (error) results to you again.
>>> ps.  I hope this odyssey has emphasized to all lurkers how terrible it
>>> can be to use "mdadm --create" without careful, thorough preparation.
>> @ ps: sorry that i do this and thx for your help!
> You're welcome.
>
> Phil
>
Please verify my steps.
Thanks.

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24  0:13                                                                                                         ` Stone
@ 2013-02-24  4:04                                                                                                           ` Phil Turmel
  2013-02-24  7:10                                                                                                             ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-24  4:04 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/23/2013 07:13 PM, Stone wrote:
> Am 24.02.2013 00:49, schrieb Phil Turmel:

>>>> 1) The partition doesn't go far enough to the end of the disk,
>> For this, repartition /dev/sdc to start at sector 2048 and end at
>> 3907029118.  Then re-create the array, open luks, and do "fsck -n" and
>> show the results.
> roger that.
> i would do this:
> parted /dev/sdc
> unit s
> resize 1 2048 3907029118
> parted /dev/sdc unit s print -> to check my new settings
> recreate the md2 device with chunk 512 and the order we find out.
> open luks
> check it with fsck -n and report you my (errors) result.

Do not use "resize".  (And it doesn't exist in current versions of
parted anyways.)  Use rm then mkpart.

Otherwise, Yes.
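
A minimal sketch of the rm/mkpart sequence for case #1 (on GPT the word
"primary" is just a partition name, not a type; adjust the numbers for
case #2):

parted /dev/sdc
  unit s
  rm 1
  mkpart primary 2048s 3907029118s
  print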

>>>> 2) The partition starts too far into the disk (move start sector to 34
>>>> like sdb and sde).
>>>>
>>>> For this, repartition /dev/sdc to start at 34 and end at 3907029118.
>>>> This makes it match sdb and sde.  Then re-create the array, open luks,
>>>> and do "fsck -n" and show the results.
> this is the next step if you say step one is the wrong one.
> parted /dev/sdc
> unit s
> resize 1 34 3907029118
> parted /dev/sdc1 unit s print -> to check my new settings
> recreate the md2 device with chunk 512 and the order we find out.
> open luks
> check it with fsck -n and report you again my (errors) result.

Same here.  Delete the partition with rm, then create it at the new
location and size.

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24  4:04                                                                                                           ` Phil Turmel
@ 2013-02-24  7:10                                                                                                             ` Stone
  2013-02-24 14:15                                                                                                               ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-24  7:10 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 24.02.2013 05:04, Phil Turmel wrote:
> On 02/23/2013 07:13 PM, Stone wrote:
>> Am 24.02.2013 00:49, schrieb Phil Turmel:
>>>>> 1) The partition doesn't go far enough to the end of the disk,
>>> For this, repartition /dev/sdc to start at sector 2048 and end at
>>> 3907029118.  Then re-create the array, open luks, and do "fsck -n" and
>>> show the results.
>> roger that.
>> i would do this:
>> parted /dev/sdc
>> unit s
>> resize 1 2048 3907029118
>> parted /dev/sdc unit s print -> to check my new settings
>> recreate the md2 device with chunk 512 and the order we find out.
>> open luks
>> check it with fsck -n and report you my (errors) result.
> Do not use "resize".  (And it doesn't exist in current versions of
> parted anyways.)  Use rm then mkpart.
>
> Otherwise, Yes.
First try:
parted /dev/sdc unit s print
Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sdc: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name     Flags
  1      2048s  3907029118s  3907027071s               primary

One or more block group descriptor checksums are invalid.  Fix? no

Group descriptor 0 checksum is invalid.  IGNORED.
Group descriptor 1 checksum is invalid.  IGNORED.
Group descriptor 2 checksum is invalid.  IGNORED.
Group descriptor 3 checksum is invalid.  IGNORED. -> Up to 44712
...
...
Inode 12130 has a extra size (40545) which is invalid
Fix? no

Inode 12131 is in use, but has dtime set.  Fix? no

Inode 12131 has a extra size (60293) which is invalid
Fix? no

Inode 12131, i_size is 10232047334267190319, should be 0.  Fix? no

Inode 12131, i_blocks is 205664983023728, should be 0.  Fix? no

Inode 12132 is in use, but has dtime set.  Fix? no

Inode 12132 has a extra size (15477) which is invalid
Fix? no
...
...
and at the end:
Suppress messages? no

Illegal block #1020 (3776472337) in inode 11884.  IGNORED.
Illegal block #1023 (4146244532) in inode 11884.  IGNORED.
Illegal block #1026 (3701080588) in inode 11884.  IGNORED.
Illegal block #1028 (3560657754) in inode 11884.  IGNORED.
Illegal block #1029 (3846570075) in inode 11884.  IGNORED.
Illegal block #1030 (2560600395) in inode 11884.  IGNORED.
Illegal block #1031 (2695974737) in inode 11884.  IGNORED.
Illegal block #1034 (3747644559) in inode 11884.  IGNORED.
Illegal block #1035 (3005116177) in inode 11884.  IGNORED.
Illegal indirect block (3855214404) in inode 11884.  IGNORED.
Error while iterating over blocks in inode 11884: Illegal indirect block 
found
e2fsck: aborted

The output is 711 MB...
Do you need more examples?
What do you say to case #1?

>
>>>>> 2) The partition starts too far into the disk (move start sector to 34
>>>>> like sdb and sde).
>>>>>
>>>>> For this, repartition /dev/sdc to start at 34 and end at 3907029118.
>>>>> This makes it match sdb and sde.  Then re-create the array, open luks,
>>>>> and do "fsck -n" and show the results.
>> this is the next step if you say step one is the wrong one.
>> parted /dev/sdc
>> unit s
>> resize 1 34 3907029118
>> parted /dev/sdc1 unit s print -> to check my new settings
>> recreate the md2 device with chunk 512 and the order we find out.
>> open luks
>> check it with fsck -n and report you again my (errors) result.
> Same here.  Delete the partition with rm, then create it at the new
> location and size.
>
> Phil


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24  7:10                                                                                                             ` Stone
@ 2013-02-24 14:15                                                                                                               ` Phil Turmel
  2013-02-24 18:22                                                                                                                 ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-24 14:15 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/24/2013 02:10 AM, Stone wrote:

> e2fsck: aborted
> 
> the output have 711mb..
> do you ned more examples?
> what do you say to case1?

I think you need to see case #2.  Case #1 isn't very good.

Phil


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24 14:15                                                                                                               ` Phil Turmel
@ 2013-02-24 18:22                                                                                                                 ` Stone
  2013-02-24 18:33                                                                                                                   ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-24 18:22 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 24.02.2013 15:15, Phil Turmel wrote:
> On 02/24/2013 02:10 AM, Stone wrote:
>
>> e2fsck: aborted
>>
>> the output have 711mb..
>> do you ned more examples?
>> what do you say to case1?
> I think you need to see case2.  Case #1 isn't very good.
>
> Phil
>
roger.

  print
Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sdc: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name     Flags
  1      34s    3907029118s  3907029085s               primary


fsck:
One or more block group descriptor checksums are invalid.  Fix? no

Group descriptor 0 checksum is invalid.  IGNORED.
Group descriptor 1 checksum is invalid.  IGNORED.
Group descriptor 2 checksum is invalid.  IGNORED.
Group descriptor 3 checksum is invalid.  IGNORED. -> up to 44712
...
....
HTREE directory inode 6140 has an invalid root node.
Clear HTree index? no

Inode 6140 should not have EOFBLOCKS_FL set (size 11864516288231708776, 
lblk -1)
Clear? no

Inode 6140, i_size is 11864516288231708776, should be 0.  Fix? no

Inode 6140, i_blocks is 146149627678319, should be 0.  Fix? no

Inode 6141 is in use, but has dtime set.  Fix? no

Inode 6141 has imagic flag set.  Clear? no

Inode 6141 has a extra size (39571) which is invalid
Fix? no

Inode 6141 has compression flag set on filesystem without compression 
support.  Clear? no

Inode 6141, i_size is 9532800016578411510, should be 0.  Fix? no

Inode 6141, i_blocks is 44690332653787, should be 0.  Fix? no
.....
....
....
Suppress messages? no

Illegal block #2039 (1737394854) in inode 5693.  IGNORED.
Illegal block #2040 (2008213591) in inode 5693.  IGNORED.
Illegal block #2041 (2130466482) in inode 5693.  IGNORED.
Illegal block #2042 (1739888040) in inode 5693.  IGNORED.
Illegal block #2043 (1807109420) in inode 5693.  IGNORED.
Illegal block #2044 (3445204660) in inode 5693.  IGNORED.
Illegal block #2045 (1712689420) in inode 5693.  IGNORED.
Illegal block #2046 (1938106967) in inode 5693.  IGNORED.
Illegal block #2047 (3704437218) in inode 5693.  IGNORED.
Illegal block #2048 (2131383249) in inode 5693.  IGNORED.
Illegal block #2049 (2676662235) in inode 5693.  IGNORED.
Illegal block #2050 (2662461689) in inode 5693.  IGNORED.
Too many illegal blocks in inode 5693.
Clear inode? no

Suppress messages? no

Illegal block #2051 (4286896066) in inode 5693.  IGNORED.
Illegal block #2055 (3899334065) in inode 5693.  IGNORED.
Illegal block #2056 (3410742419) in inode 5693.  IGNORED.
Illegal block #2057 (3944198843) in inode 5693.  IGNORED.
Illegal block #2058 (3374907392) in inode 5693.  IGNORED.
Illegal indirect block (2165853531) in inode 5693.  IGNORED.
Error while iterating over blocks in inode 5693: Illegal indirect block 
found
e2fsck: aborted

Same shit, different partition.
The log file is 708 MB.
:(



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24 18:22                                                                                                                 ` Stone
@ 2013-02-24 18:33                                                                                                                   ` Phil Turmel
  2013-02-24 19:23                                                                                                                     ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-24 18:33 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/24/2013 01:22 PM, Stone wrote:
> Am 24.02.2013 15:15, schrieb Phil Turmel:
>> On 02/24/2013 02:10 AM, Stone wrote:
>>
>>> e2fsck: aborted
>>>
>>> the output have 711mb..
>>> do you ned more examples?
>>> what do you say to case1?
>> I think you need to see case2.  Case #1 isn't very good.

Case #2 ....
> same shit different partion.
> the logfile have 708mb
> :(


Hmm.  If one was clearly better than the other, I'd recommend you do
"fsck -y" with it.  But they are both ugly.  Not extraordinarily bad,
but bad.

I believe you should copy all three complete drives to spares, then try
"fsck -y" with case #1.  If it doesn't give you your data, put the
spares in your system and try case #2 with "fsck -y".

If neither case #1 nor case #2 give you (most of) your data, I'm out of
ideas. :-(

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24 18:33                                                                                                                   ` Phil Turmel
@ 2013-02-24 19:23                                                                                                                     ` Stone
  2013-02-24 19:51                                                                                                                       ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-24 19:23 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 24.02.2013 19:33, Phil Turmel wrote:
> On 02/24/2013 01:22 PM, Stone wrote:
>> Am 24.02.2013 15:15, schrieb Phil Turmel:
>>> On 02/24/2013 02:10 AM, Stone wrote:
>>>
>>>> e2fsck: aborted
>>>>
>>>> the output have 711mb..
>>>> do you ned more examples?
>>>> what do you say to case1?
>>> I think you need to see case2.  Case #1 isn't very good.
> Case #2 ....
>> same shit different partion.
>> the logfile have 708mb
>> :(
>
> Hmm.  If one was clearly better than the other, I'd recommend you do
> "fsck -y" with it.  But they are both ugly.  Not extraordinarily bad,
> but bad.
>
> I believe you should copy all three complete drives to spares, then try
> "fsck -y" with case #1.  If it doesn't give you your data, put the
> spares in your system and try case #2 with "fsck -y".
>
> If neither case #1 nor case #2 give you (most of) your data, I'm out of
> ideas. :-(
>
> Phil
Hm, OK.
What copy method do you recommend?
dd? (the duration is very long)

I think I must buy disks, because if I get a piece of my data back I will
copy it to my second storage, and I don't have that much free space...

If this is your last idea then I will try it.

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24 19:23                                                                                                                     ` Stone
@ 2013-02-24 19:51                                                                                                                       ` Phil Turmel
  2013-02-24 20:15                                                                                                                         ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-24 19:51 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/24/2013 02:23 PM, Stone wrote:

> hm ok.
> what copy method recommend you?
> dd? (the duration is very long)

I normally use dc3dd.  For troublesome disks, gnu ddrescue is the better
choice.

> i think i must buy disks because if i get a pice of my data back than i
> copy it to my secound storage and so many space i dont have...
> 
> if this is you last idea then i will try this.

Ok.

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24 19:51                                                                                                                       ` Phil Turmel
@ 2013-02-24 20:15                                                                                                                         ` Stone
  2013-02-24 20:25                                                                                                                           ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-24 20:15 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 24.02.2013 20:51, Phil Turmel wrote:
> On 02/24/2013 02:23 PM, Stone wrote:
>
>> hm ok.
>> what copy method recommend you?
>> dd? (the duration is very long)
> I normally use dc3dd.  For troublesome disks, gnu ddrescue is the better
> choice.
OK, I don't know this tool, but I think this is the right way:
ddrescue /dev/sdx /path/to/my/new/drive
>> i think i must buy disks because if i get a pice of my data back than i
>> copy it to my secound storage and so many space i dont have...
>>
>> if this is you last idea then i will try this.
> Ok.
>
> Phil


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24 20:15                                                                                                                         ` Stone
@ 2013-02-24 20:25                                                                                                                           ` Phil Turmel
  2013-02-24 20:38                                                                                                                             ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-24 20:25 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/24/2013 03:15 PM, Stone wrote:
> Am 24.02.2013 20:51, schrieb Phil Turmel:
>> On 02/24/2013 02:23 PM, Stone wrote:
>>
>>> hm ok.
>>> what copy method recommend you?
>>> dd? (the duration is very long)
>> I normally use dc3dd.  For troublesome disks, gnu ddrescue is the better
>> choice.
> ok. this tool i do not know but i think this is the right way:
> ddrescue /dev/sdx /path/to/my/new/drive

If you have spares of the same size, simply duplicate to the spare:

ddrescue -b 4096 /dev/sdx /dev/sdy
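
(A small addition, assuming GNU ddrescue: -f is needed to let it write
to a block device, and an optional third argument names a mapfile so an
interrupted copy can resume; the mapfile path is just an example.)

ddrescue -f -b 4096 /dev/sdx /dev/sdy /root/sdx-to-sdy.map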


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24 20:25                                                                                                                           ` Phil Turmel
@ 2013-02-24 20:38                                                                                                                             ` Stone
  2013-02-24 20:44                                                                                                                               ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-24 20:38 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 24.02.2013 21:25, Phil Turmel wrote:
> On 02/24/2013 03:15 PM, Stone wrote:
>> Am 24.02.2013 20:51, schrieb Phil Turmel:
>>> On 02/24/2013 02:23 PM, Stone wrote:
>>>
>>>> hm ok.
>>>> what copy method recommend you?
>>>> dd? (the duration is very long)
>>> I normally use dc3dd.  For troublesome disks, gnu ddrescue is the better
>>> choice.
>> ok. this tool i do not know but i think this is the right way:
>> ddrescue /dev/sdx /path/to/my/new/drive
> If you have spares of the same size, simply duplicate to the spare:
>
> ddrescue -b 4096 /dev/sdx /dev/sdy
>
OK. I will copy the disks to my NAS and then have a look at how much data
I can recover.  If I can recover a lot of data I will buy spare disks; if
I cannot recover my data I can copy it back from the NAS to the disks and
try my luck with the second partition-table case.

I started the copy to my NAS with this simple command:
ddrescue /dev/sdb /mnt/nas/TEMP/sdb.dd

The rate over the network is good at 112 MB/s, so I think the copy will
finish tomorrow.

If I can copy the data, is there a way to check the files for consistency?
Thanks

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24 20:38                                                                                                                             ` Stone
@ 2013-02-24 20:44                                                                                                                               ` Phil Turmel
  2013-02-24 20:47                                                                                                                                 ` Stone
  2013-02-25 18:31                                                                                                                                 ` Stone
  0 siblings, 2 replies; 79+ messages in thread
From: Phil Turmel @ 2013-02-24 20:44 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/24/2013 03:38 PM, Stone wrote:
> if i can copy data. is there a way to check the files of consistency?
> thx

I'm not sure which files' consistency you mean here...  the dd image in
your nas?  If so, you compute md5sums of each entire disk (rather time
consuming).
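
Something like this, as a sketch -- using the image path from your
earlier mail; the two sums should match if the copy completed:

md5sum /dev/sdb /mnt/nas/TEMP/sdb.dd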

If you mean the data inside your encrypted array...  You can only use
whatever consistency mechanisms you already have for those files, like
"par2" Reed-Solomon checksums.

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24 20:44                                                                                                                               ` Phil Turmel
@ 2013-02-24 20:47                                                                                                                                 ` Stone
  2013-02-25  9:06                                                                                                                                   ` stone
  2013-02-25 18:31                                                                                                                                 ` Stone
  1 sibling, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-24 20:47 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 24.02.2013 21:44, Phil Turmel wrote:
> On 02/24/2013 03:38 PM, Stone wrote:
>> if i can copy data. is there a way to check the files of consistency?
>> thx
> I'm not sure which files' consistency you mean here...  the dd image in
> your nas?  If so, you compute md5sums of each entire disk (rather time
> consuming).
>
> If you mean the data inside your encrypted array...  You can only use
> whatever consistency mechanisms you already have for those files, like
> "par2" Reed-Solomon checksums.
>
> Phil
Yes, I mean the data inside my encrypted array.  I cannot check every
file by hand ;-)

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24 20:47                                                                                                                                 ` Stone
@ 2013-02-25  9:06                                                                                                                                   ` stone
  0 siblings, 0 replies; 79+ messages in thread
From: stone @ 2013-02-25  9:06 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 24.02.2013 21:47, Stone wrote:
> Am 24.02.2013 21:44, schrieb Phil Turmel:
>> On 02/24/2013 03:38 PM, Stone wrote:
>>> if i can copy data. is there a way to check the files of consistency?
>>> thx
>> I'm not sure which files' consistency you mean here...  the dd image in
>> your nas?  If so, you compute md5sums of each entire disk (rather time
>> consuming).
>>
>> If you mean the data inside your encrypted array...  You can only use
>> whatever consistency mechanisms you already have for those files, like
>> "par2" Reed-Solomon checksums.
>>
>> Phil
> Yes i mean the data inside my encrypted array. i cannot check every 
> file per hand ;-)

On the sde disk I get some errors; here are some examples:
    ipos:     1999 GB,   errors:       0,    average rate:   95478 kB/s
    opos:     1999 GB,     time from last successful read:       0 s
Copying non-trrescued:     1999 GB,  errsize:       0 B,  current 
rate:   56885 kB/s
    ipos:     1999 GB,   errors:       0,    average rate:   95476 kB/s
    opos:     1999 GB,     time from last successful read:       0 s
Copying non-trrescued:     1999 GB,  errsize:       0 B,  current 
rate:   58851 kB/s
    ipos:     1999 GB,   errors:       0,    average rate:   95474 kB/s
    opos:     1999 GB,     time from last successful read:       0 s
Copying non-trrescued:     1999 GB,  errsize:       0 B,  current 
rate:   57802 kB/s
    ipos:     1999 GB,   errors:       0,    average rate:   95472 kB/s
    opos:     1999 GB,     time from last successful read:       0 s
Copying non-trrescued:     1999 GB,  errsize:       0 B,  current 
rate:   53870 kB/s
    ipos:     1999 GB,   errors:       0,    average rate:   95470 kB/s
    opos:     1999 GB,     time from last successful read:       0 s
Copying non-trrescued:     1999 GB,  errsize:       0 B,  current 
rate:   57540 kB/s
    ipos:     1999 GB,   errors:       0,    average rate:   95468 kB/s
    opos:     1999 GB,     time from last successful read:       0 s


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-24 20:44                                                                                                                               ` Phil Turmel
  2013-02-24 20:47                                                                                                                                 ` Stone
@ 2013-02-25 18:31                                                                                                                                 ` Stone
  2013-02-25 20:11                                                                                                                                   ` Stone
  1 sibling, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-25 18:31 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 24.02.2013 21:44, Phil Turmel wrote:
> On 02/24/2013 03:38 PM, Stone wrote:
>> if i can copy data. is there a way to check the files of consistency?
>> thx
> I'm not sure which files' consistency you mean here...  the dd image in
> your nas?  If so, you compute md5sums of each entire disk (rather time
> consuming).
>
> If you mean the data inside your encrypted array...  You can only use
> whatever consistency mechanisms you already have for those files, like
> "par2" Reed-Solomon checksums.
>
> Phil
I have a backup of the three devices, I switched back to the partitioning
of case #1, and I fired up fsck:
fsck -y /dev/mapper/md2_nas
fsck from util-linux 2.19.1
e2fsck 1.41.14 (22-Dec-2010)
fsck.ext4: Group descriptors look bad... trying backup blocks...
The filesystem size (according to the superblock) is 1465134336 blocks
The physical size of the device is 1465133568 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? yes

That's all?
Is my chance with case #1 over?

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-25 18:31                                                                                                                                 ` Stone
@ 2013-02-25 20:11                                                                                                                                   ` Stone
  2013-02-26  0:19                                                                                                                                     ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-25 20:11 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 25.02.2013 19:31, Stone wrote:
> Am 24.02.2013 21:44, schrieb Phil Turmel:
>> On 02/24/2013 03:38 PM, Stone wrote:
>>> if i can copy data. is there a way to check the files of consistency?
>>> thx
>> I'm not sure which files' consistency you mean here...  the dd image in
>> your nas?  If so, you compute md5sums of each entire disk (rather time
>> consuming).
>>
>> If you mean the data inside your encrypted array...  You can only use
>> whatever consistency mechanisms you already have for those files, like
>> "par2" Reed-Solomon checksums.
>>
>> Phil
> i have a backup from the three devices and i switched back to partion 
> case 1 and i fire up the fsck:
>
> fsck -y /dev/mapper/md2_nas
> fsck from util-linux 2.19.1
> e2fsck 1.41.14 (22-Dec-2010)
> fsck.ext4: Group descriptors look bad... trying backup blocks...
> The filesystem size (according to the superblock) is 1465134336 blocks
> The physical size of the device is 1465133568 blocks
> Either the superblock or the partition table is likely to be corrupt!
> Abort? yes
>
> thats all?
> is my chance on case 1 over?
Should I try fsck -a?

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-25 20:11                                                                                                                                   ` Stone
@ 2013-02-26  0:19                                                                                                                                     ` Phil Turmel
  2013-02-27  7:26                                                                                                                                       ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Phil Turmel @ 2013-02-26  0:19 UTC (permalink / raw)
  To: Stone; +Cc: linux-raid

On 02/25/2013 03:11 PM, Stone wrote:

>> i have a backup from the three devices and i switched back to partion
>> case 1 and i fire up the fsck:
>>
>> fsck -y /dev/mapper/md2_nas
>> fsck from util-linux 2.19.1
>> e2fsck 1.41.14 (22-Dec-2010)
>> fsck.ext4: Group descriptors look bad... trying backup blocks...
>> The filesystem size (according to the superblock) is 1465134336 blocks
>> The physical size of the device is 1465133568 blocks
>> Either the superblock or the partition table is likely to be corrupt!
>> Abort? yes
>>
>> thats all?
>> is my chance on case 1 over?
> should i try a fsck -a?

No, go to case #2.
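
(For what it's worth, a rough reading of that fsck complaint, not a
definitive diagnosis: the superblock expects 1465134336 blocks of 4 KiB
but the device offers 1465133568, a shortfall of 768 blocks = 3 MiB.
Spread over the three data members of a 4-disk raid5 that is 1 MiB, or
2048 sectors, per member -- the same order of magnitude as the
difference between a start sector of 34 and 2048.  So the geometry
still looks slightly off, rather than the data being gone.)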

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-26  0:19                                                                                                                                     ` Phil Turmel
@ 2013-02-27  7:26                                                                                                                                       ` Stone
  2013-02-27 19:04                                                                                                                                         ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-27  7:26 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 26.02.2013 01:19, Phil Turmel wrote:
> On 02/25/2013 03:11 PM, Stone wrote:
>
>>> i have a backup from the three devices and i switched back to partion
>>> case 1 and i fire up the fsck:
>>>
>>> fsck -y /dev/mapper/md2_nas
>>> fsck from util-linux 2.19.1
>>> e2fsck 1.41.14 (22-Dec-2010)
>>> fsck.ext4: Group descriptors look bad... trying backup blocks...
>>> The filesystem size (according to the superblock) is 1465134336 blocks
>>> The physical size of the device is 1465133568 blocks
>>> Either the superblock or the partition table is likely to be corrupt!
>>> Abort? yes
>>>
>>> thats all?
>>> is my chance on case 1 over?
>> should i try a fsck -a?
> No, go to case #2.
I don't understand any of this.

I now have three more disks on the mainboard for the spare copy.
Here is my new system info:
sda - system
sdb - new spare
sdc - new spare
sdd - new spare
sde - old raid
sdf - old raid
sdg - missing
sdh - old raid

I have made a copy of sde, sdf and sdh to the new spare devices:
ddrescue /dev/sde /dev/sdb --force
ddrescue /dev/sdf /dev/sdc --force
ddrescue /dev/sdh /dev/sdd --force

Now I wanted to test case #2 with fsck -y, and I had a look over the
devices with parted.
ALL devices (new spares and old raid) now have their start sector at 34
and not at 2048.
Here is an example:
parted /dev/sdh unit s print
Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sdh: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name  Flags
  1      34s    3907029118s  3907029085s                     raid

I don't understand this.  All devices had their start sector at 2048,
including the one I changed for the case #1 test.
How can this have happened?
If I try:
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5 
--raid-devices=4 /dev/sde1 /dev/sdf1 missing /dev/sdh1
cryptsetup luksOpen /dev/md2 md2_nas
I cannot open LUKS because -> No key available with this passphrase

I don't understand this.
Should I change the start sector on all devices back to 2048 to check case #2?



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-27  7:26                                                                                                                                       ` Stone
@ 2013-02-27 19:04                                                                                                                                         ` Stone
  2013-02-27 19:33                                                                                                                                           ` Hans-Peter Jansen
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-27 19:04 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

should i try a fsck -a?
>> No, go to case #2.
> i dont understand anything
>
> i have now three more disks an the mainboard for the spare copy.
> here now my new systeminfo:
> sda - system
> sdb - new spare
> sdc - new space
> sdd - new space
> sde - old raid
> sdf - old raid
> sdg - missing
> sdh - old raid
>
> i have take a copy from the sde sdf sdh to the new spare devices.
> ddrescue /dev/sde /dev/sdb --force
> ddrescue /dev/sdf /dev/sdc --force
> ddrescue /dev/sdh /dev/sdd --force
>
> now i would test case #2 with fsck -y and i have a look with parted 
> over the devices.
> ALL devices (new space and old raid) have now the start sector at 34 
> and not on 2048
> here a example:
> parted /dev/sdh unit s print
> Model: ATA WDC WD20EARS-00M (scsi)
> Disk /dev/sdh: 3907029168s
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
>
> Number  Start  End          Size         File system  Name  Flags
>  1      34s    3907029118s  3907029085s                     raid
>
> i dont understand this. all devices had his start sector on 2048 also 
> the one i changed for the case #1 test.
> how can this happend?
> if i try a
> mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5 
> --raid-devices=4 /dev/sde1 /dev/sdf1 missing /dev/sdh1
> cryptsetup luksOpen /dev/md2 md2_nas
> i cant open luks because -> No key available with this passphrase
>
> i dont understand this.
> should i change the sector on all devices back to 2048 to check case #2?
>
>
Sorry :-)
Last night was very short...

Now I have tested case #2:
for x in /dev/sd[efh] ; do parted $x unit s print ; done
Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sde: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name  Flags
  1      34s    3907029118s  3907029085s                     raid

Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sdf: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name     Flags
  1      2048s  3907029118s  3907027071s               prmiary

Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sdh: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name  Flags
  1      34s    3907029118s  3907029085s                     raid

fsck -y /dev/mapper/md2_nas
fsck from util-linux 2.19.1
e2fsck 1.41.14 (22-Dec-2010)
fsck.ext4: Group descriptors look bad... trying backup blocks...
The filesystem size (according to the superblock) is 1465134336 blocks
The physical size of the device is 1465133568 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? yes

No success.

Do you have any ideas?


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-27 19:04                                                                                                                                         ` Stone
@ 2013-02-27 19:33                                                                                                                                           ` Hans-Peter Jansen
  2013-02-27 19:51                                                                                                                                             ` Stone
  0 siblings, 1 reply; 79+ messages in thread
From: Hans-Peter Jansen @ 2013-02-27 19:33 UTC (permalink / raw)
  To: Stone; +Cc: Phil Turmel, linux-raid

On Wednesday, 27 February 2013, 20:04:41, Stone wrote:
> 
> sorry :-)
> the last night was very short...
> 
> know i have tested case #2
> for x in /dev/sd[efh] ; do parted $x unit s print ; done
> Model: ATA WDC WD20EARS-00M (scsi)
> Disk /dev/sde: 3907029168s
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
> 
> Number  Start  End          Size         File system  Name  Flags
>   1      34s    3907029118s  3907029085s                     raid
> 
> Model: ATA WDC WD20EARS-00M (scsi)
> Disk /dev/sdf: 3907029168s
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
> 
> Number  Start  End          Size         File system  Name     Flags
>   1      2048s  3907029118s  3907027071s               prmiary
> 
> Model: ATA WDC WD20EARS-00M (scsi)
> Disk /dev/sdh: 3907029168s
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
> 
> Number  Start  End          Size         File system  Name  Flags
>   1      34s    3907029118s  3907029085s                     raid

Your whole partitioning looks garbled.  /dev/sdf1 starts at 2048 while the
others start at 34 (even 34 is a strange value), yet all end on the same
sector; sdf carries no raid flag, only the name "prmiary" (with a funny
letter swap - my parted at least spells that word correctly).

I would compare the first sectors of these devices (especially around
sector 34).  You probably managed to destroy the former partition table on
sdf, so try adjusting it to look like the others.
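
One quick way to do that comparison -- a sketch, assuming the LUKS
magic is still somewhere within the first few MiB of one of the
members:

for x in /dev/sd[efh] ; do
    echo $x
    hexdump -C -n $((4*1024*1024)) $x | grep -m1 LUKS
done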

Cheers,
Pete

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-27 19:33                                                                                                                                           ` Hans-Peter Jansen
@ 2013-02-27 19:51                                                                                                                                             ` Stone
  2013-03-02 17:13                                                                                                                                               ` Phil Turmel
  0 siblings, 1 reply; 79+ messages in thread
From: Stone @ 2013-02-27 19:51 UTC (permalink / raw)
  To: Hans-Peter Jansen; +Cc: Phil Turmel, linux-raid

On 27.02.2013 20:33, Hans-Peter Jansen wrote:
> Am Mittwoch, 27. Februar 2013, 20:04:41 schrieb Stone:
>> sorry :-)
>> the last night was very short...
>>
>> know i have tested case #2
>> for x in /dev/sd[efh] ; do parted $x unit s print ; done
>> Model: ATA WDC WD20EARS-00M (scsi)
>> Disk /dev/sde: 3907029168s
>> Sector size (logical/physical): 512B/512B
>> Partition Table: gpt
>>
>> Number  Start  End          Size         File system  Name  Flags
>>    1      34s    3907029118s  3907029085s                     raid
>>
>> Model: ATA WDC WD20EARS-00M (scsi)
>> Disk /dev/sdf: 3907029168s
>> Sector size (logical/physical): 512B/512B
>> Partition Table: gpt
>>
>> Number  Start  End          Size         File system  Name     Flags
>>    1      2048s  3907029118s  3907027071s               prmiary
>>
>> Model: ATA WDC WD20EARS-00M (scsi)
>> Disk /dev/sdh: 3907029168s
>> Sector size (logical/physical): 512B/512B
>> Partition Table: gpt
>>
>> Number  Start  End          Size         File system  Name  Flags
>>    1      34s    3907029118s  3907029085s                     raid
> Your whole partitioning looks garbled. /dev/sdf1 starts at 2048, while the
> others start at 34 (even 34 is a strange value), but all end on the same
> sector, sdf carries no raid flag, but primary (with a funny letter swap - my
> parted gets this word right, at least).
>
> I would compare the first sectors of these devices (especially around sector
> 34). Probably you managed to destroy your former partition table on sdf, e.g.
> try to adjust this one similar to the others.
>
> Cheers,
> Pete
I have also set all drives to start at sector 34 and then started
fsck -y, but the output was the same...

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Brocken Raid & LUKS
  2013-02-27 19:51                                                                                                                                             ` Stone
@ 2013-03-02 17:13                                                                                                                                               ` Phil Turmel
  0 siblings, 0 replies; 79+ messages in thread
From: Phil Turmel @ 2013-03-02 17:13 UTC (permalink / raw)
  To: Stone; +Cc: Hans-Peter Jansen, linux-raid

Hi All,

[Sorry about the delay--I had to travel this week.]

On 02/27/2013 02:51 PM, Stone wrote:

> i also have set all drives to start with sector 34 and then i started a
> fsck -y but the output was the same...

So case #2 with "fsck -y" was just as bad.  I'm sorry, I'm out of ideas.

Phil

^ permalink raw reply	[flat|nested] 79+ messages in thread

end of thread, other threads:[~2013-03-02 17:13 UTC | newest]

Thread overview: 79+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-02-19 16:01 Brocken Raid & LUKS stone
2013-02-19 17:57 ` Phil Turmel
     [not found]   ` <5123E4E9.3020609@heisl.org>
2013-02-19 21:16     ` Phil Turmel
     [not found]       ` <5123EF45.6080405@heisl.org>
     [not found]         ` <5123F7C7.7000406@turmel.org>
     [not found]           ` <5123FB71.3060509@heisl.org>
2013-02-20  0:31             ` Phil Turmel
2013-02-20 18:32               ` Stone
2013-02-20 18:39                 ` Phil Turmel
2013-02-21  7:04                   ` Stone
2013-02-21  9:42                     ` stone
2013-02-21 13:29                       ` Phil Turmel
2013-02-21 14:19                         ` stone
2013-02-21 15:04                           ` Phil Turmel
2013-02-21 15:30                             ` stone
2013-02-21 15:38                               ` Phil Turmel
2013-02-21 15:49                                 ` Phil Turmel
2013-02-21 16:32                                   ` Stone
2013-02-21 16:41                                     ` Phil Turmel
2013-02-21 16:43                                       ` Stone
2013-02-21 16:46                                         ` Phil Turmel
2013-02-21 16:51                                           ` Stone
2013-02-21 16:54                                             ` Phil Turmel
2013-02-21 17:17                                               ` Stone
2013-02-21 17:23                                                 ` Stone
2013-02-21 17:36                                                   ` Phil Turmel
2013-02-21 17:47                                                     ` Stone
2013-02-21 18:00                                                       ` Phil Turmel
2013-02-21 18:08                                                         ` Stone
2013-02-21 18:11                                                           ` Phil Turmel
2013-02-21 18:29                                                             ` Stone
2013-02-21 18:54                                                               ` Phil Turmel
2013-02-21 19:12                                                                 ` Stone
2013-02-21 19:17                                                                   ` Stone
2013-02-21 19:24                                                                   ` Phil Turmel
2013-02-21 19:29                                                                     ` Stone
2013-02-21 19:45                                                                       ` Phil Turmel
2013-02-21 19:46                                                                       ` Stone
     [not found]                                                                         ` <51269DE0.5070905@heisl.org>
2013-02-22 10:31                                                                           ` stone
2013-02-22 13:53                                                                             ` Phil Turmel
2013-02-22 14:58                                                                               ` Stone
2013-02-22 15:37                                                                                 ` Phil Turmel
2013-02-22 18:17                                                                                   ` Stone
2013-02-22 18:23                                                                                     ` Phil Turmel
2013-02-22 20:43                                                                                     ` Stone
2013-02-22 22:35                                                                                       ` Phil Turmel
2013-02-22 22:42                                                                                         ` Stone
2013-02-23  2:22                                                                                           ` Phil Turmel
2013-02-23  3:11                                                                                             ` Stone
2013-02-23  4:36                                                                                               ` Phil Turmel
2013-02-23 10:19                                                                                                 ` Stone
2013-02-23 16:10                                                                                                   ` Phil Turmel
2013-02-23 22:26                                                                                                     ` Stone
2013-02-23 23:49                                                                                                       ` Phil Turmel
2013-02-24  0:13                                                                                                         ` Stone
2013-02-24  4:04                                                                                                           ` Phil Turmel
2013-02-24  7:10                                                                                                             ` Stone
2013-02-24 14:15                                                                                                               ` Phil Turmel
2013-02-24 18:22                                                                                                                 ` Stone
2013-02-24 18:33                                                                                                                   ` Phil Turmel
2013-02-24 19:23                                                                                                                     ` Stone
2013-02-24 19:51                                                                                                                       ` Phil Turmel
2013-02-24 20:15                                                                                                                         ` Stone
2013-02-24 20:25                                                                                                                           ` Phil Turmel
2013-02-24 20:38                                                                                                                             ` Stone
2013-02-24 20:44                                                                                                                               ` Phil Turmel
2013-02-24 20:47                                                                                                                                 ` Stone
2013-02-25  9:06                                                                                                                                   ` stone
2013-02-25 18:31                                                                                                                                 ` Stone
2013-02-25 20:11                                                                                                                                   ` Stone
2013-02-26  0:19                                                                                                                                     ` Phil Turmel
2013-02-27  7:26                                                                                                                                       ` Stone
2013-02-27 19:04                                                                                                                                         ` Stone
2013-02-27 19:33                                                                                                                                           ` Hans-Peter Jansen
2013-02-27 19:51                                                                                                                                             ` Stone
2013-03-02 17:13                                                                                                                                               ` Phil Turmel
     [not found]                                                                                   ` <5127B0AB.5040108@heisl.org>
2013-02-22 18:30                                                                                     ` Phil Turmel
2013-02-21 22:29                                       ` Chris Murphy
2013-02-21 22:34                                         ` Phil Turmel
2013-02-21 22:20                                     ` Chris Murphy
2013-02-21 22:26                                       ` Phil Turmel
2013-02-21 13:15                     ` Phil Turmel
