* [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`?
@ 2011-08-04 21:31 Paul Menzel
  2011-08-04 23:18 ` Paul Menzel
  0 siblings, 1 reply; 12+ messages in thread
From: Paul Menzel @ 2011-08-04 21:31 UTC (permalink / raw)
  To: dm-crypt

Dear dm-crypt folks,


Trying to save my data [1][2][3], I do not understand the following.

The partitions of two drives `/dev/sd{a,b}2` start at exactly the same point.

------- 8< --- partition table --- >8 -------
# partition table of /dev/sda
unit: sectors

/dev/sda1 : start=       63, size=   995967, Id=fd, bootable
/dev/sda2 : start=   996030, size=3906028035, Id=fd
/dev/sda3 : start=        0, size=        0, Id= 0
/dev/sda4 : start=        0, size=        0, Id= 0

# partition table of /dev/sdb
unit: sectors

/dev/sdb1 : start=       63, size=   995967, Id=fd, bootable
/dev/sdb2 : start=   996030, size=975772035, Id=fd
/dev/sdb3 : start=        0, size=        0, Id= 0
/dev/sdb4 : start=        0, size=        0, Id= 0
------- 8< --- partition table --- >8 -------

After running `cryptsetup luksHeaderRestore /dev/sda2 --header-backup-file
sdb.luksHeaderBackup`, with `sdb.luksHeaderBackup` obtained from
`/dev/sdb2`, the passphrase that works on sdb should definitely also work
on sda, although the data might then be read as garbage.
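
As a sanity check (just a sketch, I have not run it yet), the header
metadata on both devices should match after the restore, since the
passphrase is only checked against the key slots stored in the header:

% sudo cryptsetup luksDump /dev/sdb2
% sudo cryptsetup luksDump /dev/sda2

Both dumps should then report the same UUID, cipher and key slot
parameters.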


Thanks,

Paul


[1] http://www.saout.de/pipermail/dm-crypt/2011-August/001858.html
[2] http://www.saout.de/pipermail/dm-crypt/2011-August/001858.html
[3] http://marc.info/?l=linux-raid&m=131248606026407&w=2

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`?
  2011-08-04 21:31 [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`? Paul Menzel
@ 2011-08-04 23:18 ` Paul Menzel
  2011-08-05  2:20   ` Milan Broz
  0 siblings, 1 reply; 12+ messages in thread
From: Paul Menzel @ 2011-08-04 23:18 UTC (permalink / raw)
  To: dm-crypt

2011/8/4 Paul Menzel <pm.debian@googlemail.com>:

> trying to save my data [1][2][3] I do not understand the following.
>
> The partitions of two drives `/dev/sd{a,b}2` start at exactly the same point.
>
> ------- 8< --- partition table --- >8 -------
> # partition table of /dev/sda
> unit: sectors
>
> /dev/sda1 : start=       63, size=   995967, Id=fd, bootable
> /dev/sda2 : start=   996030, size=3906028035, Id=fd
> /dev/sda3 : start=        0, size=        0, Id= 0
> /dev/sda4 : start=        0, size=        0, Id= 0
>
> # partition table of /dev/sdb
> unit: sectors
>
> /dev/sdb1 : start=       63, size=   995967, Id=fd, bootable
> /dev/sdb2 : start=   996030, size=975772035, Id=fd
> /dev/sdb3 : start=        0, size=        0, Id= 0
> /dev/sdb4 : start=        0, size=        0, Id= 0
> ------- 8< --- partition table --- >8 -------
>
> Doing `cryptsetup luksHeaderRestore /dev/sda2 --header-backup-file
> sdb.luksHeaderBackup` with `sdb.luksHeaderBackup` obtained from
> `/dev/sdb2` the passphrase, which works on sdb, should definitely work
> on sda although the data might be read as garbage.

It looks like `luksHeaderRestore` is not working correctly for me.
Please take a look at the following results. `/dev/sdb` is the old
drive with the working LUKS setup, meaning my passphrase gets
accepted. I am sorry that Google Mail will probably line-wrap
everything.

------- 8< --- entered commands --- >8 -------
% sudo cryptsetup luksHeaderBackup /dev/sda2 --header-backup-file /tmp/sda.header
% sudo cryptsetup luksHeaderBackup /dev/sdb2 --header-backup-file /tmp/sdb.header


% sudo md5sum /tmp/sd*
7b897c620776f549324810a8aeb9921e  /tmp/sda.header
ce314509007b2c76eb85e7b89ee25da5  /tmp/sdb.header

% sudo cryptsetup --verbose --debug luksHeaderRestore /dev/sda2 --header-backup-file /tmp/sdb.header
# cryptsetup 1.3.0 processing "cryptsetup --verbose --debug luksHeaderRestore /dev/sda2 --header-backup-file /tmp/sdb.header"
# Running command luksHeaderRestore.
# Locking memory.
# Allocating crypt device /dev/sda2 context.
# Trying to open and read device /dev/sda2.
# Initialising device-mapper backend, UDEV is enabled.
# Detected dm-crypt version 1.10.0, dm-ioctl version 4.19.1.
# Initialising gcrypt crypto backend.
# Requested header restore to device /dev/sda2 (LUKS1) from file /tmp/sdb.header.
# Reading LUKS header of size 1024 from backup file /tmp/sdb.header
# Reading LUKS header of size 1024 from device /dev/sda2
# Device /dev/sda2 already contains LUKS header, checking UUID and offset.

WARNING!
========
Device /dev/sda2 already contains LUKS header. Replacing header will
destroy existing keyslots.

Are you sure? (Type uppercase yes): YES
# Storing backup of header (1024 bytes) and keyslot area (1048576 bytes) to device /dev/sda2.
# Reading LUKS header of size 1024 from device /dev/sda2
# Releasing crypt device /dev/sda2 context.
# Releasing device-mapper backend.
# Unlocking memory.
Command successful.

% sudo cryptsetup --verbose --debug luksHeaderBackup /dev/sda2 --header-backup-file /tmp/sda2.header
# cryptsetup 1.3.0 processing "cryptsetup --verbose --debug luksHeaderBackup /dev/sda2 --header-backup-file /tmp/sda2.header"
# Running command luksHeaderBackup.
# Locking memory.
# Allocating crypt device /dev/sda2 context.
# Trying to open and read device /dev/sda2.
# Initialising device-mapper backend, UDEV is enabled.
# Detected dm-crypt version 1.10.0, dm-ioctl version 4.19.1.
# Initialising gcrypt crypto backend.
# Requested header backup of device /dev/sda2 (LUKS1) to file /tmp/sda2.header.
# Reading LUKS header of size 1024 from device /dev/sda2
# Storing backup of header (1024 bytes) and keyslot area (1048576 bytes).
# Releasing crypt device /dev/sda2 context.
# Releasing device-mapper backend.
# Unlocking memory.
Command successful.

% sudo md5sum /tmp/*header
7b897c620776f549324810a8aeb9921e  /tmp/sda2.header
7b897c620776f549324810a8aeb9921e  /tmp/sda.header
ce314509007b2c76eb85e7b89ee25da5  /tmp/sdb.header
------- 8< --- entered commands --- >8 -------

I would have assumed that all files are identical, i.e. that they have
the same hash.


Thanks,

Paul


> [1] http://www.saout.de/pipermail/dm-crypt/2011-August/001858.html
> [2] http://www.saout.de/pipermail/dm-crypt/2011-August/001858.html
> [3] http://marc.info/?l=linux-raid&m=131248606026407&w=2

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`?
  2011-08-04 23:18 ` Paul Menzel
@ 2011-08-05  2:20   ` Milan Broz
  2011-08-05  8:41     ` Paul Menzel
  0 siblings, 1 reply; 12+ messages in thread
From: Milan Broz @ 2011-08-05  2:20 UTC (permalink / raw)
  To: Paul Menzel; +Cc: dm-crypt

On 08/05/2011 01:18 AM, Paul Menzel wrote:
> % sudo md5sum /tmp/*header
> 7b897c620776f549324810a8aeb9921e  /tmp/sda2.header
> 7b897c620776f549324810a8aeb9921e  /tmp/sda.header
> ce314509007b2c76eb85e7b89ee25da5  /tmp/sdb.header
> ------- 8< --- entered commands --- >8 -------
> 
> I would have assumed that all files are identical, i. e. they have the
> same hash.

It should be the same.
(There is a gap between the header and the keyslot area which is
explicitly wiped during backup, but given the commands you ran it
should be the same now.)

At which binary offsets does it differ?
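
Something like this sketch should show the differing byte offsets
(file names as in your mail, untested here):

  cmp -l /tmp/sda.header /tmp/sdb.header | head
  # or compare hex dumps:
  xxd /tmp/sda.header > sda.hex
  xxd /tmp/sdb.header > sdb.hex
  diff sda.hex sdb.hex | head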

Can you try the same exercise but run it through a loop device?

(dd e.g. 4M from both sd[ab] disks, map the images to loop devices and
run the same luksHeaderBackup/Restore commands.)
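
As a rough sketch (adjust device and image names as needed):

  dd if=/dev/sda2 of=sda2.img bs=1M count=4
  dd if=/dev/sdb2 of=sdb2.img bs=1M count=4
  losetup /dev/loop3 sda2.img
  losetup /dev/loop4 sdb2.img
  cryptsetup luksHeaderBackup /dev/loop3 --header-backup-file loop-sda.header
  cryptsetup luksHeaderBackup /dev/loop4 --header-backup-file loop-sdb.header
  md5sum loop-sda.header loop-sdb.header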
Do you see the same problem?

Milan

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`?
  2011-08-05  2:20   ` Milan Broz
@ 2011-08-05  8:41     ` Paul Menzel
  2011-08-05 12:11       ` Paul Menzel
  0 siblings, 1 reply; 12+ messages in thread
From: Paul Menzel @ 2011-08-05  8:41 UTC (permalink / raw)
  To: Milan Broz; +Cc: dm-crypt

2011/8/5 Milan Broz <mbroz@redhat.com>:
> On 08/05/2011 01:18 AM, Paul Menzel wrote:
>> % sudo md5sum /tmp/*header
>> 7b897c620776f549324810a8aeb9921e  /tmp/sda2.header
>> 7b897c620776f549324810a8aeb9921e  /tmp/sda.header
>> ce314509007b2c76eb85e7b89ee25da5  /tmp/sdb.header
>> ------- 8< --- entered commands --- >8 -------
>>
>> I would have assumed that all files are identical, i. e. they have the
>> same hash.
>
> It should be the same.
> (But there is gap between header and keyslot which is explicitly wiped
> during backup. But from the commands you run it should be the same now.)
>
> On which binary offsets it differs?

Do you mean the value of Payload offset in the output of `cryptsetup
luksDump /dev/sda2`? Both have the value 2048.
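(That is, 2048 sectors x 512 bytes = 1048576 bytes, so the LUKS header
and key slot area occupy the first 1 MiB of each partition, which is
exactly the region the 1 MiB `dd` copies above capture.)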

> Can you try the same exercise but running it through loop device?
>
> (dd e.g. 4M from both sd[ab] disks, map it to loop devices and run the same
> commands - luksHeaderBackup/Restore.

------- 8< --- entered commands --- >8 -------
root@grml ~ # dd bs=1024 count=4096 if=/dev/sda2 of=new-drive--dd-bs4M
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.563301 s, 7.4 MB/s
root@grml ~ # dd bs=1024 count=4096 if=/dev/sdb2 of=old-drive--dd-bs4M
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB) copied, 0.121917 s, 34.4 MB/s
root@grml ~ # dd bs=1024 count=1024 if=/dev/sda2 of=new-drive--dd-bs1M
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.0256151 s, 40.9 MB/s
root@grml ~ # dd bs=1024 count=1024 if=/dev/sdb2 of=old-drive--dd-bs1M
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.0223845 s, 46.8 MB/s
root@grml ~ # md5sum *drive*
62ca46f7ed57f7ef673f58547fd438c6  new-drive--dd-bs1M
9d30117b0d9d3e57d6269916123ed9f2  new-drive--dd-bs4M
11faaf01449e87f40378945392819c09  old-drive--dd-bs1M
bd7aa8cc17a59cd74f2fc30a154cb823  old-drive--dd-bs4M

# No filesystem on there, so this errors out; ZSH shows error code 32 at the start of the next prompt.
root@grml ~ # mount -o loop new-drive--dd-bs4M la
mount: unknown filesystem type 'crypto_LUKS'
32 root@grml ~ # losetup /dev/loop3 new-drive--dd-bs4M
root@grml ~ # cryptsetup isLuks /dev/loop3 /dev/loop3 # True because on next line no error code in the beginning.

root@grml ~ # cryptsetup luksHeaderBackup /dev/loop3 --header-backup-file sda.header
root@grml ~ # losetup /dev/loop4 old-drive--dd-bs4M
root@grml ~ # cryptsetup isLuks /dev/loop4
root@grml ~ # cryptsetup luksHeaderBackup /dev/loop4 --header-backup-file sdb.header
root@grml ~ # md5sum *header
7b897c620776f549324810a8aeb9921e  sda.header
ce314509007b2c76eb85e7b89ee25da5  sdb.header
root@grml ~ # cryptsetup luksHeaderRestore /dev/loop3 --header-backup-file sdb.header

WARNING!
========
Device /dev/loop3 already contains LUKS header. Replacing header will
destroy existing keyslots.

Are you sure? (Type uppercase yes): YES
root@grml ~ # cryptsetup luksHeaderBackup /dev/loop3 --header-backup-file sda.header2
root@grml ~ # md5sum *header*
7b897c620776f549324810a8aeb9921e  sda.header
ce314509007b2c76eb85e7b89ee25da5  sda.header2
ce314509007b2c76eb85e7b89ee25da5  sdb.header
------- 8< --- entered commands --- >8 -------

> Do you see the same problem?

No, as you can see from the output above, I do not see the same
problem. What could be the reason for this difference in behaviour?


Thanks,

Paul

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`?
  2011-08-05  8:41     ` Paul Menzel
@ 2011-08-05 12:11       ` Paul Menzel
  2011-08-05 14:16         ` Milan Broz
  0 siblings, 1 reply; 12+ messages in thread
From: Paul Menzel @ 2011-08-05 12:11 UTC (permalink / raw)
  To: Milan Broz; +Cc: dm-crypt

[-- Attachment #1: Type: text/plain, Size: 4230 bytes --]

2011/8/5 Paul Menzel <pm.debian@googlemail.com>:
> 2011/8/5 Milan Broz <mbroz@redhat.com>:
>> On 08/05/2011 01:18 AM, Paul Menzel wrote:
>>> % sudo md5sum /tmp/*header
>>> 7b897c620776f549324810a8aeb9921e  /tmp/sda2.header
>>> 7b897c620776f549324810a8aeb9921e  /tmp/sda.header
>>> ce314509007b2c76eb85e7b89ee25da5  /tmp/sdb.header
>>> ------- 8< --- entered commands --- >8 -------
>>>
>>> I would have assumed that all files are identical, i. e. they have the
>>> same hash.
>>
>> It should be the same.
>> (But there is gap between header and keyslot which is explicitly wiped
>> during backup. But from the commands you run it should be the same now.)

[…]

>> Can you try the same exercise but running it through loop device?
>>
>> (dd e.g. 4M from both sd[ab] disks, map it to loop devices and run the same
>> commands - luksHeaderBackup/Restore.
>
> ------- 8< --- entered commands --- >8 -------

[ Got the headers from the loop-mounted `dd` copies. ]

> root@grml ~ # md5sum *header
> 7b897c620776f549324810a8aeb9921e  sda.header
> ce314509007b2c76eb85e7b89ee25da5  sdb.header
> root@grml ~ # cryptsetup luksHeaderRestore /dev/loop3
> --header-backup-file sdb.header
>
> WARNING!
> ========
> Device /dev/loop3 already contains LUKS header. Replacing header will
> destroy existing keyslots.
>
> Are you sure? (Type uppercase yes): YES
> root@grml ~ # cryptsetup luksHeaderBackup /dev/loop3
> --header-backup-file sda.header2
> root@grml ~ # md5sum *header*
> 7b897c620776f549324810a8aeb9921e  sda.header
> ce314509007b2c76eb85e7b89ee25da5  sda.header2
> ce314509007b2c76eb85e7b89ee25da5  sdb.header
> ------- 8< --- entered commands --- >8 -------

One addition: `cryptsetup luksOpen /dev/loop3` does *not* work on the
original image copied from `/dev/sda2` with `dd`. It *does* work after
`cryptsetup luksHeaderRestore /dev/loop3 --header-backup-file
sdb.header`.

>> Do you see the same problem?
>
> No, as from the output above, I do not see the same problem. What
> could be the reason for this difference in behaviour?

On #lvm Milan suggested that the problem lies with the new drive
having some misalignment

--- 8< --- sfdisk output --- >8 ---
% sudo sfdisk -l /dev/sda

Disk /dev/sda: 243201 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1   *      0+     61      62-    497983+  fd  Linux raid autodetect
/dev/sda2         62  243200  243139  1953014017+  fd  Linux raid autodetect
                end: (c,h,s) expected (1023,254,63) found (512,254,63)
/dev/sda3          0       -       0          0    0  Empty
/dev/sda4          0       -       0          0    0  Empty
% sudo sfdisk -V /dev/sda
partition 2: end: (c,h,s) expected (1023,254,63) found (512,254,63)
/dev/sda: OK
--- 8< --- sfdisk output --- >8 ---

and he guessed that I would be able to reproduce the problem when
writing with `dd oflag=direct …`.

Unfortunately, this does not seem to be the case.

I attach my commands and their outputs, since they would be horrible
to read with Google Mail's line-wrapping “feature”.

--- 8< --- md5sum of dd commands --- >8 ---
# md5sum *drive*

62ca46f7ed57f7ef673f58547fd438c6  new-drive--dd-bs512-count2048
62ca46f7ed57f7ef673f58547fd438c6  new-drive--dd-bs512-count2048-iflag-direct
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048-iflag-direct-sync--with-dd-from-old--written-with-oflag-direct-sync-to-new
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-sync-to-new
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-to-new
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-to-new2
11faaf01449e87f40378945392819c09  old-drive--dd-bs512-count2048
11faaf01449e87f40378945392819c09  old-drive--dd-bs512-count2048-iflag-direct
--- 8< --- md5sum of dd commands --- >8 ---


Thanks,

Paul

[-- Attachment #2: 20110805--history-of-shell-comands-and-output --]
[-- Type: application/octet-stream, Size: 6332 bytes --]

root@grml ~/ein # dd if=/dev/sdb2 of=old-drive--dd-bs512-count2048 bs=512 count=2048
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 0.0343698 s, 30.5 MB/s
root@grml ~/ein # dd if=/dev/sdb2 of=old-drive--dd-bs512-count2048-iflag-direct bs=512 count=2048 iflag=direct
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 0.209783 s, 5.0 MB/s
root@grml ~/ein # dd if=/dev/sda2 of=new-drive--dd-bs512-count2048 bs=512 count=2048
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 0.455503 s, 2.3 MB/s
root@grml ~/ein # dd if=/dev/sda2 of=new-drive--dd-bs512-count2048-iflag-direct bs=512 count=2048 iflag=direct
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 0.767138 s, 1.4 MB/s
root@grml ~/ein # md5sum *drive*
62ca46f7ed57f7ef673f58547fd438c6  new-drive--dd-bs512-count2048
62ca46f7ed57f7ef673f58547fd438c6  new-drive--dd-bs512-count2048-iflag-direct
11faaf01449e87f40378945392819c09  old-drive--dd-bs512-count2048
11faaf01449e87f40378945392819c09  old-drive--dd-bs512-count2048-iflag-direct
root@grml ~/ein # dd if=old-drive--dd-bs512-count2048 of=/dev/sda2 bs=512 count=2048
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 0.0365978 s, 28.7 MB/s
root@grml ~/ein # dd if=/dev/sda2 of=new-drive--dd-bs512-count2048--with-dd-from-old bs=512 count=2048
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 0.463175 s, 2.3 MB/s
root@grml ~/ein # md5sum old-drive--dd-bs512-count2048 new-drive--dd-bs512-count2048--with-dd-from-old
11faaf01449e87f40378945392819c09  old-drive--dd-bs512-count2048
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old
root@grml ~/ein # cryptsetup luksOpen /dev/sda2 sda2_crypt
Enter passphrase for /dev/sda2:
root@grml ~/ein # # That worked and I could access my data after mounting `sda2_crypt`.
root@grml ~/ein # dd if=old-drive--dd-bs512-count2048 of=/dev/sda2 bs=512 count=2048 oflag=direct
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 0.365299 s, 2.9 MB/s
root@grml ~/ein # dd if=/dev/sda2 of=new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-to-new bs=512 count=2048
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 0.442945 s, 2.4 MB/s
root@grml ~/ein # md5sum *drive*
62ca46f7ed57f7ef673f58547fd438c6  new-drive--dd-bs512-count2048
62ca46f7ed57f7ef673f58547fd438c6  new-drive--dd-bs512-count2048-iflag-direct
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-to-new
11faaf01449e87f40378945392819c09  old-drive--dd-bs512-count2048
11faaf01449e87f40378945392819c09  old-drive--dd-bs512-count2048-iflag-direct
root@grml ~/ein # # I tested if `blockdev --setro /dev/sda2` for *reading* (for writing it was turned on) made a difference. It did not.
root@grml ~/ein # dd if=/dev/sda2 of=new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-to-new2 bs=512 count=2048
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 0.0204029 s, 51.4 MB/s
root@grml ~/ein # md5sum *drive*
62ca46f7ed57f7ef673f58547fd438c6  new-drive--dd-bs512-count2048
62ca46f7ed57f7ef673f58547fd438c6  new-drive--dd-bs512-count2048-iflag-direct
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-to-new
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-to-new2
11faaf01449e87f40378945392819c09  old-drive--dd-bs512-count2048
11faaf01449e87f40378945392819c09  old-drive--dd-bs512-count2048-iflag-direct
root@grml ~/ein # dd if=old-drive--dd-bs512-count2048 of=/dev/sda2 bs=512 count=2048 oflag=direct,sync
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 51.4745 s, 20.4 kB/s
root@grml ~/ein # dd if=/dev/sda2 of=new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-sync-to-new bs=512 count=2048
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 0.0296868 s, 35.3 MB/s
root@grml ~/ein # md5sum *drive*
62ca46f7ed57f7ef673f58547fd438c6  new-drive--dd-bs512-count2048
62ca46f7ed57f7ef673f58547fd438c6  new-drive--dd-bs512-count2048-iflag-direct
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-sync-to-new
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-to-new
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-to-new2
11faaf01449e87f40378945392819c09  old-drive--dd-bs512-count2048
11faaf01449e87f40378945392819c09  old-drive--dd-bs512-count2048-iflag-direct
root@grml ~/ein # dd iflag=direct,sync if=/dev/sda2 of=new-drive--dd-bs512-count2048-iflag-direct-sync--with-dd-from-old--written-with-oflag-direct-sync-to-new bs=512 count=2048
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 0.328304 s, 3.2 MB/s
root@grml ~/ein # md5sum *drive*
62ca46f7ed57f7ef673f58547fd438c6  new-drive--dd-bs512-count2048
62ca46f7ed57f7ef673f58547fd438c6  new-drive--dd-bs512-count2048-iflag-direct
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048-iflag-direct-sync--with-dd-from-old--written-with-oflag-direct-sync-to-new
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-sync-to-new
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-to-new
11faaf01449e87f40378945392819c09  new-drive--dd-bs512-count2048--with-dd-from-old--written-with-oflag-direct-to-new2
11faaf01449e87f40378945392819c09  old-drive--dd-bs512-count2048
11faaf01449e87f40378945392819c09  old-drive--dd-bs512-count2048-iflag-direct

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`?
  2011-08-05 12:11       ` Paul Menzel
@ 2011-08-05 14:16         ` Milan Broz
  2011-08-05 14:52           ` Arno Wagner
                             ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Milan Broz @ 2011-08-05 14:16 UTC (permalink / raw)
  To: Paul Menzel; +Cc: dm-crypt

On 08/05/2011 02:11 PM, Paul Menzel wrote:
>> No, as from the output above, I do not see the same problem. What
>> could be the reason for this difference in behaviour?
> 
> On #lvm Milan suggested that the problem lies with the new drive
> having some misalignment

I have checked the dump and there is clear corruption of the first
keyslot (offsets 0x1000 - 0x1400).
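
As a sketch, that region can be inspected in the backup files with
xxd (offsets in decimal, 0x1000 = 4096 and 0x400 = 1024):

  xxd -s 4096 -l 1024 sda.header
  xxd -s 4096 -l 1024 sdb.header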

I'll try to find the source of the problem now.

Milan

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`?
  2011-08-05 14:16         ` Milan Broz
@ 2011-08-05 14:52           ` Arno Wagner
  2011-08-05 14:55             ` Arno Wagner
  2011-08-05 17:47             ` Milan Broz
  2011-08-05 15:02           ` Paul Menzel
  2011-09-01 19:08           ` Paul Menzel
  2 siblings, 2 replies; 12+ messages in thread
From: Arno Wagner @ 2011-08-05 14:52 UTC (permalink / raw)
  To: dm-crypt

On Fri, Aug 05, 2011 at 04:16:47PM +0200, Milan Broz wrote:
> On 08/05/2011 02:11 PM, Paul Menzel wrote:
> >> No, as from the output above, I do not see the same problem. What
> >> could be the reason for this difference in behaviour?
> > 
> > On #lvm Milan suggested that the problem lies with the new drive
> > having some misalignment
> 
> I have checked the dump and there is clear corruption of first keyslot
> (0x1000 - 0x1400 offset).
> 
> I'll try to find the source of problem now.
> 
> Milan

Hi Milan,

Just a thought: may this be a stray v1.2 RAID/md superblock?
They are at a 4k offset from the device start, according to this:
https://raid.wiki.kernel.org/index.php/RAID_superblock_formats
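
As a sketch, the metadata version and the superblock/data offsets on
the member should be visible with something like:

  mdadm --examine /dev/sda2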

Arno
-- 
Arno Wagner, Dr. sc. techn., Dipl. Inform., CISSP -- Email: arno@wagner.name 
GnuPG:  ID: 1E25338F  FP: 0C30 5782 9D93 F785 E79C  0296 797F 6B50 1E25 338F
----
Cuddly UI's are the manifestation of wishful thinking. -- Dylan Evans

If it's in the news, don't worry about it.  The very definition of 
"news" is "something that hardly ever happens." -- Bruce Schneier 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`?
  2011-08-05 14:52           ` Arno Wagner
@ 2011-08-05 14:55             ` Arno Wagner
  2011-08-05 17:47             ` Milan Broz
  1 sibling, 0 replies; 12+ messages in thread
From: Arno Wagner @ 2011-08-05 14:55 UTC (permalink / raw)
  To: dm-crypt

On Fri, Aug 05, 2011 at 04:52:29PM +0200, Arno Wagner wrote:
> On Fri, Aug 05, 2011 at 04:16:47PM +0200, Milan Broz wrote:
> > On 08/05/2011 02:11 PM, Paul Menzel wrote:
> > >> No, as from the output above, I do not see the same problem. What
> > >> could be the reason for this difference in behaviour?
> > > 
> > > On #lvm Milan suggested that the problem lies with the new drive
> > > having some misalignment
> > 
> > I have checked the dump and there is clear corruption of first keyslot
> > (0x1000 - 0x1400 offset).
> > 
> > I'll try to find the source of problem now.
> > 
> > Milan
> 
> Hi Milan,
> 
> just a thought: May this be a stray v1.2 RAID/md superblock?
> They are at 4k offset from the device start according to this:
> 
> https://raid.wiki.kernel.org/index.php/RAID_superblock_formats
> 

And an additional thought: If using /etc/raidtab instead of
autodetection (or a similar mechanism), is it possible the 
RAID superblock gets rewritten on boot and destroys the 
LUKS keyslot?
 
I have no idea whether this is possible, as I only ever used 
autodetection and distrust distro automagic even in Debian.

Arno
-- 
Arno Wagner, Dr. sc. techn., Dipl. Inform., CISSP -- Email: arno@wagner.name 
GnuPG:  ID: 1E25338F  FP: 0C30 5782 9D93 F785 E79C  0296 797F 6B50 1E25 338F
----
Cuddly UI's are the manifestation of wishful thinking. -- Dylan Evans

If it's in the news, don't worry about it.  The very definition of 
"news" is "something that hardly ever happens." -- Bruce Schneier 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`?
  2011-08-05 14:16         ` Milan Broz
  2011-08-05 14:52           ` Arno Wagner
@ 2011-08-05 15:02           ` Paul Menzel
  2011-08-05 15:08             ` Arno Wagner
  2011-09-01 19:08           ` Paul Menzel
  2 siblings, 1 reply; 12+ messages in thread
From: Paul Menzel @ 2011-08-05 15:02 UTC (permalink / raw)
  To: Milan Broz; +Cc: dm-crypt

2011/8/5 Milan Broz <mbroz@redhat.com>:
> On 08/05/2011 02:11 PM, Paul Menzel wrote:
>>> No, as from the output above, I do not see the same problem. What
>>> could be the reason for this difference in behaviour?
>>
>> On #lvm Milan suggested that the problem lies with the new drive
>> having some misalignment
>
> I have checked the dump and there is clear corruption of first keyslot
> (0x1000 - 0x1400 offset).

Is the key slot corruption the only corruption? So `dd`ing that part
from the old drive to the new (corrupted) drive should have fixed the
LUKS setup, and no other metadata (LVM, ext3) should be affected?

> I'll try to find the source of problem now.

Thank you for your help.

I emphasize these errors again because the RAID1 was still active when
I tried `luksOpen` and the passphrase started to be rejected.
--- dmesg ---
Aug  4 00:16:01 grml kernel: [ 7964.786362] device-mapper: table: 253:0: crypt: Device lookup failed
Aug  4 00:16:01 grml kernel: [ 7964.786367] device-mapper: ioctl: error adding target to table
Aug  4 00:16:01 grml udevd[2409]: inotify_add_watch(6, /dev/dm-0, 10) failed: No such file or directory
Aug  4 00:16:01 grml udevd[2409]: inotify_add_watch(6, /dev/dm-0, 10) failed: No such file or directory

Aug  4 00:17:14 grml kernel: [ 8038.196371] md1: detected capacity change from 1999886286848 to 0
Aug  4 00:17:14 grml kernel: [ 8038.196395] md: md1 stopped.
Aug  4 00:17:14 grml kernel: [ 8038.196407] md: unbind<sda2>
Aug  4 00:17:14 grml kernel: [ 8038.212653] md: export_rdev(sda2)
--- dmesg ---

Additionally, right after that I did `luksHeaderBackup`, and the
checksum of that file

$ md5sum 20110804--new-drive--luksHeaderBackup--sda2--after-command-b
7b897c620776f549324810a8aeb9921e  20110804--new-drive--luksHeaderBackup--sda2--after-command-b

is the same as when doing `luksHeaderRestore` from the old working
drive and then `luksHeaderBackup`.

# md5sum sda.header
7b897c620776f549324810a8aeb9921e  sda.header

So `luksHeaderRestore` does not seem to update it, or it only updates
the parts which were not corrupted (since the passphrase does not work
with it).


Thanks and sorry for stating the obvious,

Paul


PS: I hope someone on linux-raid [1] will shed some light on how the
corruption could have happened.


[1] http://marc.info/?l=linux-raid&m=131248606026407&w=2

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`?
  2011-08-05 15:02           ` Paul Menzel
@ 2011-08-05 15:08             ` Arno Wagner
  0 siblings, 0 replies; 12+ messages in thread
From: Arno Wagner @ 2011-08-05 15:08 UTC (permalink / raw)
  To: dm-crypt

On Fri, Aug 05, 2011 at 05:02:13PM +0200, Paul Menzel wrote:
> 2011/8/5 Milan Broz <mbroz@redhat.com>:
> > On 08/05/2011 02:11 PM, Paul Menzel wrote:
> >>> No, as from the output above, I do not see the same problem. What
> >>> could be the reason for this difference in behaviour?
> >>
> >> On #lvm Milan suggested that the problem lies with the new drive
> >> having some misalignment
> >
> > I have checked the dump and there is clear corruption of first keyslot
> > (0x1000 - 0x1400 offset).
> 
> Is the key slot corruption the only corruption? So `dd`ing the part
> from the old drive to the new (corrupted) drive should have fixed the
> LUKS setup and no other metadata (LVM, ext3) should be influenced?
> 
> > I'll try to find the source of problem now.
> 
> Thank you for your help.
> 
> I emphasize again these errors because the RAID1 was still active when
> I tried `luksOpen` and the passphrases started to be declined.

The RAID superblocks get rewritten frequently. There is an
"event count" in there that is used to detect when a disk fell out
of the RAID (it will then have an older event counter).

This may trash the keyslot if you wrote the header to the underlying
device rather than the RAID device, or if there is some device overlap.

In fact that explanation now sounds most likely to me.
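
As a sketch, and assuming the array is /dev/md1, which device actually
carries the LUKS header (and whether the member is still assembled)
could be checked with:

  cat /proc/mdstat
  cryptsetup isLuks /dev/md1  && echo "LUKS header on the RAID device"
  cryptsetup isLuks /dev/sda2 && echo "LUKS header directly on the member"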

Arno




 
> --- dmesg ---
> Aug  4 00:16:01 grml kernel: [ 7964.786362] device-mapper:
> table: 253:0: crypt: Device lookup failed
>        Aug  4 00:16:01 grml kernel: [ 7964.786367] device-mapper:
> ioctl: error adding target to table
>        Aug  4 00:16:01 grml udevd[2409]: inotify_add_watch(6,
> /dev/dm-0, 10) failed: No such file or directory
>        Aug  4 00:16:01 grml udevd[2409]: inotify_add_watch(6,
> /dev/dm-0, 10) failed: No such file or directory
> 
>        Aug  4 00:17:14 grml kernel: [ 8038.196371] md1: detected
> capacity change from 1999886286848 to 0
>        Aug  4 00:17:14 grml kernel: [ 8038.196395] md: md1 stopped.
>        Aug  4 00:17:14 grml kernel: [ 8038.196407] md: unbind<sda2>
>        Aug  4 00:17:14 grml kernel: [ 8038.212653] md: export_rdev(sda2)
> --- dmesg ---
> 
> Additionally right after that I did `luksHeaderBackup` and the
> checksum of that file
> 
> $ md5sum 20110804--new-drive--luksHeaderBackup--sda2--after-command-b
> 7b897c620776f549324810a8aeb9921e
> 20110804--new-drive--luksHeaderBackup--sda2--after-command-b
> 
> is the same as when doing `luksHeaderRestore` from the old working
> drive and then `luksHeaderBackup`.
> 
> # md5sum sda.header
> 7b897c620776f549324810a8aeb9921e  sda.header
> 
> So `luksHeaderRestore` does not seem to update it or only parts which
> were not corrupted (since the passphrase does not work with it).
> 
> 
> Thanks and sorry for stating the obvious,
> 
> Paul
> 
> 
> PS: I hope, someone on linux-raid will shed some light into how the
> corruption could have happened.
> 
> 
> [1] http://marc.info/?l=linux-raid&m=131248606026407&w=2

-- 
Arno Wagner, Dr. sc. techn., Dipl. Inform., CISSP -- Email: arno@wagner.name 
GnuPG:  ID: 1E25338F  FP: 0C30 5782 9D93 F785 E79C  0296 797F 6B50 1E25 338F
----
Cuddly UI's are the manifestation of wishful thinking. -- Dylan Evans

If it's in the news, don't worry about it.  The very definition of 
"news" is "something that hardly ever happens." -- Bruce Schneier 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`?
  2011-08-05 14:52           ` Arno Wagner
  2011-08-05 14:55             ` Arno Wagner
@ 2011-08-05 17:47             ` Milan Broz
  1 sibling, 0 replies; 12+ messages in thread
From: Milan Broz @ 2011-08-05 17:47 UTC (permalink / raw)
  To: dm-crypt

On 08/05/2011 04:52 PM, Arno Wagner wrote:
> On Fri, Aug 05, 2011 at 04:16:47PM +0200, Milan Broz wrote:
>> On 08/05/2011 02:11 PM, Paul Menzel wrote:
>>>> No, as from the output above, I do not see the same problem. What
>>>> could be the reason for this difference in behaviour?
>>>
>>> On #lvm Milan suggested that the problem lies with the new drive
>>> having some misalignment
>>
>> I have checked the dump and there is clear corruption of first keyslot
>> (0x1000 - 0x1400 offset).
>>
>> I'll try to find the source of problem now.
>>
>> Milan
> 
> Hi Milan,
> 
> just a thought: May this be a stray v1.2 RAID/md superblock?
> They are at 4k offset from the device start according to this:

Yes, but it is not the superblock, it is the RAID1 bitmap. Even mdadm
--zero-superblock keeps the RAID1 bitmap intact.
With MD format 1.2 that is exactly the area where I see the corruption now.

So, the first question: what did "cat /proc/mdstat" show when you did the
luksHeaderRestore (which apparently failed)? Was that drive still in use
in some RAID?
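
In other words, as a sketch, the member should be out of any running
array before its header is touched (assuming md1 is the array using
sda2):

  cat /proc/mdstat
  mdadm --stop /dev/md1
  cryptsetup luksHeaderRestore /dev/sda2 --header-backup-file sdb.header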

Milan

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`?
  2011-08-05 14:16         ` Milan Broz
  2011-08-05 14:52           ` Arno Wagner
  2011-08-05 15:02           ` Paul Menzel
@ 2011-09-01 19:08           ` Paul Menzel
  2 siblings, 0 replies; 12+ messages in thread
From: Paul Menzel @ 2011-09-01 19:08 UTC (permalink / raw)
  To: dm-crypt

[-- Attachment #1: Type: text/plain, Size: 652 bytes --]

Am Freitag, den 05.08.2011, 16:16 +0200 schrieb Milan Broz:
> On 08/05/2011 02:11 PM, Paul Menzel wrote:
> >> No, as from the output above, I do not see the same problem. What
> >> could be the reason for this difference in behaviour?
> > 
> > On #lvm Milan suggested that the problem lies with the new drive
> > having some misalignment
> 
> I have checked the dump and there is clear corruption of first keyslot
> (0x1000 - 0x1400 offset).
> 
> I'll try to find the source of problem now.

Just out of interest, did you find the source of the problem? Skimming
the commit log, I did not see anything related.


Thanks,

Paul

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2011-09-01 19:14 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-08-04 21:31 [dm-crypt] How can a passphrase be incorrect even after `luksHeaderBackup` and `luksHeaderRestore`? Paul Menzel
2011-08-04 23:18 ` Paul Menzel
2011-08-05  2:20   ` Milan Broz
2011-08-05  8:41     ` Paul Menzel
2011-08-05 12:11       ` Paul Menzel
2011-08-05 14:16         ` Milan Broz
2011-08-05 14:52           ` Arno Wagner
2011-08-05 14:55             ` Arno Wagner
2011-08-05 17:47             ` Milan Broz
2011-08-05 15:02           ` Paul Menzel
2011-08-05 15:08             ` Arno Wagner
2011-09-01 19:08           ` Paul Menzel
