All of lore.kernel.org
 help / color / mirror / Atom feed
* Re: [dm-crypt] KISS (was disappearing luks header and other mysteries)
@ 2014-09-16  0:53 Boylan, Ross
  2014-09-16  6:39 ` Heinz Diehl
  0 siblings, 1 reply; 11+ messages in thread
From: Boylan, Ross @ 2014-09-16  0:53 UTC (permalink / raw)
  To: dm-crypt

[switching edresses]
<Arno>
If I see this correcly, you have

1. Partition
2. RAID
3. LVM 
4. LUKS

That is decidedly too many. KISS is not even in the building
anymore with that. I know, likely the distro gave you something 
like this, but really it is a symptom of a failed enginering 
mind-set that keeps stacking up complexity until things fail.
</Arno>

It's not simple regardless of whether or not LVM is in the mix.  It's certainly true that every layer adds complexity, but I don't see a good alternative given my needs, which include regularly growing file systems and creating and deleting underlying devices for file system and virtual disks.

Since I now have the "opportunity" to rebuild the system I can change how I approach things.  But the reasons for the original design still seem valid.

One consideration may be less pressing now: I did RAID over partitions, not RAID over disks, so that there was a way to bootstrap the system start.   grub may not need that; OTOH with GPT the recommendation is to reserve some initial space for the boot loader; if I do that I'll be left with RAID-1 over partitions.

To some extent the layers are the result of the Unix philosophy of having tools focused on one thing; each layer has a different goal.

Obviously something went wrong this time, and since the symptom is at the level of LVM volume groups it's possible it was an LVM bug, perhaps triggered by allocating all available space and writing to it (did that with both volume groups).  But that's speculative, and it seems more likely either the mistake I know I made or some other I don't know about is the cause.  Since the software was old, newer software may resolve some bugs (or introduce others!).

I do plan to stick with 0.90 for RAID.

Ross
 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [dm-crypt] KISS (was disappearing luks header and other mysteries)
  2014-09-16  0:53 [dm-crypt] KISS (was disappearing luks header and other mysteries) Boylan, Ross
@ 2014-09-16  6:39 ` Heinz Diehl
  2014-09-16  8:07   ` Arno Wagner
  0 siblings, 1 reply; 11+ messages in thread
From: Heinz Diehl @ 2014-09-16  6:39 UTC (permalink / raw)
  To: dm-crypt

On 16.09.2014, Boylan, Ross wrote: 

> 1. Partition
> 2. RAID
> 3. LVM 
> 4. LUKS
 
> That is decidedly too many. KISS is not even in the building
> anymore with that.

It is. Every single process does one thing. The problem is that most
of the distributions out there automatically install LVM. In my case,
I always chose four primary partitions manually, because they fit my
needs and are simple to manage, while not adding more complexity than
neccessary (/, /boot, /home, swap).

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [dm-crypt] KISS (was disappearing luks header and other mysteries)
  2014-09-16  6:39 ` Heinz Diehl
@ 2014-09-16  8:07   ` Arno Wagner
  2014-09-20  0:29     ` Sven Eschenberg
  0 siblings, 1 reply; 11+ messages in thread
From: Arno Wagner @ 2014-09-16  8:07 UTC (permalink / raw)
  To: dm-crypt

On Tue, Sep 16, 2014 at 08:39:45 CEST, Heinz Diehl wrote:
> On 16.09.2014, Boylan, Ross wrote: 
> 
> > 1. Partition
> > 2. RAID
> > 3. LVM 
> > 4. LUKS
>  
> > That is decidedly too many. KISS is not even in the building
> > anymore with that.
> 
> It is. Every single process does one thing. The problem is that most
> of the distributions out there automatically install LVM. In my case,
> I always chose four primary partitions manually, because they fit my
> needs and are simple to manage, while not adding more complexity than
> neccessary (/, /boot, /home, swap).

The primary indicator that it is too complex is that debugging
this fails. There is siome modern "engineering" faction that 
likes to pile up complexity until things start to fail. This is
a symptom.

Arno
-- 
Arno Wagner,     Dr. sc. techn., Dipl. Inform.,    Email: arno@wagner.name
GnuPG: ID: CB5D9718  FP: 12D6 C03B 1B30 33BB 13CF  B774 E35C 5FA1 CB5D 9718
----
A good decision is based on knowledge and not on numbers. -- Plato

If it's in the news, don't worry about it.  The very definition of 
"news" is "something that hardly ever happens." -- Bruce Schneier

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [dm-crypt] KISS (was disappearing luks header and other mysteries)
  2014-09-16  8:07   ` Arno Wagner
@ 2014-09-20  0:29     ` Sven Eschenberg
  2014-09-21  9:58       ` Arno Wagner
  0 siblings, 1 reply; 11+ messages in thread
From: Sven Eschenberg @ 2014-09-20  0:29 UTC (permalink / raw)
  To: dm-crypt

Well, it is not THAT easy.

If you want resilience/availability, you'll need RAID. Now what do you put
ontop of the RAID when you need to slice it? Put a disklabel/partition on
top of it and stick with a static setup or use LVM which can span multiple
RAIDs (and types) supports snapshotting etc. . Depending on your needs and
usage you will end up with LVM in the end. If you want encryption, you'll
need a crypto layer (or you put it in the FS alongside volume slicing).
Partitions underaneath the RAID, not necessary if the RAID implementation
can subslice physical devices and arrange for different levels on the same
disk. Except unfortunately, when you need a bootloader.

I don't see any alternative which would be KISS enough, except merging the
layers to avoid collissions due to stacking order etc. . Simple usage and
debugging for the user, but the actual single merged layer would be
anything but KISS.

Regards

-Sven

On Tue, September 16, 2014 10:07, Arno Wagner wrote:
> On Tue, Sep 16, 2014 at 08:39:45 CEST, Heinz Diehl wrote:
>> On 16.09.2014, Boylan, Ross wrote:
>>
>> > 1. Partition
>> > 2. RAID
>> > 3. LVM
>> > 4. LUKS
>>
>> > That is decidedly too many. KISS is not even in the building
>> > anymore with that.
>>
>> It is. Every single process does one thing. The problem is that most
>> of the distributions out there automatically install LVM. In my case,
>> I always chose four primary partitions manually, because they fit my
>> needs and are simple to manage, while not adding more complexity than
>> neccessary (/, /boot, /home, swap).
>
> The primary indicator that it is too complex is that debugging
> this fails. There is siome modern "engineering" faction that
> likes to pile up complexity until things start to fail. This is
> a symptom.
>
> Arno
> --
> Arno Wagner,     Dr. sc. techn., Dipl. Inform.,    Email: arno@wagner.name
> GnuPG: ID: CB5D9718  FP: 12D6 C03B 1B30 33BB 13CF  B774 E35C 5FA1 CB5D
> 9718
> ----
> A good decision is based on knowledge and not on numbers. -- Plato
>
> If it's in the news, don't worry about it.  The very definition of
> "news" is "something that hardly ever happens." -- Bruce Schneier
> _______________________________________________
> dm-crypt mailing list
> dm-crypt@saout.de
> http://www.saout.de/mailman/listinfo/dm-crypt
>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [dm-crypt] KISS (was disappearing luks header and other mysteries)
  2014-09-20  0:29     ` Sven Eschenberg
@ 2014-09-21  9:58       ` Arno Wagner
  2014-09-21 14:29         ` Marc Ballarin
  2014-09-21 14:51         ` Sven Eschenberg
  0 siblings, 2 replies; 11+ messages in thread
From: Arno Wagner @ 2014-09-21  9:58 UTC (permalink / raw)
  To: dm-crypt

On Sat, Sep 20, 2014 at 02:29:43 CEST, Sven Eschenberg wrote:
> Well, it is not THAT easy.

Actially it is.
 
> If you want resilience/availability, you'll need RAID. Now what do you put
> ontop of the RAID when you need to slice it? 

And there the desaster starts: Don't slice RAID. It isnot a good 
idea.


> Put a disklabel/partition on
> top of it and stick with a static setup or use LVM which can span multiple
> RAIDs (and types) supports snapshotting etc. . Depending on your needs and
> usage you will end up with LVM in the end. If you want encryption, you'll
> need a crypto layer (or you put it in the FS alongside volume slicing).
> Partitions underaneath the RAID, not necessary if the RAID implementation
> can subslice physical devices and arrange for different levels on the same
> disk. Except unfortunately, when you need a bootloader.
> 
> I don't see any alternative which would be KISS enough, except merging the
> layers to avoid collissions due to stacking order etc. . Simple usage and
> debugging for the user, but the actual single merged layer would be
> anything but KISS.

You miss one thing: LVM breaks layereing and rather badly so. That
is a deadly sin. Partitioning should only ever been done on
monolithic devices. There is a good reason for that, namely that
parition-raid, filesystems and LUKS all respect partitioning per
default, and hence it actually takes work to break the container 
structure.

LVM rides all over that and hence it is absolutely no surprise
at all that people keep breaking things using it. It is like
a chainsaw without safety features. Until those safety-features
are present and work reliably, LVM should be avoided in all 
situation where there is an alternative. There almost always is.

But please, be my guest shooting yourself in the foot all
you like. I will just not refrain from telling you "I told
you so".


Arno

> Regards
> 
> -Sven
> 
> On Tue, September 16, 2014 10:07, Arno Wagner wrote:
> > On Tue, Sep 16, 2014 at 08:39:45 CEST, Heinz Diehl wrote:
> >> On 16.09.2014, Boylan, Ross wrote:
> >>
> >> > 1. Partition
> >> > 2. RAID
> >> > 3. LVM
> >> > 4. LUKS
> >>
> >> > That is decidedly too many. KISS is not even in the building
> >> > anymore with that.
> >>
> >> It is. Every single process does one thing. The problem is that most
> >> of the distributions out there automatically install LVM. In my case,
> >> I always chose four primary partitions manually, because they fit my
> >> needs and are simple to manage, while not adding more complexity than
> >> neccessary (/, /boot, /home, swap).
> >
> > The primary indicator that it is too complex is that debugging
> > this fails. There is siome modern "engineering" faction that
> > likes to pile up complexity until things start to fail. This is
> > a symptom.
> >
> > Arno
> > --
> > Arno Wagner,     Dr. sc. techn., Dipl. Inform.,    Email: arno@wagner.name
> > GnuPG: ID: CB5D9718  FP: 12D6 C03B 1B30 33BB 13CF  B774 E35C 5FA1 CB5D
> > 9718
> > ----
> > A good decision is based on knowledge and not on numbers. -- Plato
> >
> > If it's in the news, don't worry about it.  The very definition of
> > "news" is "something that hardly ever happens." -- Bruce Schneier
> > _______________________________________________
> > dm-crypt mailing list
> > dm-crypt@saout.de
> > http://www.saout.de/mailman/listinfo/dm-crypt
> >
> 
> 
> _______________________________________________
> dm-crypt mailing list
> dm-crypt@saout.de
> http://www.saout.de/mailman/listinfo/dm-crypt

-- 
Arno Wagner,     Dr. sc. techn., Dipl. Inform.,    Email: arno@wagner.name
GnuPG: ID: CB5D9718  FP: 12D6 C03B 1B30 33BB 13CF  B774 E35C 5FA1 CB5D 9718
----
A good decision is based on knowledge and not on numbers. -- Plato

If it's in the news, don't worry about it.  The very definition of 
"news" is "something that hardly ever happens." -- Bruce Schneier

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [dm-crypt] KISS (was disappearing luks header and other mysteries)
  2014-09-21  9:58       ` Arno Wagner
@ 2014-09-21 14:29         ` Marc Ballarin
  2014-09-21 15:38           ` Sven Eschenberg
  2014-09-22  9:14           ` Arno Wagner
  2014-09-21 14:51         ` Sven Eschenberg
  1 sibling, 2 replies; 11+ messages in thread
From: Marc Ballarin @ 2014-09-21 14:29 UTC (permalink / raw)
  To: dm-crypt

Am 21.09.2014 um 11:58 schrieb Arno Wagner:
> On Sat, Sep 20, 2014 at 02:29:43 CEST, Sven Eschenberg wrote:
>> Well, it is not THAT easy.
> Actially it is.
>  
>> If you want resilience/availability, you'll need RAID. Now what do you put
>> ontop of the RAID when you need to slice it? 
> And there the desaster starts: Don't slice RAID. It isnot a good 
> idea.
>
>
>> Put a disklabel/partition on
>> top of it and stick with a static setup or use LVM which can span multiple
>> RAIDs (and types) supports snapshotting etc. . Depending on your needs and
>> usage you will end up with LVM in the end. If you want encryption, you'll
>> need a crypto layer (or you put it in the FS alongside volume slicing).
>> Partitions underaneath the RAID, not necessary if the RAID implementation
>> can subslice physical devices and arrange for different levels on the same
>> disk. Except unfortunately, when you need a bootloader.
>>
>> I don't see any alternative which would be KISS enough, except merging the
>> layers to avoid collissions due to stacking order etc. . Simple usage and
>> debugging for the user, but the actual single merged layer would be
>> anything but KISS.
> You miss one thing: LVM breaks layereing and rather badly so. That
> is a deadly sin. Partitioning should only ever been done on
> monolithic devices. There is a good reason for that, namely that
> parition-raid, filesystems and LUKS all respect partitioning per
> default, and hence it actually takes work to break the container 
> structure.

Hi,

I don't see how LVM breaks layering. In theory it replaces partitioning,
but in practice it is still a very good idea to use one single partition
per visible disk as a (more or less) universally accepted way to say
"there is something here, stay away!". The same applies to LUKS or plain
fiilesystems. No reason to put them on whole disks.
The megabyte or so that you sacrifice for the partition table (plus
alignment) is well spent. Partitions do not cause any further overhead,
as unlike device mapper, they do not add a layer to the storage stack
(from a users POV they do, but not from the kernel's).

Note that there is little reason to use mdraid for data volumes nowadays
(that includes "/" when using a proper initramfs). LVM can handle this
just fine and unlike mdadm has not seen any major metadata changes, or
even metadata location changes, in the last years. But I'm not sure, it
can offer redundancy on boot devices. In theory it should, if the boot
loader knows how to handle it, but I have never tested it. This is
basically the "merging of layers" that Sven talked about.
Btrfs and ZFS push this even further, and while they are complex beasts,
they actually eliminate a lot of complexity for applications and users.
Just look at how simple, generic and cheap it becomes to create a
consistent backup by using temporary snapshots, or to preserve old
versions by using long lived snapshots. This can replace application
specific backup solutions, that cost an insane amount of money and whose
user interfaces are based on the principles of Discordianism (so that
training becomes mandatory).

Also: Stay away from tools like gparted or parted. Resizing and, above
all, moving volumes is bound to cause problems. For example, looking at
John Wells issue from august 18th (especially mail
CADt3ZtscbX-rmMt++aXme9Oiu3sxiBW_MD_CGJM_b=t+iMaerQ), the most likely
culprit really wasn't LVM, but parted. It seems to have set up scratch
space where it should not have.
Once resizing or volume deletions/additions are necessary, LVM is
actually the much simpler and more robust solution. Resizing as well as
deletions and additions in LVM are well defined, robust and even
undoable (as long as the filesystem was not adjusted/created). At work,
we use that on 10,000s of systems.

Lastly, it should be noted, that complex storage stacks like
MD-RAID->LVM->LUKS->(older)XFS can have reliability issues due to stack
exhaustion (you can make it even worse by adding iSCSI, virtio,
multi-path and many other things to your storage stack). When and if
problems occur, depends strongly on the architecture, low-level drivers
involved  and the kernel version, but it is likely to happen at some
point. Kernel 3.15 defused this, by doubling the stack size on x86_64.
(btw: That, and not bad memory, might actually be the most common cause
behind FAQ item 4.3).

Regards,
Marc

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [dm-crypt] KISS (was disappearing luks header and other mysteries)
  2014-09-21  9:58       ` Arno Wagner
  2014-09-21 14:29         ` Marc Ballarin
@ 2014-09-21 14:51         ` Sven Eschenberg
  2014-09-22  9:41           ` Arno Wagner
  1 sibling, 1 reply; 11+ messages in thread
From: Sven Eschenberg @ 2014-09-21 14:51 UTC (permalink / raw)
  To: dm-crypt

Hi Arno,

On Sun, September 21, 2014 11:58, Arno Wagner wrote:
> On Sat, Sep 20, 2014 at 02:29:43 CEST, Sven Eschenberg wrote:
>> Well, it is not THAT easy.
>
> Actially it is.
>
>> If you want resilience/availability, you'll need RAID. Now what do you
>> put
>> ontop of the RAID when you need to slice it?
>
> And there the desaster starts: Don't slice RAID. It isnot a good
> idea.

While in principle this is true, in practise you cannot have different
filesystems on the same RAID then in such a setup. So you'll need as many
RAIDs as filesystems. Which in turn means you will have to go for RAID on
partitions in respect to current disk sizes.

The second aspect I mentioned is spanning filesystems over RAIDs. The
reasonable number of disks in a single RAID is quite limited and as such
really huge filesystems need to span multiple RAIDs. I know, as long as
single files don't exceed the size of a reasonable RAID you could still
use multiple FSes.

>
>
>> Put a disklabel/partition on
>> top of it and stick with a static setup or use LVM which can span
>> multiple
>> RAIDs (and types) supports snapshotting etc. . Depending on your needs
>> and
>> usage you will end up with LVM in the end. If you want encryption,
>> you'll
>> need a crypto layer (or you put it in the FS alongside volume slicing).
>> Partitions underaneath the RAID, not necessary if the RAID
>> implementation
>> can subslice physical devices and arrange for different levels on the
>> same
>> disk. Except unfortunately, when you need a bootloader.
>>
>> I don't see any alternative which would be KISS enough, except merging
>> the
>> layers to avoid collissions due to stacking order etc. . Simple usage
>> and
>> debugging for the user, but the actual single merged layer would be
>> anything but KISS.
>
> You miss one thing: LVM breaks layereing and rather badly so. That
> is a deadly sin. Partitioning should only ever been done on
> monolithic devices. There is a good reason for that, namely that
> parition-raid, filesystems and LUKS all respect partitioning per
> default, and hence it actually takes work to break the container
> structure.

That is true, usually slicing, RAIDs and subvolumes are all part of the
RAID layer and as such RAID subvolumes are monolithic devices from an OS
point of view (read with HW-RAID HBAs). AFAIK with DDF metadata mdraid
takes this path and LVM could (except for spanning/snapshotting) be taken
out of the equation.

>
> LVM rides all over that and hence it is absolutely no surprise
> at all that people keep breaking things using it. It is like
> a chainsaw without safety features. Until those safety-features
> are present and work reliably, LVM should be avoided in all
> situation where there is an alternative. There almost always is.
>

I doubt you'll ever get foolproofness and sophistication/flexibility at
the same time, just look at cryptsetup and the libgcrypt/whirlpool issues.
Foolproof mostly means lack of choise or 'features' ;-).

> But please, be my guest shooting yourself in the foot all
> you like. I will just not refrain from telling you "I told
> you so".

In a way you are right, then again, at some point in time, you'll let kids
use forks, knives and fire, you know ;-).

>
>
> Arno
>

Regards

-Sven

>> Regards
>>
>> -Sven
>>
>> On Tue, September 16, 2014 10:07, Arno Wagner wrote:
>> > On Tue, Sep 16, 2014 at 08:39:45 CEST, Heinz Diehl wrote:
>> >> On 16.09.2014, Boylan, Ross wrote:
>> >>
>> >> > 1. Partition
>> >> > 2. RAID
>> >> > 3. LVM
>> >> > 4. LUKS
>> >>
>> >> > That is decidedly too many. KISS is not even in the building
>> >> > anymore with that.
>> >>
>> >> It is. Every single process does one thing. The problem is that most
>> >> of the distributions out there automatically install LVM. In my case,
>> >> I always chose four primary partitions manually, because they fit my
>> >> needs and are simple to manage, while not adding more complexity than
>> >> neccessary (/, /boot, /home, swap).
>> >
>> > The primary indicator that it is too complex is that debugging
>> > this fails. There is siome modern "engineering" faction that
>> > likes to pile up complexity until things start to fail. This is
>> > a symptom.
>> >
>> > Arno
>> > --
>> > Arno Wagner,     Dr. sc. techn., Dipl. Inform.,    Email:
>> arno@wagner.name
>> > GnuPG: ID: CB5D9718  FP: 12D6 C03B 1B30 33BB 13CF  B774 E35C 5FA1 CB5D
>> > 9718
>> > ----
>> > A good decision is based on knowledge and not on numbers. -- Plato
>> >
>> > If it's in the news, don't worry about it.  The very definition of
>> > "news" is "something that hardly ever happens." -- Bruce Schneier
>> > _______________________________________________
>> > dm-crypt mailing list
>> > dm-crypt@saout.de
>> > http://www.saout.de/mailman/listinfo/dm-crypt
>> >
>>
>>
>> _______________________________________________
>> dm-crypt mailing list
>> dm-crypt@saout.de
>> http://www.saout.de/mailman/listinfo/dm-crypt
>
> --
> Arno Wagner,     Dr. sc. techn., Dipl. Inform.,    Email: arno@wagner.name
> GnuPG: ID: CB5D9718  FP: 12D6 C03B 1B30 33BB 13CF  B774 E35C 5FA1 CB5D
> 9718
> ----
> A good decision is based on knowledge and not on numbers. -- Plato
>
> If it's in the news, don't worry about it.  The very definition of
> "news" is "something that hardly ever happens." -- Bruce Schneier
> _______________________________________________
> dm-crypt mailing list
> dm-crypt@saout.de
> http://www.saout.de/mailman/listinfo/dm-crypt
>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [dm-crypt] KISS (was disappearing luks header and other mysteries)
  2014-09-21 14:29         ` Marc Ballarin
@ 2014-09-21 15:38           ` Sven Eschenberg
  2014-09-22  9:14           ` Arno Wagner
  1 sibling, 0 replies; 11+ messages in thread
From: Sven Eschenberg @ 2014-09-21 15:38 UTC (permalink / raw)
  To: dm-crypt

On Sun, September 21, 2014 16:29, Marc Ballarin wrote:
> Am 21.09.2014 um 11:58 schrieb Arno Wagner:
>> On Sat, Sep 20, 2014 at 02:29:43 CEST, Sven Eschenberg wrote:
>>> Well, it is not THAT easy.
>> Actially it is.
>>
>>> If you want resilience/availability, you'll need RAID. Now what do you
>>> put
>>> ontop of the RAID when you need to slice it?
>> And there the desaster starts: Don't slice RAID. It isnot a good
>> idea.
>>
>>
>>> Put a disklabel/partition on
>>> top of it and stick with a static setup or use LVM which can span
>>> multiple
>>> RAIDs (and types) supports snapshotting etc. . Depending on your needs
>>> and
>>> usage you will end up with LVM in the end. If you want encryption,
>>> you'll
>>> need a crypto layer (or you put it in the FS alongside volume slicing).
>>> Partitions underaneath the RAID, not necessary if the RAID
>>> implementation
>>> can subslice physical devices and arrange for different levels on the
>>> same
>>> disk. Except unfortunately, when you need a bootloader.
>>>
>>> I don't see any alternative which would be KISS enough, except merging
>>> the
>>> layers to avoid collissions due to stacking order etc. . Simple usage
>>> and
>>> debugging for the user, but the actual single merged layer would be
>>> anything but KISS.
>> You miss one thing: LVM breaks layereing and rather badly so. That
>> is a deadly sin. Partitioning should only ever been done on
>> monolithic devices. There is a good reason for that, namely that
>> parition-raid, filesystems and LUKS all respect partitioning per
>> default, and hence it actually takes work to break the container
>> structure.
>
> Hi,
>
> I don't see how LVM breaks layering. In theory it replaces partitioning,
> but in practice it is still a very good idea to use one single partition
> per visible disk as a (more or less) universally accepted way to say
> "there is something here, stay away!". The same applies to LUKS or plain
> fiilesystems. No reason to put them on whole disks.
> The megabyte or so that you sacrifice for the partition table (plus
> alignment) is well spent. Partitions do not cause any further overhead,
> as unlike device mapper, they do not add a layer to the storage stack
> (from a users POV they do, but not from the kernel's).

I always wondered why there werem't arbitary slicing schemes. In the very
beginning firmware blindly loaded code from sector 0, why waste codespace
for slicing metadata? Admitted, having it alongside the code data made
things a little easier decades back.

>
> Note that there is little reason to use mdraid for data volumes nowadays
> (that includes "/" when using a proper initramfs). LVM can handle this
> just fine and unlike mdadm has not seen any major metadata changes, or
> even metadata location changes, in the last years. But I'm not sure, it
> can offer redundancy on boot devices. In theory it should, if the boot
> loader knows how to handle it, but I have never tested it. This is
> basically the "merging of layers" that Sven talked about.

I overlooked that, I guess I'll have to look into this, maybe I can
eliminate mdraid in the long run. The bootloader itself is the problem
here.
If the firmware was extensible in a sane way, you'd add a module that
takes care of reading the metadata and providing access to the actual
bootloader, or you could have the bootloader within the firmware. That's
even true for (U)EFI where extensions (haha) need to reside on an ESP
readable by the firmware. Quite insane.

> Btrfs and ZFS push this even further, and while they are complex beasts,
> they actually eliminate a lot of complexity for applications and users.
> Just look at how simple, generic and cheap it becomes to create a
> consistent backup by using temporary snapshots, or to preserve old
> versions by using long lived snapshots. This can replace application
> specific backup solutions, that cost an insane amount of money and whose
> user interfaces are based on the principles of Discordianism (so that
> training becomes mandatory).
>
> Also: Stay away from tools like gparted or parted. Resizing and, above
> all, moving volumes is bound to cause problems. For example, looking at
> John Wells issue from august 18th (especially mail
> CADt3ZtscbX-rmMt++aXme9Oiu3sxiBW_MD_CGJM_b=t+iMaerQ), the most likely
> culprit really wasn't LVM, but parted. It seems to have set up scratch
> space where it should not have.
> Once resizing or volume deletions/additions are necessary, LVM is
> actually the much simpler and more robust solution. Resizing as well as
> deletions and additions in LVM are well defined, robust and even
> undoable (as long as the filesystem was not adjusted/created). At work,
> we use that on 10,000s of systems.

Until now I always was quite lucky doing resizing and other
transformations (including parted operations and hex editing metadata),
but indeed there's no safety-net and no double bottom. Once you go there,
there's no turning back when things start to wreck.

BTW, it has been quite some time since I deeply looked into LVM, can LVM
nowadays 'defrag' LVs? Say I grow LVs and they reside on different PE
groups on the same PV, can I merge these groups down to get a single
continuous area?

>
> Lastly, it should be noted, that complex storage stacks like
> MD-RAID->LVM->LUKS->(older)XFS can have reliability issues due to stack
> exhaustion (you can make it even worse by adding iSCSI, virtio,
> multi-path and many other things to your storage stack). When and if
> problems occur, depends strongly on the architecture, low-level drivers
> involved  and the kernel version, but it is likely to happen at some
> point. Kernel 3.15 defused this, by doubling the stack size on x86_64.
> (btw: That, and not bad memory, might actually be the most common cause
> behind FAQ item 4.3).

That's an interesting bit of info, luckily I never ran into this...

>
> Regards,
> Marc

Regards

-Sven

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [dm-crypt] KISS (was disappearing luks header and other mysteries)
  2014-09-21 14:29         ` Marc Ballarin
  2014-09-21 15:38           ` Sven Eschenberg
@ 2014-09-22  9:14           ` Arno Wagner
  1 sibling, 0 replies; 11+ messages in thread
From: Arno Wagner @ 2014-09-22  9:14 UTC (permalink / raw)
  To: dm-crypt

On Sun, Sep 21, 2014 at 16:29:28 CEST, Marc Ballarin wrote:
> Am 21.09.2014 um 11:58 schrieb Arno Wagner:
> > On Sat, Sep 20, 2014 at 02:29:43 CEST, Sven Eschenberg wrote:
> >> Well, it is not THAT easy.
> > Actially it is.
> >  
> >> If you want resilience/availability, you'll need RAID. Now what do you put
> >> ontop of the RAID when you need to slice it? 
> > And there the desaster starts: Don't slice RAID. It isnot a good 
> > idea.
> >
> >
> >> Put a disklabel/partition on
> >> top of it and stick with a static setup or use LVM which can span multiple
> >> RAIDs (and types) supports snapshotting etc. . Depending on your needs and
> >> usage you will end up with LVM in the end. If you want encryption, you'll
> >> need a crypto layer (or you put it in the FS alongside volume slicing).
> >> Partitions underaneath the RAID, not necessary if the RAID implementation
> >> can subslice physical devices and arrange for different levels on the same
> >> disk. Except unfortunately, when you need a bootloader.
> >>
> >> I don't see any alternative which would be KISS enough, except merging the
> >> layers to avoid collissions due to stacking order etc. . Simple usage and
> >> debugging for the user, but the actual single merged layer would be
> >> anything but KISS.
> > You miss one thing: LVM breaks layereing and rather badly so. That
> > is a deadly sin. Partitioning should only ever been done on
> > monolithic devices. There is a good reason for that, namely that
> > parition-raid, filesystems and LUKS all respect partitioning per
> > default, and hence it actually takes work to break the container 
> > structure.
> 
> Hi,
> 
> I don't see how LVM breaks layering. 

Seriously? LVM allows you to place partitions into partitions.
If yod do not see how that breaks layering, I don't know how
to explain it.

> In theory it replaces partitioning,
> but in practice it is still a very good idea to use one single partition
> per visible disk as a (more or less) universally accepted way to say
> "there is something here, stay away!". The same applies to LUKS or plain
> fiilesystems. No reason to put them on whole disks.
> The megabyte or so that you sacrifice for the partition table (plus
> alignment) is well spent. Partitions do not cause any further overhead,
> as unlike device mapper, they do not add a layer to the storage stack
> (from a users POV they do, but not from the kernel's).
> 
> Note that there is little reason to use mdraid for data volumes nowadays
> (that includes "/" when using a proper initramfs).

There are a lot ov veru good reasons. Simplicity, reliability,
stability, clarity, etc.

>  LVM can handle this
> just fine and unlike mdadm has not seen any major metadata changes, or
> even metadata location changes, in the last years. 

I agree that metadata formats 1.0, 1.1 and 1.2 dor mdraid are
screwed up and the designers have failed. Format 0.90 is entirely
fine though, if unsuitable for very large installations.

> But I'm not sure, it
> can offer redundancy on boot devices. In theory it should, if the boot
> loader knows how to handle it, but I have never tested it. This is
> basically the "merging of layers" that Sven talked about.
> Btrfs and ZFS push this even further, and while they are complex beasts,
> they actually eliminate a lot of complexity for applications and users.

They bring in "magic". That is fine if the user is clueless, like
the typical windows user for example. It is a catastrophe once
things break and a major annoyance for non-clueless users. There
are good resaons this functionality is kept in seperate layers.
We will see whether these things manage to actually pull it off
or not, but I am somewhat doubtful fot YFS and highly doubtfult
for BTRFS. It stinks of the "second system" effect, where designers
that think they have mastered the problem after their first system
throw in everything and the kitchen sink. Usually complex monsters
like that never manage to get good stability dues to complexity. 

> Just look at how simple, generic and cheap it becomes to create a
> consistent backup by using temporary snapshots, or to preserve old
> versions by using long lived snapshots. 

Sorry, but that is one of Linuses messes: "dump" works fine for
that on basically any Unix and it should do so on Linux, but there
are statements by Linus where he admits to breaking the FS layer
and the possibility to damage even an read-only filesystem with
"dump". I have used dump on Linux for snapshots for about 5 years 
way back without problemns though. These people are reinventing 
the wheel and what they produce is not really better than what 
already existed. And if you really need a "hard" snapshot, just 
use the dmraid layer for that.

> This can replace application
> specific backup solutions, that cost an insane amount of money and whose
> user interfaces are based on the principles of Discordianism (so that
> training becomes mandatory).

No. It cannot. An appplication-specific backup solution needs to 
understand the application. Either you never needed it in the
first place, or you still need it when you have snapshots. Just
freezing an image in time is not a valid way to back-up in
many application-specific scenarios.

> Also: Stay away from tools like gparted or parted. 

Not at all. Unlike the infamous "Partition Magic", (g)parted is
reliable. Of course, if you have an LVM-mess, you may break things
because you do not understand the on-disk structure anymore. 
I have used gparted for years regularly and it never broke one
single thing and never behaved in any surprising fashion.
I don't know where you get this nonsense.

> Resizing and, above
> all, moving volumes is bound to cause problems. For example, looking at
> John Wells issue from august 18th (especially mail
> CADt3ZtscbX-rmMt++aXme9Oiu3sxiBW_MD_CGJM_b=t+iMaerQ), the most likely
> culprit really wasn't LVM, but parted. It seems to have set up scratch
> space where it should not have.

I very much doubt that. parted does not create anything you do
not tell it to. Much more likely, LVM caused the user to not
understand what he was doing, which is the whole point why
I do not like it.

> Once resizing or volume deletions/additions are necessary, LVM is
> actually the much simpler and more robust solution. Resizing as well as
> deletions and additions in LVM are well defined, robust and even
> undoable (as long as the filesystem was not adjusted/created). At work,
> we use that on 10,000s of systems.

Well, once you have a _tested_ operation sequence, LVM gives 
you sort-of storage abstraction, and when you automatize 
things, that is very much worthwhile doing. But the situation is
not what you have when working manually on a single system. 
These two are not comparable at all. For example, when doing 
automatization, you make sure to create your change runbook as 
simple as possible and you will test it. You will not experiment 
on production systems. In essence, you add a whole reliability 
layer manually. That is not the situation you have when working 
manually on one system.

> Lastly, it should be noted, that complex storage stacks like
> MD-RAID->LVM->LUKS->(older)XFS can have reliability issues due to stack
> exhaustion (you can make it even worse by adding iSCSI, virtio,
> multi-path and many other things to your storage stack). When and if
> problems occur, depends strongly on the architecture, low-level drivers
> involved  and the kernel version, but it is likely to happen at some
> point. Kernel 3.15 defused this, by doubling the stack size on x86_64.
> (btw: That, and not bad memory, might actually be the most common cause
> behind FAQ item 4.3).

What is your point? Older XFS was unusable with mdraid, beacause
you could have it need weeks (estimated) when you had a RAID
resync and an XFS check at the same time. But what do you need
XFS and LVM in that stack for? Make that MD-RAID->LUKS->ext2/3
and you get a reliable and stable solution. 

Arno
-- 
Arno Wagner,     Dr. sc. techn., Dipl. Inform.,    Email: arno@wagner.name
GnuPG: ID: CB5D9718  FP: 12D6 C03B 1B30 33BB 13CF  B774 E35C 5FA1 CB5D 9718
----
A good decision is based on knowledge and not on numbers. -- Plato

If it's in the news, don't worry about it.  The very definition of 
"news" is "something that hardly ever happens." -- Bruce Schneier

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [dm-crypt] KISS (was disappearing luks header and other mysteries)
  2014-09-21 14:51         ` Sven Eschenberg
@ 2014-09-22  9:41           ` Arno Wagner
  2014-09-22 18:52             ` Sven Eschenberg
  0 siblings, 1 reply; 11+ messages in thread
From: Arno Wagner @ 2014-09-22  9:41 UTC (permalink / raw)
  To: dm-crypt

On Sun, Sep 21, 2014 at 16:51:09 CEST, Sven Eschenberg wrote:
> Hi Arno,
> 
> On Sun, September 21, 2014 11:58, Arno Wagner wrote:
> > On Sat, Sep 20, 2014 at 02:29:43 CEST, Sven Eschenberg wrote:
> >> Well, it is not THAT easy.
> >
> > Actially it is.
> >
> >> If you want resilience/availability, you'll need RAID. Now what do you
> >> put
> >> ontop of the RAID when you need to slice it?
> >
> > And there the desaster starts: Don't slice RAID. It isnot a good
> > idea.
> 
> While in principle this is true, in practise you cannot have different
> filesystems on the same RAID then in such a setup. So you'll need as many
> RAIDs as filesystems. Which in turn means you will have to go for RAID on
> partitions in respect to current disk sizes.

Yes? So? Is there a problem anywhere here? 
 
> The second aspect I mentioned is spanning filesystems over RAIDs. The
> reasonable number of disks in a single RAID is quite limited and as such
> really huge filesystems need to span multiple RAIDs. I know, as long as
> single files don't exceed the size of a reasonable RAID you could still
> use multiple FSes.

If you do huge storage installations, your needs change. This list
kind of assumes as default that you stay in conventional sizes, say no 
more than 8 disks in an array. Things that you need to do for really 
large storage do not translate well to normal sizes. We can still discuss 
huge installations of course, but please mark this clearly, as users 
that never will have to deal with these may get confused and think 
it applies to them. My personal experience also ends at about 8 disks
per RAID, as that was the maximum I could cram into my research servers
and budget, so please take everything I say without qualification
for storage size to be said in that context.

Now, this is not meant in any way as discrimination against people
that have to deal with huge storage volumes, please feel free to 
discuss anything you want here, but please always state that this is
from a huge-storage perspective so as to not confuse people. The same
applies when you talk about things needed to automatize changes 
for multiple machines, which again is not the "default" perspective
for most people. There, I have a little more experience, as I had
a cluster with 25 machines and that is already enough to not want 
to do anything manually.

And of course, the possibility of EB-sized arrays with hundreds of 
disks does not justify putting LVM on a laptop with a single disk. 
One size does not fit all.

> >
> >
> >> Put a disklabel/partition on
> >> top of it and stick with a static setup or use LVM which can span
> >> multiple
> >> RAIDs (and types) supports snapshotting etc. . Depending on your needs
> >> and
> >> usage you will end up with LVM in the end. If you want encryption,
> >> you'll
> >> need a crypto layer (or you put it in the FS alongside volume slicing).
> >> Partitions underaneath the RAID, not necessary if the RAID
> >> implementation
> >> can subslice physical devices and arrange for different levels on the
> >> same
> >> disk. Except unfortunately, when you need a bootloader.
> >>
> >> I don't see any alternative which would be KISS enough, except merging
> >> the
> >> layers to avoid collissions due to stacking order etc. . Simple usage
> >> and
> >> debugging for the user, but the actual single merged layer would be
> >> anything but KISS.
> >
> > You miss one thing: LVM breaks layereing and rather badly so. That
> > is a deadly sin. Partitioning should only ever been done on
> > monolithic devices. There is a good reason for that, namely that
> > parition-raid, filesystems and LUKS all respect partitioning per
> > default, and hence it actually takes work to break the container
> > structure.
> 
> That is true, usually slicing, RAIDs and subvolumes are all part of the
> RAID layer and as such RAID subvolumes are monolithic devices from an OS
> point of view (read with HW-RAID HBAs). AFAIK with DDF metadata mdraid
> takes this path and LVM could (except for spanning/snapshotting) be taken
> out of the equation.

One problem here is that dmraid on paritions is conceptually 
on the wrong layer compared to hardware RAID. But quite
frankly, hardwre RAID never reached any reasonable degree
of sophistication and was more of a "magic box" solution
that yu could not look into. I do not think there is any problem
doing RAID on partitions and not partitioning the array 
again, but it is different from what people used to hardware
RAID expect.
 
> >
> > LVM rides all over that and hence it is absolutely no surprise
> > at all that people keep breaking things using it. It is like
> > a chainsaw without safety features. Until those safety-features
> > are present and work reliably, LVM should be avoided in all
> > situation where there is an alternative. There almost always is.
> >
> 
> I doubt you'll ever get foolproofness and sophistication/flexibility at
> the same time, just look at cryptsetup and the libgcrypt/whirlpool issues.
> Foolproof mostly means lack of choise or 'features' ;-).

It is a balance. My take is that most people do not need the 
flexibility LVM gives them and at the sametime cannot really 
master its complexity, and then they end up sawing off a foot.
There are cases where you do need it, I do not dispute that.
But an ordinary, self-administrated end-user installation is 
not one of them. And yes, even the "chainsaw without safety
features" has valid applications, but you will never, ever give 
it to a non-expert and you will only use it if there is no
better way.

> > But please, be my guest shooting yourself in the foot all
> > you like. I will just not refrain from telling you "I told
> > you so".
> 
> In a way you are right, then again, at some point in time, you'll let kids
> use forks, knives and fire, you know ;-).

Indeed. But only when you see them being able to handle it. What
I see happeingn is that people keep breaking things with LVM in
situations where there was no need for it in the first place.

Arno

-- 
Arno Wagner,     Dr. sc. techn., Dipl. Inform.,    Email: arno@wagner.name
GnuPG: ID: CB5D9718  FP: 12D6 C03B 1B30 33BB 13CF  B774 E35C 5FA1 CB5D 9718
----
A good decision is based on knowledge and not on numbers. -- Plato

If it's in the news, don't worry about it.  The very definition of 
"news" is "something that hardly ever happens." -- Bruce Schneier

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [dm-crypt] KISS (was disappearing luks header and other mysteries)
  2014-09-22  9:41           ` Arno Wagner
@ 2014-09-22 18:52             ` Sven Eschenberg
  0 siblings, 0 replies; 11+ messages in thread
From: Sven Eschenberg @ 2014-09-22 18:52 UTC (permalink / raw)
  To: dm-crypt

On Mon, September 22, 2014 11:41, Arno Wagner wrote:
> On Sun, Sep 21, 2014 at 16:51:09 CEST, Sven Eschenberg wrote:
>> Hi Arno,
>>
>> On Sun, September 21, 2014 11:58, Arno Wagner wrote:
>> > On Sat, Sep 20, 2014 at 02:29:43 CEST, Sven Eschenberg wrote:
>> >> Well, it is not THAT easy.
>> >
>> > Actially it is.
>> >
>> >> If you want resilience/availability, you'll need RAID. Now what do
>> you
>> >> put
>> >> ontop of the RAID when you need to slice it?
>> >
>> > And there the desaster starts: Don't slice RAID. It isnot a good
>> > idea.
>>
>> While in principle this is true, in practise you cannot have different
>> filesystems on the same RAID then in such a setup. So you'll need as
>> many
>> RAIDs as filesystems. Which in turn means you will have to go for RAID
>> on
>> partitions in respect to current disk sizes.
>
> Yes? So? Is there a problem anywhere here?
>
>> The second aspect I mentioned is spanning filesystems over RAIDs. The
>> reasonable number of disks in a single RAID is quite limited and as such
>> really huge filesystems need to span multiple RAIDs. I know, as long as
>> single files don't exceed the size of a reasonable RAID you could still
>> use multiple FSes.
>
> If you do huge storage installations, your needs change. This list
> kind of assumes as default that you stay in conventional sizes, say no
> more than 8 disks in an array. Things that you need to do for really
> large storage do not translate well to normal sizes. We can still discuss
> huge installations of course, but please mark this clearly, as users
> that never will have to deal with these may get confused and think
> it applies to them. My personal experience also ends at about 8 disks
> per RAID, as that was the maximum I could cram into my research servers
> and budget, so please take everything I say without qualification
> for storage size to be said in that context.
>
> Now, this is not meant in any way as discrimination against people
> that have to deal with huge storage volumes, please feel free to
> discuss anything you want here, but please always state that this is
> from a huge-storage perspective so as to not confuse people. The same
> applies when you talk about things needed to automatize changes
> for multiple machines, which again is not the "default" perspective
> for most people. There, I have a little more experience, as I had
> a cluster with 25 machines and that is already enough to not want
> to do anything manually.
>
> And of course, the possibility of EB-sized arrays with hundreds of
> disks does not justify putting LVM on a laptop with a single disk.
> One size does not fit all.

You are absolutely right, there never is a one size fits all. I really
thought about things in a complete generic way. On a laptop you can always
live easily without snapshotting of any kind (just one example). On a
server snapshotting can be handy, it depends though how open files are
handled. This probably is off topic though and would be quite an intese
and deep discussion I guess.

Anyway, I think many distributions did the LVM thing as a partitioning
replacement scheme that is flexible. This way, you'd open a single
cryptotarget (example), have an LVM on top and then have all different
filesystems in there. The question though is, do you need different
filesystems for /home /usr (you name it) on a laptop? No probably not at
all. Maybe you could even already live with dmcrypt just for /home.

Afterall it most probably was the one size fits all concept that led to
the decision (a wrong one imho, but understandable as there is a single
deployment/setup which eases maintanance for distributors, laziness won I
assume :-) ).
>
>> >
>> >
>> >> Put a disklabel/partition on
>> >> top of it and stick with a static setup or use LVM which can span
>> >> multiple
>> >> RAIDs (and types) supports snapshotting etc. . Depending on your
>> needs
>> >> and
>> >> usage you will end up with LVM in the end. If you want encryption,
>> >> you'll
>> >> need a crypto layer (or you put it in the FS alongside volume
>> slicing).
>> >> Partitions underaneath the RAID, not necessary if the RAID
>> >> implementation
>> >> can subslice physical devices and arrange for different levels on the
>> >> same
>> >> disk. Except unfortunately, when you need a bootloader.
>> >>
>> >> I don't see any alternative which would be KISS enough, except
>> merging
>> >> the
>> >> layers to avoid collissions due to stacking order etc. . Simple usage
>> >> and
>> >> debugging for the user, but the actual single merged layer would be
>> >> anything but KISS.
>> >
>> > You miss one thing: LVM breaks layereing and rather badly so. That
>> > is a deadly sin. Partitioning should only ever been done on
>> > monolithic devices. There is a good reason for that, namely that
>> > parition-raid, filesystems and LUKS all respect partitioning per
>> > default, and hence it actually takes work to break the container
>> > structure.
>>
>> That is true, usually slicing, RAIDs and subvolumes are all part of the
>> RAID layer and as such RAID subvolumes are monolithic devices from an OS
>> point of view (read with HW-RAID HBAs). AFAIK with DDF metadata mdraid
>> takes this path and LVM could (except for spanning/snapshotting) be
>> taken
>> out of the equation.
>
> One problem here is that dmraid on paritions is conceptually
> on the wrong layer compared to hardware RAID. But quite
> frankly, hardwre RAID never reached any reasonable degree
> of sophistication and was more of a "magic box" solution
> that yu could not look into. I do not think there is any problem
> doing RAID on partitions and not partitioning the array
> again, but it is different from what people used to hardware
> RAID expect.

I agree, I do see advantages in RAID over partitions and softraid (and use
it) as it does not need special hardware, saves replacement parts, is
flexible, it is just a pitty it does not basicly give the blackbox
experience esp. within the OS after it was setup. From a normal user's
point of view, after starting a raid over partitions, it would be more
consistent if the inodes for the raid members would magically vanish
instead of only being marked as in use (yes I do see the downside of this
aswell).

And then there is that chipset softraid rake which can drive people nuts.
(conceptually RAID on disks, metadata at end and then GPT: tools don't see
secondary GPT as they can and do access disks individually ...)
I am seeing quite some room for improvements in many places ;-) .

>
>> >
>> > LVM rides all over that and hence it is absolutely no surprise
>> > at all that people keep breaking things using it. It is like
>> > a chainsaw without safety features. Until those safety-features
>> > are present and work reliably, LVM should be avoided in all
>> > situation where there is an alternative. There almost always is.
>> >
>>
>> I doubt you'll ever get foolproofness and sophistication/flexibility at
>> the same time, just look at cryptsetup and the libgcrypt/whirlpool
>> issues.
>> Foolproof mostly means lack of choise or 'features' ;-).
>
> It is a balance. My take is that most people do not need the
> flexibility LVM gives them and at the sametime cannot really
> master its complexity, and then they end up sawing off a foot.
> There are cases where you do need it, I do not dispute that.
> But an ordinary, self-administrated end-user installation is
> not one of them. And yes, even the "chainsaw without safety
> features" has valid applications, but you will never, ever give
> it to a non-expert and you will only use it if there is no
> better way.

Simpyl put: Agreed. As I said, I was reflecting genericly ...
>
>> > But please, be my guest shooting yourself in the foot all
>> > you like. I will just not refrain from telling you "I told
>> > you so".
>>
>> In a way you are right, then again, at some point in time, you'll let
>> kids
>> use forks, knives and fire, you know ;-).
>
> Indeed. But only when you see them being able to handle it. What
> I see happeingn is that people keep breaking things with LVM in
> situations where there was no need for it in the first place.

I agree again. You don't use something just because it is available but
because there is a reasonable need.

>
> Arno
>
> --
> Arno Wagner,     Dr. sc. techn., Dipl. Inform.,    Email: arno@wagner.name
> GnuPG: ID: CB5D9718  FP: 12D6 C03B 1B30 33BB 13CF  B774 E35C 5FA1 CB5D
> 9718
> ----
> A good decision is based on knowledge and not on numbers. -- Plato
>
> If it's in the news, don't worry about it.  The very definition of
> "news" is "something that hardly ever happens." -- Bruce Schneier

Regards

-Sven

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2014-09-22 18:52 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-16  0:53 [dm-crypt] KISS (was disappearing luks header and other mysteries) Boylan, Ross
2014-09-16  6:39 ` Heinz Diehl
2014-09-16  8:07   ` Arno Wagner
2014-09-20  0:29     ` Sven Eschenberg
2014-09-21  9:58       ` Arno Wagner
2014-09-21 14:29         ` Marc Ballarin
2014-09-21 15:38           ` Sven Eschenberg
2014-09-22  9:14           ` Arno Wagner
2014-09-21 14:51         ` Sven Eschenberg
2014-09-22  9:41           ` Arno Wagner
2014-09-22 18:52             ` Sven Eschenberg

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.