* [dm-crypt] some questions and FAQ suggestions
@ 2013-07-07 20:30 Christoph Anton Mitterer
  2013-07-07 20:33 ` Christoph Anton Mitterer
  2013-07-08 17:48 ` Arno Wagner
  0 siblings, 2 replies; 5+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-07 20:30 UTC (permalink / raw)
  To: dm-crypt


Hi Arno, Milan, et al.


I recently asked some questions[0],[1] over at the linux-raid list, and
some of them were about how things behave when one stacks the typical
block layers (MD, dmcrypt, LVM) in different orders. I guess the most
reasonable use cases would be one of these:
physical medium -> MD -> LVM -> dmcrypt        -> filesystem
physical medium -> MD        -> dmcrypt -> LVM -> filesystem
or maybe even:
physical medium -> MD -> LVM -> dmcrypt -> LVM -> filesystem

Optionally there could be partitions between the physical medium and MD
- but I guess this shouldn't really change anything (correct me if I'm
wrong), neither with respect to performance nor to functionality (i.e.
how commands or techniques like TRIM or barriers are passed through).
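For illustration, the second stacking order above could be built roughly
like this (a sketch only; device names, sizes and the RAID level are
hypothetical, and the exact flags should be checked against your
mdadm/cryptsetup/LVM versions):

```shell
# RAID1 across two hypothetical partitions -> /dev/md0
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# dmcrypt (LUKS) directly on top of the MD device
cryptsetup luksFormat /dev/md0
cryptsetup luksOpen /dev/md0 cryptmd

# LVM on top of the opened dmcrypt mapping
pvcreate /dev/mapper/cryptmd
vgcreate vg0 /dev/mapper/cryptmd
lvcreate -L 10G -n root vg0

# filesystem on the logical volume
mkfs.ext4 /dev/vg0/root
```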


Some discussion (especially about performance) arose in [1] with respect
to the (old) fact that dmcrypt used to be single-threaded.
I asked Milan off-list to shed some light on this, which he did[2,3]
(thanks again).


I think most of what he says should be added to the FAQ, for people who
also search for this (perhaps referencing those threads as well), like:

Q1: Does dmcrypt work with DM/block device barriers and filesystem
barriers?
(AFAIU, these are different barrier technologies?)


Q2: Are there any technological/functional/security issues when stacking
dmcrypt with LVM and/or MD (at any order of these)?
I.e. is TRIM supported in any stacking order? Are there any other
subtle/major issues depending on the ordering of these? Or any issues
that could lead to data corruption or out-of-sync RAIDs, whatsoever?
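As a partial answer sketch for the TRIM part of Q2 (assuming cryptsetup
>= 1.4 and kernel >= 3.1; verify the exact options against your
versions): dmcrypt blocks discards by default, so they have to be
enabled explicitly per device, while LVM passes filesystem-level
discards through and only has its own switch for discards it issues
itself:

```shell
# dmcrypt: discards are disabled by default (they can leak usage
# patterns); enable them per device when opening the container
cryptsetup luksOpen --allow-discards /dev/md0 cryptmd

# LVM: issue_discards in /etc/lvm/lvm.conf only controls discards that
# LVM itself issues (e.g. on lvremove); TRIM from the filesystem passes
# through the LV in any case
#   issue_discards = 1

# filesystem: mount with -o discard, or run fstrim periodically
fstrim -v /mnt
```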


Q3: Are there any performance issues when stacking dmcrypt with LVM
and/or MD (in any order of these), assuming that the different layers
have the correct block/chunk/physical extent alignment?
(not sure whether being unaligned to LVM's physical extents would
actually cause trouble or not?)
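One mechanical check that can be done from userspace is whether a
layer's start offset is a whole multiple of the chunk size of the layer
below it; a small sketch with made-up numbers:

```shell
# Hypothetical values: partition start sector (512B units) and MD chunk size
start_sector=2048        # e.g. from /sys/block/sda/sda1/start
chunk_kib=512            # e.g. from 'mdadm --detail /dev/md0'

start_bytes=$((start_sector * 512))
chunk_bytes=$((chunk_kib * 1024))

# Aligned iff the start offset is a whole multiple of the chunk size
if [ $((start_bytes % chunk_bytes)) -eq 0 ]; then
    echo aligned
else
    echo "misaligned by $((start_bytes % chunk_bytes)) bytes"
fi
```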

It would probably be worth noting the single-/multi-threading issue
there, and that MD submits IO from one CPU, so that MD should always be
below dmcrypt (at least for performance reasons).



Milan noted that one should also tell how things were before these
patches... I'd say it should at least be noted that this changed at one
point... whether the situation before needs to be described in depth, I
have no strong opinion.




There are some of my MD questions that I haven't yet gotten any real
answers to (especially how MD actually reads/writes data/parity blocks,
i.e. how much is always FULLY read/written at minimum)... and I'd have
basically the same question for dm-crypt (and I think it's not yet in
the FAQ).
So IMHO answering the following would be interesting:

Q4: Depending on the chosen cipher/key size/mode, is there a minimum
block size that dmcrypt always fully reads/writes?

I always thought that this is _always_ 512B (even if you have 4KiB
blocks below or so)... which are fully read/written (respectively
decrypted/encrypted), right?

Milan, I saw [4], which AFAIU means that we may sooner or later get
block sizes > 512B.
So the question might arise how larger block sizes would affect the
interaction (especially performance) with the other layers like MD (i.e.
would it be a problem if the dmcrypt block size is larger than the
smallest block size that MD always fully reads/writes... when either is
on top of the other).
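Whatever sector size dmcrypt ends up using, what each layer currently
advertises can at least be inspected from userspace, which should make
mismatches between the layers visible (device names hypothetical):

```shell
# Logical and physical block size as advertised by a layer
blockdev --getss   /dev/md0              # logical block size
blockdev --getpbsz /dev/md0              # physical block size
blockdev --getss   /dev/mapper/cryptmd   # the dmcrypt mapping on top

# The same information via sysfs, e.g. for the first dm device
cat /sys/block/dm-0/queue/logical_block_size
```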



Cheers,
Chris.


[0] http://thread.gmane.org/gmane.linux.raid/43405
[1] http://thread.gmane.org/gmane.linux.raid/43406
[2] http://thread.gmane.org/gmane.linux.raid/43406/focus=43450
[3] http://thread.gmane.org/gmane.linux.raid/43406/focus=43452
[4] http://code.google.com/p/cryptsetup/issues/detail?id=156


* Re: [dm-crypt] some questions and FAQ suggestions
  2013-07-07 20:30 [dm-crypt] some questions and FAQ suggestions Christoph Anton Mitterer
@ 2013-07-07 20:33 ` Christoph Anton Mitterer
  2013-07-08 17:48 ` Arno Wagner
  1 sibling, 0 replies; 5+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-07 20:33 UTC (permalink / raw)
  To: dm-crypt


On Sun, 2013-07-07 at 22:30 +0200, Christoph Anton Mitterer wrote:
> [4] http://code.google.com/p/cryptsetup/issues/detail?id=156
That should be:
http://code.google.com/p/cryptsetup/issues/detail?id=150 I guess ;)


* Re: [dm-crypt] some questions and FAQ suggestions
  2013-07-07 20:30 [dm-crypt] some questions and FAQ suggestions Christoph Anton Mitterer
  2013-07-07 20:33 ` Christoph Anton Mitterer
@ 2013-07-08 17:48 ` Arno Wagner
  2013-07-08 20:41   ` Christoph Anton Mitterer
  1 sibling, 1 reply; 5+ messages in thread
From: Arno Wagner @ 2013-07-08 17:48 UTC (permalink / raw)
  To: dm-crypt

My intuition is that performance questions are too volatile to put into
the FAQ, and too dependent on the actual details of the target system,
i.e. most people will actually have to benchmark on their own set-up to
find out what _they_ get.


<rant>
One thing is for sure: more layers do not make things better.
Hence I do not have any LVM set-up. A second problem with LVM
is that it complicates things vastly in case something goes
wrong and you need to do data recovery. KISS applies.
(Personally, I think LVM is a complicated, intransparent
 monster that adds complexity where it is rarely needed...)

As to MD, I still use superblock format 0.90, because
assembling an array is clearly the controller's job (i.e.
the kernel's), hence I use it with kernel-level autodetection.
Several people have told me that was stupid, but my impression
of the newer superblock formats is that they were made by a
chaotic horde that had no clue what they wanted to achieve,
increase complexity without need and are generally messed
up, and hence violate KISS. With 0.90 at least I can find
the data on disk manually if something breaks, without
understanding some convoluted line of reasoning, so I will
keep using it until it breaks (which may well be never).
And 0.90 superblocks go into a place where they cannot
kill LUKS headers, a definitive advantage.
</rant>
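(For reference: the superblock format of an existing member device, and
hence where it lives on disk, can be checked with mdadm; the device name
below is hypothetical:)

```shell
# Show the metadata version of a member device (0.90 vs 1.0/1.1/1.2)
mdadm --examine /dev/sda1 | grep -i 'version'

# 0.90 and 1.0 superblocks sit near the END of the device,
# 1.1/1.2 near the start - which is what decides whether they
# can collide with a LUKS header at offset 0
```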

Arno


-- 
Arno Wagner,     Dr. sc. techn., Dipl. Inform.,    Email: arno@wagner.name
GnuPG: ID: CB5D9718  FP: 12D6 C03B 1B30 33BB 13CF  B774 E35C 5FA1 CB5D 9718
----
There are two ways of constructing a software design: One way is to make it
so simple that there are obviously no deficiencies, and the other way is to
make it so complicated that there are no obvious deficiencies. The first
method is far more difficult.  --Tony Hoare


* Re: [dm-crypt] some questions and FAQ suggestions
  2013-07-08 17:48 ` Arno Wagner
@ 2013-07-08 20:41   ` Christoph Anton Mitterer
  2013-07-09  3:33     ` Arno Wagner
  0 siblings, 1 reply; 5+ messages in thread
From: Christoph Anton Mitterer @ 2013-07-08 20:41 UTC (permalink / raw)
  To: dm-crypt


On Mon, 2013-07-08 at 19:48 +0200, Arno Wagner wrote:
> My intuition is that performance questions are too volatile 
> to put into the FAQ.
Well, AFAICS that would apply only to Q3 then... and well... I didn't
mean to give any real-world values, but rather to tell people about some
caveats, like that placing MD above dmcrypt is not so good for RAID 4,5,6
(Neil wrote over at linux-raid that the issues described by Milan are
no longer true for levels 1 and 10).
I mean, the situation now is that all these legacy rumors and pieces of
information are floating around in dozens of howtos... so either people
have no idea what they're doing (and thus suffer performance-wise)... or
we should tell them.


> And too dependent on the actual details
> of the target system, i.e. most people will actually have to
> benchmark on their own set-up to find out what _they_ get.
Sure, but that doesn't apply to general principles and the like.



> One thing is for sure, more layers do not make things better.
> Hence I do not have any LVM set-up. A second problem with LVM
> is that it complicates things vastly in case something goes 
> wrong and you need to do data recovery. KISS applies. 
> (Personally, I think LVM is a complicated, intransparent 
>  monster that adds complexity where it is rarely needed...)
Well, the only alternative (for the scenarios in which one wants to use
LVM) would be to create partitions on top of dmcrypt... is that possible
at all?


> As to MD, I still use superblock format 0.90, because
> assembling an array is clearly the controller's job (i.e.
> the kernels), hence I use it with kernel-level autodetection.
Well, you can easily screw your RAID with that... and I think it's
generally deprecated...
I do not even know whether it would e.g. work with GPT... and IIRC it
(obviously) does not work when the RAID modules are not compiled into
the kernel... not to mention other limitations of the v0 superblock.


> With 0.90 at least I can find
> the data on disk manually if something breaks without 
> understanding some convoluted line of reasoning
I guess by "find" you mean "mount"... which is obviously only possible
with RAID1,... even that can be done with the v1 superblocks (out of the
box with 1.0)... and if you just set an offset... also with the others.

And directly mounting a RAID1 component is really a dangerous thing when
one doesn't know what one's doing, or when it happens accidentally
(which is easy with 0.9 and 1.0 superblocks)... your RAID1 can get dirty
without the RAID ever noticing it (thus likely data corruption).
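(For reference, a read-only way to look at a single RAID1 member without
dirtying the array would be something like the following sketch; the
data offset has to be taken from mdadm for v1.1/1.2 superblocks, and the
number below is made up:)

```shell
# Find the data offset of the member (v1.x superblocks)
mdadm --examine /dev/sda1 | grep -i 'data offset'

# Map the data area read-only via a loop device and mount it ro
# (2048 sectors is a hypothetical offset; it is 0 for 0.90/1.0)
losetup --read-only --offset $((2048 * 512)) /dev/loop0 /dev/sda1
mount -o ro /dev/loop0 /mnt
```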



Cheers,
Chris.


* Re: [dm-crypt] some questions and FAQ suggestions
  2013-07-08 20:41   ` Christoph Anton Mitterer
@ 2013-07-09  3:33     ` Arno Wagner
  0 siblings, 0 replies; 5+ messages in thread
From: Arno Wagner @ 2013-07-09  3:33 UTC (permalink / raw)
  To: dm-crypt

On Mon, Jul 08, 2013 at 10:41:27PM +0200, Christoph Anton Mitterer wrote:
> On Mon, 2013-07-08 at 19:48 +0200, Arno Wagner wrote:
> > My intuition is that performance questions are too volatile 
> > to put into the FAQ.
> Well AFAICS that would apply only to Q3 then... and well... I didn't
> mean to give any real world values but rather telling people about some
> caveats like that placing MD above dmcrypt is not so good for RAID 4,5,6

That I have in FAQ item 2.6. It is also not a good idea for RAID 1,
as you have to open multiple containers before your RAID can be
assembled. Conceptually, you should encrypt the filesystem, not the
raw block devices.

> (Neil wrote over at linux-raid, that the issues described by Milan are
> no longer true for levels 1 and 10).
> I mean the situation is now that all these legacy rumors and information
> is floating around in dozens of howtos... so either people have no idea
> what they're doing (and thus suffer performance)... or we should tell
> em.

See FAQ item 2.6.
 
> 
> > And too dependent on the actual details
> > of the target system, i.e. most people will actually have to
> > benchmark on their own set-up to find out what _they_ get.
> Sure but that doesn't apply to general principles or stuff.
> 
> 
> 
> > One thing is for sure, more layers do not make things better.
> > Hence I do not have any LVM set-up. A second problem with LVM
> > is that it complicates things vastly in case something goes 
> > wrong and you need to do data recovery. KISS applies. 
> > (Personally, I think LVM is a complicated, intransparent 
> >  monster that adds complexity where it is rarely needed...)
> Well the only alternative (for the scenarios in which one wants to use
> LVM) would be to create partitions on top of dmcrypt... is that possible
> at all?

It is. But KISS-wise it is even more of a problem.

> > As to MD, I still use superblock format 0.90, because
> > assembling an array is clearly the controller's job (i.e.
> > the kernels), hence I use it with kernel-level autodetection.
> Well you can easily screw your RAID with that... and I think it's
> generally deprecated...

I have not lost a single array in > 10 years with that. I have
no idea how you could "easily screw your RAID with that".
I think the deprecation is just because some people try to
hide the mess they made with the newer superblock placement and
with missing autodetection.

> I do not even know whether it would e.g. work with GPT... and IIRC it
> (obviously) does not work when the RAID modules are not compiled into
> the kernel... not to mention other limitations of the v0 superblock.

For a reliable RAID setup, the code obviously belongs statically
in the kernel. But AFAIK, the autodetection then just happens
on module load.

> > With 0.90 at least I can find
> > the data on disk manually if something breaks without 
> > understanding some convoluted line of reasoning
> I guess by "find" you mean "mount"... which is obviously only possible

No, I very explicitly mean not "mount", I mean "find", i.e.
partition, offset, length. If I mount a RAID1 component,
it will either be ro or manually resynced afterwards.

> with RAID1,... even that can be done with the v1 superblocks (out of the
> box with 1.0)... and if you just set an offset... also with the others.
> 
> And directly mounting a RAID1 is really a dangerous thing, when one
> doesn't know what one's doing or when it happens accidentally (which is
> easy with 0.9 and 1.0 superblocks)... your RAID1 can get dirty without

Never happened to me. If used with autodetection, they are
already in use when it could happen. Another reason why it is
the kernel's job to assemble RAID arrays, not some script's
job later.

> it ever noticing it (thus likely data corruption).

I know, I have been using Linux software RAID for a long time,
and I have done the occasional data recovery from RAID1.

Arno

