linux-fsdevel.vger.kernel.org archive mirror
* Re: ANNOUNCE: mdadm 3.0 - A tool for managing Soft RAID under Linux
       [not found] <18980.48553.328662.80987@notabene.brown>
@ 2009-06-02 20:11 ` Jeff Garzik
  2009-06-02 22:58   ` Dan Williams
  2009-06-03  3:56   ` Neil Brown
  0 siblings, 2 replies; 10+ messages in thread
From: Jeff Garzik @ 2009-06-02 20:11 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-raid, LKML, linux-fsdevel, Arjan van de Ven, Alan Cox

Neil Brown wrote:
> 
> I am pleased to (finally) announce the availability of
>    mdadm version 3.0
> 
> It is available at the usual places:
>    countrycode=xx.
>    http://www.${countrycode}kernel.org/pub/linux/utils/raid/mdadm/
> and via git at
>    git://neil.brown.name/mdadm
>    http://neil.brown.name/git?p=mdadm
> 
> 
> This is a major new version and as such should be treated with some
> caution.  However it has seen substantial testing and is considered
> to be ready for wide use.
> 
> 
> The significant change which justifies the new major version number is
> that mdadm can now handle metadata updates entirely in userspace.
> This allows mdadm to support metadata formats that the kernel knows
> nothing about.
> 
> Currently two such metadata formats are supported:
>   - DDF  - The SNIA standard format
>   - Intel Matrix - The metadata used by recent Intel ICH controllers.

This seems pretty awful from a support standpoint:  dmraid has been the
sole provider of support for vendor-proprietary RAID formats up until this point.

Now Linux users -- and distro installers -- must choose between software 
RAID stack "MD" and software RAID stack "DM".  That choice is made _not_ 
based on features, but on knowing the underlying RAID metadata format 
that is required, and what features you need out of it.

dmraid already supports
	- Intel RAID format, touched by Intel as recently as 2007
	- DDF, the SNIA standard format

This obviously generates some relevant questions...

1) Why?  This obviously duplicates existing effort and code.  The only 
compelling reason I see is RAID5 support, which DM lacks IIRC -- but the 
huge issue of user support and duplicated code remains.

2) Adding container-like handling obviously moves MD in the direction of 
DM.  Does that imply someone will be looking at integrating the two 
codebases, or will this begin to implement features also found in DM's 
codebase?

3) What is the status of distro integration efforts?  I wager the distro 
installer guys will grumble at having to choose among duplicated RAID 
code and formats.

4) What is the plan for handling existing Intel RAID users (e.g. dmraid 
+ Intel RAID)?  Has Intel been contacted about dmraid issues?  What does 
Intel think about this lovely user confusion shoved into their laps?

5) Have the dmraid maintainer and DM folks been queried, given that you 
are duplicating their functionality via Intel and DDF RAID formats? 
What was their response, what issues were raised and resolved?

	Jeff





* Re: ANNOUNCE: mdadm 3.0 - A tool for managing Soft RAID under Linux
  2009-06-02 20:11 ` ANNOUNCE: mdadm 3.0 - A tool for managing Soft RAID under Linux Jeff Garzik
@ 2009-06-02 22:58   ` Dan Williams
  2009-06-03  3:56   ` Neil Brown
  1 sibling, 0 replies; 10+ messages in thread
From: Dan Williams @ 2009-06-02 22:58 UTC (permalink / raw)
  To: Jeff Garzik
  Cc: Neil Brown, linux-raid, LKML, linux-fsdevel, Arjan van de Ven,
	Alan Cox, Ed Ciechanowski, Jacek Danecki

On Tue, Jun 2, 2009 at 1:11 PM, Jeff Garzik <jeff@garzik.org> wrote:
> Neil Brown wrote:
>>
>> I am pleased to (finally) announce the availability of
>>   mdadm version 3.0
>>
>> It is available at the usual places:
>>   countrycode=xx.
>>   http://www.${countrycode}kernel.org/pub/linux/utils/raid/mdadm/
>> and via git at
>>   git://neil.brown.name/mdadm
>>   http://neil.brown.name/git?p=mdadm
>>
>>
>> This is a major new version and as such should be treated with some
>> caution.  However it has seen substantial testing and is considered
>> to be ready for wide use.
>>
>>
>> The significant change which justifies the new major version number is
>> that mdadm can now handle metadata updates entirely in userspace.
>> This allows mdadm to support metadata formats that the kernel knows
>> nothing about.
>>
>> Currently two such metadata formats are supported:
>>  - DDF  - The SNIA standard format
>>  - Intel Matrix - The metadata used by recent Intel ICH controllers.
>
> This seems pretty awful from a support standpoint:  dmraid has been the sole
> provider of support for vendor-proprietary RAID formats up until this point.

This bears similarities to the early difficulties of choosing
between ide and libata.

> Now Linux users -- and distro installers -- must choose between software
> RAID stack "MD" and software RAID stack "DM".  That choice is made _not_
> based on features, but on knowing the underlying RAID metadata format that
> is required, and what features you need out of it.
>
> dmraid already supports
>        - Intel RAID format, touched by Intel as recently as 2007
>        - DDF, the SNIA standard format
>
> This obviously generates some relevant questions...
>
> 1) Why?  This obviously duplicates existing effort and code.  The only
> compelling reason I see is RAID5 support, which DM lacks IIRC -- but the
> huge issue of user support and duplicated code remains.

The MD raid5 code has been upstream since forever and already has
features like online capacity expansion.  There is also
infrastructure, upstream, for online raid level migration.
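
(To give a rough idea - device names purely illustrative - growing an
existing raid5 array by one disk today is just:

    mdadm --add  /dev/md0 /dev/sdd1
    mdadm --grow /dev/md0 --raid-devices=4

with the reshape running while the array stays online.)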

> 2) Adding container-like handling obviously moves MD in the direction of DM.
>  Does that imply someone will be looking at integrating the two codebases,
> or will this begin to implement features also found in DM's codebase?

I made a proof-of-concept investigation of what it would take to
activate all dmraid arrays (any metadata format, any raid level) with
MD.  The result, dm2md [1], did not stimulate much in the way of
conversation.

A pluggable architecture for a write-intent log seems to be the only
piece that does not have a current equivalent in MD.  However, the
'bitmap' infrastructure covers most needs.  I think unifying on a
write-intent logging infrastructure is a good place to start working
together.
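
(For reference, MD's existing write-intent bitmap is already driven
entirely from mdadm - a rough sketch, device names illustrative:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          --bitmap=internal /dev/sda1 /dev/sdb1
    mdadm --examine-bitmap /dev/sda1    # inspect the on-disk bitmap

so ideally a pluggable write-intent log would slot in behind that same
interface.)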

> 3) What is the status of distro integration efforts?  I wager the distro
> installer guys will grumble at having to choose among duplicated RAID code
> and formats.

There has been some grumbling, but the benefits of using one
linux-raid infrastructure for md-metadata and vendor metadata are
appealing.  mdadm-3.0 also makes a serious effort to be more agreeable
with udev and incremental discovery.  So hopefully this makes mdadm
easier to handle in the installer.
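
The idea is that a udev rule can simply hand every newly appearing
RAID member to mdadm and let it work out when an array is complete;
roughly along these lines (rule syntax illustrative):

    # sketch of an incremental-assembly udev rule
    SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
        RUN+="/sbin/mdadm --incremental $env{DEVNAME}"

so arrays come up as their members appear, rather than in one big scan
at boot.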

> 4) What is the plan for handling existing Intel RAID users (e.g. dmraid +
> Intel RAID)?  Has Intel been contacted about dmraid issues?  What does Intel
> think about this lovely user confusion shoved into their laps?

The confusion was the other way round.  We were faced with how to
achieve long-term feature parity of our raid solution across OSes, and
the community presented us with two directions: DM and MD.  The
decision was made to support and maintain dmraid for existing
deployments while basing future development on extending the MD stack,
because it gave some feature advantages out of the gate.  So, there is
support for both and new development will focus on MD.

> 5) Have the dmraid maintainer and DM folks been queried, given that you are
> duplicating their functionality via Intel and DDF RAID formats? What was
> their response, what issues were raised and resolved?

There have been interludes, but not much in the way of discussion.
Hopefully, this will be a starting point.

Thanks,
Dan

[1] http://marc.info/?l=linux-raid&m=123300614013042&w=2


* Re: ANNOUNCE: mdadm 3.0 - A tool for managing Soft RAID under Linux
  2009-06-02 20:11 ` ANNOUNCE: mdadm 3.0 - A tool for managing Soft RAID under Linux Jeff Garzik
  2009-06-02 22:58   ` Dan Williams
@ 2009-06-03  3:56   ` Neil Brown
  2009-06-03 13:01     ` Anton Altaparmakov
                       ` (2 more replies)
  1 sibling, 3 replies; 10+ messages in thread
From: Neil Brown @ 2009-06-03  3:56 UTC (permalink / raw)
  To: Jeff Garzik
  Cc: linux-raid, LKML, linux-fsdevel, dm-devel, Arjan van de Ven, Alan Cox


[dm-devel added for completeness]

Hi Jeff,
 thanks for your thoughts.
 I agree this is a conversation worth having.

On Tuesday June 2, jeff@garzik.org wrote:
> Neil Brown wrote:

> > The significant change which justifies the new major version number is
> > that mdadm can now handle metadata updates entirely in userspace.
> > This allows mdadm to support metadata formats that the kernel knows
> > nothing about.
> > 
> > Currently two such metadata formats are supported:
> >   - DDF  - The SNIA standard format
> >   - Intel Matrix - The metadata used by recent Intel ICH controllers.
> 
> This seems pretty awful from a support standpoint:  dmraid has been the 
> sole provider of support for vendor-proprietary RAID formats up until this point.

And mdadm has been the sole provider of raid5 and raid6 (and,
arguably, reliable raid1 - there was a thread recently about
architectural issues in dm/raid1 that allowed data corruption).
So either dmraid would have to support raid5, or mdadm would have to
support IMSM.  Or both?

> 
> Now Linux users -- and distro installers -- must choose between software 
> RAID stack "MD" and software RAID stack "DM".  That choice is made _not_ 
> based on features, but on knowing the underlying RAID metadata format 
> that is required, and what features you need out of it.

If you replace the word "required" by "supported", then the metadata
format becomes a feature.  And only md provides raid5/raid6.  And only
dm provides LVM.  So I think there are plenty of "feature" issues
between them.
Maybe there are now more use-cases where the choice cannot be made
based on features.  I guess things like familiarity and track-record
come into play there.  But choice is a crucial element of freedom.


> 
> dmraid already supports
> 	- Intel RAID format, touched by Intel as recently as 2007
> 	- DDF, the SNIA standard format
> 
> This obviously generates some relevant questions...
> 
> 1) Why?  This obviously duplicates existing effort and code.  The only 
> compelling reason I see is RAID5 support, which DM lacks IIRC -- but the 
> huge issue of user support and duplicated code remains.

Yes, RAID5 (and RAID6) are big parts of the reason.  RAID1 is not an
immaterial part.
But my initial motivation was that this was the direction I wanted the
md code base to move in.  It was previously locked to two internal
metadata formats.  I wanted to move the metadata support into
userspace where I felt it belonged, and DDF was a good vehicle to
drive that.
Intel then approached me about adding IMSM support and I was happy to
co-operate.
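
To make that concrete: with mdadm 3.0 a vendor-format array is created
as a "container" plus member arrays, all from userspace - roughly, and
with illustrative device names:

    mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/sda /dev/sdb
    mdadm --create /dev/md/vol0  --level=1 --raid-devices=2 /dev/md/imsm0

The kernel never has to interpret the IMSM (or DDF) metadata itself.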

> 
> 2) Adding container-like handling obviously moves MD in the direction of 
> DM.  Does that imply someone will be looking at integrating the two 
> codebases, or will this begin to implement features also found in DM's 
> codebase?

I wonder why you think "container-like" handling moves in the
direction of DM.  I see nothing in DM that explicitly relates to
this.  There was something in MD (internal metadata support) which
explicitly worked against it.  I have since made that less of an issue.
All the knowledge of containers  is really in lvm2/dmraid and mdadm - the
user-space tools (and I do think it is important to be aware of the
distinction between the kernel side and the user side of each
system). 

So this is really a case of md "seeing" the wisdom in that aspect of
the design of "dm" and taking a similar approach - though with
significantly different details.

As for integrating the two code bases.... people have been suggesting
that for years, but I suspect few of them have looked deeply at the
practicalities.  Apparently it was suggested at the recent "storage
summit".  However as the primary md and dm developers were absent, I
have doubts about how truly well-informed that conversation could have
been.

I do have my own sketchy ideas about how unification could be
achieved.  It would involve creating a third "thing" and then
migrating md and dm (and loop and nbd and drbd and ...) to mesh with
that new model.
But it is hard to make this a priority where there are more
practically useful things to be done.

It is worth reflecting again on the distinction between lvm2 or dmraid
and dm, and between mdadm and md.
lvm2 could conceivably use md.  mdadm could conceivably use dm.
I have certainly considered teaching mdadm to work with dm-multipath
so that I could justifiably remove md/multipath without the risk of
breaking someone's installation.  But it isn't much of a priority.
The dmraid developers might think that utilising md to provide some
raid levels might be a good thing (now that I have shown it to be
possible).  I would be happy to support that to the extent of
explaining how it can work and even refining interfaces if that proved
to be necessary.  Who knows - that could eventually lead to me being
able to end-of-life mdadm and leave everyone using dmraid :-)

Will md implement features found in dm's code base?
For things like LVM, Multipath, crypt and snapshot : no, definitely not.
For things like suspend/resume of incoming IO (so a device can be
reconfigured), maybe.  I recently added that so that I could effect 
raid5->raid6 conversions.  I would much rather this was implemented in
the block layer than in md or dm.  I added it to md because that was
the fastest path, and it allowed me to explore and come to understand
the issues.  I tried to arrange the implementation so that it could be
moved up to the block layer without user-space noticing.  Hopefully I
will get around to attempting that before I forget all that I learnt.
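
(For the record, the conversion itself is driven from mdadm; roughly -
exact options depend on the mdadm version and the array:

    mdadm --grow /dev/md0 --level=6 --raid-devices=5 \
          --backup-file=/root/md0-reshape.backup

with the suspend/resume of incoming IO happening underneath while the
critical section is reshaped.)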


> 
> 3) What is the status of distro integration efforts?  I wager the distro 
> installer guys will grumble at having to choose among duplicated RAID 
> code and formats.

Some distros are shipping mdadm-3.0-pre releases, but I don't think
any have seriously tried to integrate the DDF or IMSM support with
installers or the boot process yet.
Intel have engineers working to make sure such integration is
possible, reliable, and relatively simple.

Installers already understand lvm and mdadm for different use cases.
Adding some new use cases that overlap should not be a big headache.
They also already support ext3-vs-xfs, gnome-vs-kde etc.

There is an issue of "if the drives appear to have DDF metadata, which
tool shall I use".  I am not well placed to give an objective answer
to that.
mdadm can easily be told to ignore such arrays unless explicitly
requested to deal with them.  A line like
   AUTO -ddf -imsm
in mdadm.conf would ensure that auto-assembly and incremental assembly
will ignore both DDF and IMSM.
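
A slightly fuller sketch of such a config fragment (illustrative only):

   # /etc/mdadm.conf
   # auto-assemble native-metadata arrays, but leave DDF and IMSM
   # containers for another tool (e.g. dmraid) to claim
   AUTO +1.x -ddf -imsm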

> 
> 4) What is the plan for handling existing Intel RAID users (e.g. dmraid 
> + Intel RAID)?  Has Intel been contacted about dmraid issues?  What does 
> Intel think about this lovely user confusion shoved into their laps?

The above mentioned AUTO line can disable mdadm auto-management of
such arrays.  Maybe dmraid auto-management can be equally disabled.
Distros might be well-advised to make the choice a configurable
option.

I cannot speak for Intel, except to acknowledge that their engineers
have done most of the work to support IMSM in mdadm.  I just provided
the infrastructure and general consulting.

> 
> 5) Have the dmraid maintainer and DM folks been queried, given that you 
> are duplicating their functionality via Intel and DDF RAID formats? 
> What was their response, what issues were raised and resolved?

I haven't spoken to them, no (except for a couple of barely-related
chats with Alasdair).
By and large, they live in their little walled garden, and I/we live
in ours.

NeilBrown


* Re: ANNOUNCE: mdadm 3.0 - A tool for managing Soft RAID under Linux
  2009-06-03  3:56   ` Neil Brown
@ 2009-06-03 13:01     ` Anton Altaparmakov
  2009-06-03 14:42     ` Heinz Mauelshagen
  2009-06-04 15:33     ` Larry Dickson
  2 siblings, 0 replies; 10+ messages in thread
From: Anton Altaparmakov @ 2009-06-03 13:01 UTC (permalink / raw)
  To: Neil Brown
  Cc: Jeff Garzik, linux-raid, LKML, linux-fsdevel, dm-devel,
	Arjan van de Ven, Alan Cox

Hi Neil,

Is there any documentation for the interface between mdadm and a
metadata format "module" (if I may call it that)?

What I mean is: where would one start if one wanted to add a new  
metadata format to mdadm?

Or is the only documentation the source code to mdadm?

Thanks a lot in advance!

Best regards,

	Anton
-- 
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer, http://www.linux-ntfs.org/


* Re: Re: ANNOUNCE: mdadm 3.0 - A tool for managing Soft RAID under Linux
  2009-06-03  3:56   ` Neil Brown
  2009-06-03 13:01     ` Anton Altaparmakov
@ 2009-06-03 14:42     ` Heinz Mauelshagen
  2009-06-03 17:26       ` [dm-devel] " Dan Williams
  2009-06-08 23:32       ` [dm-devel] " Neil Brown
  2009-06-04 15:33     ` Larry Dickson
  2 siblings, 2 replies; 10+ messages in thread
From: Heinz Mauelshagen @ 2009-06-03 14:42 UTC (permalink / raw)
  To: device-mapper development
  Cc: Jeff Garzik, LKML, linux-raid, linux-fsdevel, Alan Cox, Arjan van de Ven

On Wed, 2009-06-03 at 13:56 +1000, Neil Brown wrote: 
> [dm-devel added for completeness]
> 
> Hi Jeff,
>  thanks for your thoughts.
>  I agree this is a conversation worth having.
> 
> On Tuesday June 2, jeff@garzik.org wrote:
> > Neil Brown wrote:
> 
> > > The significant change which justifies the new major version number is
> > > that mdadm can now handle metadata updates entirely in userspace.
> > > This allows mdadm to support metadata formats that the kernel knows
> > > nothing about.
> > > 
> > > Currently two such metadata formats are supported:
> > >   - DDF  - The SNIA standard format
> > >   - Intel Matrix - The metadata used by recent Intel ICH controllers.
> > 
> > This seems pretty awful from a support standpoint:  dmraid has been the 
> > sole provider of support for vendor-proprietary RAID formats up until this point.
> 
> And mdadm has been the sole provider of raid5 and raid6 (and,
> arguably, reliable raid1 - there was a thread recently about
> architectural issues in dm/raid1 that allowed data corruption).
> So either dmraid would have to support raid5, or mdadm would have to
> support IMSM.  or both?

Hi,

the dm-raid45 target patch has been adopted by various distros for that
purpose for quite some time now. It provides RAID4 and RAID5 mappings
but is not yet upstream.

Support for IMSM 9.0 is being integrated.

> 
> > 
> > Now Linux users -- and distro installers -- must choose between software 
> > RAID stack "MD" and software RAID stack "DM".  That choice is made _not_ 
> > based on features, but on knowing the underlying RAID metadata format 
> > that is required, and what features you need out of it.
> 
> If you replace the word "required" by "supported", then the metadata
> format becomes a feature.  And only md provides raid5/raid6.  And only
> dm provides LVM.  So I think there are plenty of "feature" issues
> between them.
> Maybe there are now more use-cases where the choice cannot be made
> based on features.  I guess things like familiarity and track-record
> come in to play there.  But choice is a crucial element of freedom.
> 
> 
> > 
> > dmraid already supports
> > 	- Intel RAID format, touched by Intel as recently as 2007

As mentioned, IMSM 9.0 is being supported via an Intel contribution.

> > 	- DDF, the SNIA standard format
> > 
> > This obviously generates some relevant questions...
> > 
> > 1) Why?  This obviously duplicates existing effort and code.  The only 
> > compelling reason I see is RAID5 support, which DM lacks IIRC -- but the 
> > huge issue of user support and duplicated code remains.
> 
> Yes, RAID5 (and RAID6) are big parts of the reason.  RAID1 is not an
> immaterial part.
> But my initial motivation was that this was the direction I wanted the
> md code base to move in.  It was previously locked to two internal
> metadata formats.  I wanted to move the metadata support into
> userspace where I felt it belonged, and DDF was a good vehicle to
> drive that.
> Intel then approached me about adding IMSM support and I was happy to
> co-operate.

They likewise approached us about adding IMSM 9.0 support and other features to dmraid.

> 
> > 
> > 2) Adding container-like handling obviously moves MD in the direction of 
> > DM.  Does that imply someone will be looking at integrating the two 
> > codebases, or will this begin to implement features also found in DM's 
> > codebase?
> 
> I wonder why you think "container-like" handling moves in the
> direction of DM.  I see nothing in the DM that explicitly relates to
> this.

DM was initially designed to be container-style in many respects,
which included being metadata-agnostic so that any metadata format
can be handled in userspace.

> There was something in MD (internal metadata support) which
> explicitly worked against it.  I have since made that less of an issue.
> All the knowledge of containers  is really in lvm2/dmraid and mdadm - the
> user-space tools (and I do think it is important to be aware of the
> distinction between the kernel side and the user side of each
> system). 
> 
> So this is really a case of md "seeing" the wisdom in that aspect of
> the design of "dm" and taking a similar approach - though with
> significantly different details.

Yes, you have been working dm-type features in for a while now :-)

> 
> As for integrating the two code bases.... people have been suggesting
> that for years, but I suspect few of them have looked deeply at the
> practicalities.  Apparently it was suggested at the recent "storage
> summit".  However as the primary md and dm developers were absent, I
> have doubts about how truly well-informed that conversation could have
> been.

Agreed, we'd need face-time to talk the issues through in order to come up
with any such plan for md+dm integration.

> 
> I do have my own sketchy ideas about how unification could be
> achieved.  It would involve creating a third "thing" and then
> migrating md and dm (and loop and nbd and drbd and ...) to mesh with
> that new model.
> But it is hard to make this a priority where there are more
> practically useful things to be done.
> 
> It is worth reflecting again on the distinction between lvm2 or dmraid
> and dm, and between mdadm and md.
> lvm2 could conceivably use md.

With the exception of clustered storage: there is no clustered RAID1
in MD, for example.

> mdadm could conceivably use dm.
> I have certainly considered teaching mdadm to work with dm-multipath
> so that I could justifiably remove md/multipath without the risk of
> breaking someone's installation.  But it isn't much of a priority.
> The dmraid developers might think that utilising md to provide some
> raid levels might be a good thing (now that I have shown it to be
> possible).  I would be happy to support that to the extent of
> explaining how it can work and even refining interfaces if that proved
> to be necessary.  Who knows - that could eventually lead to me being
> able to end-of-life mdadm and leave everyone using dmraid :-)

Your ':-)' is adequate, because dmraid only recently got features added
to create/remove RAID sets and to handle spares with IMSM.
Other metadata format handlers in dmraid still have to be enhanced to
support that functionality.

> 
> Will md implement features found in dm's code base?
> For things like LVM, Multipath, crypt and snapshot : no, definitely not.
> For things like suspend/resume of incoming IO (so a device can be
> reconfigured), maybe.  I recently added that so that I could effect 
> raid5->raid6 conversions.  I would much rather this was implemented in
> the block layer than in md or dm.  I added it to md because that was
> the fastest path, and it allowed me to explore and come to understand
> the issues.  I tried to arrange the implementation so that it could be
> moved up to the block layer without user-space noticing.  Hopefully I
> will get around to attempting that before I forget all that I learnt.
> 
> 
> > 
> > 3) What is the status of distro integration efforts?  I wager the distro 
> > installer guys will grumble at having to choose among duplicated RAID 
> > code and formats.
> 
> Some distros are shipping mdadm-3.0-pre releases, but I don't think
> any have seriously tried to integrate the DDF or IMSM support with
> installers or the boot process yet.
> Intel have engineers working to make sure such integration is
> possible, reliable, and relatively simple.
> 
> Installers already understand lvm and mdadm for different use cases.

And dmraid.

> Adding some new use cases that overlap should not be a big headache.
> They also already support ext3-vs-xfs, gnome-vs-kde etc.
> 
> There is an issue of "if the drives appear to have DDF metadata, which
> tool shall I use".  I am not well placed to give an objective answer
> to that.
> mdadm can easily be told to ignore such arrays unless explicitly
> requested to deal with them.  A line like
>    AUTO -ddf -imsm
> in mdadm.conf would ensure that auto-assembly and incremental assembly
> will ignore both DDF and IMSM.
> 
> > 
> > 4) What is the plan for handling existing Intel RAID users (e.g. dmraid 
> > + Intel RAID)?  Has Intel been contacted about dmraid issues?  What does 
> > Intel think about this lovely user confusion shoved into their laps?
> 
> The above mentioned AUTO line can disable mdadm auto-management of
> such arrays.  Maybe dmraid auto-management can be equally disabled.
> 

dmraid has always supported that, but takes a different approach:
the metadata format to act on is selected with the -f option, so any
RAID sets with other metadata are ignored.
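
For example (format names as dmraid lists them; illustrative):

    dmraid -ay -f isw     # activate only Intel Matrix (isw) sets
    dmraid -ay -f ddf1    # or only DDF sets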

> Distros might be well-advise to make the choice a configurable
> option.
> 
> I cannot speak for Intel, except to acknowledge that their engineers
> have done most of the work to support IMSM is mdadm.  I just provided
> the infrastructure and general consulting.
> 
> > 
> > 5) Have the dmraid maintainer and DM folks been queried, given that you 
> > are duplicating their functionality via Intel and DDF RAID formats? 
> > What was their response, what issues were raised and resolved?
> 
> I haven't spoken to them, no (except for a couple of barely-related
> chats with Alasdair).
> By and large, they live in their little walled garden, and I/we live
> in ours.

Maybe we are about to change that? ;-)

Heinz

> 
> NeilBrown
> 
> --
> dm-devel mailing list
> dm-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel


* Re: [dm-devel] Re: ANNOUNCE: mdadm 3.0 - A tool for managing Soft RAID under Linux
  2009-06-03 14:42     ` Heinz Mauelshagen
@ 2009-06-03 17:26       ` Dan Williams
  2009-06-04 16:38         ` Heinz Mauelshagen
  2009-06-08 23:32       ` [dm-devel] " Neil Brown
  1 sibling, 1 reply; 10+ messages in thread
From: Dan Williams @ 2009-06-03 17:26 UTC (permalink / raw)
  To: heinzm, device-mapper development
  Cc: Jeff Garzik, LKML, linux-raid, linux-fsdevel, Alan Cox,
	Arjan van de Ven, Ed Ciechanowski, Jacek Danecki

On Wed, Jun 3, 2009 at 7:42 AM, Heinz Mauelshagen <heinzm@redhat.com> wrote:
> On Wed, 2009-06-03 at 13:56 +1000, Neil Brown wrote:
>> As for integrating the two code bases.... people have been suggesting
>> that for years, but I suspect few of them have looked deeply at the
>> practicalities.  Apparently it was suggested at the recent "storage
>> summit".  However as the primary md and dm developers were absent, I
>> have doubts about how truly well-informed that conversation could have
>> been.
>
> Agreed, we'd need face-time to talk the issues through in order to come up
> with any such plan for md+dm integration.
>

What are your general impressions of dmraid using md kernel
infrastructure for raid level support?

Thanks,
Dan


* Re: Re: ANNOUNCE: mdadm 3.0 - A tool for managing Soft RAID under Linux
  2009-06-03  3:56   ` Neil Brown
  2009-06-03 13:01     ` Anton Altaparmakov
  2009-06-03 14:42     ` Heinz Mauelshagen
@ 2009-06-04 15:33     ` Larry Dickson
  2 siblings, 0 replies; 10+ messages in thread
From: Larry Dickson @ 2009-06-04 15:33 UTC (permalink / raw)
  To: device-mapper development
  Cc: Jeff Garzik, LKML, linux-raid, linux-fsdevel, Alan Cox, Arjan van de Ven



Hi all,

As a user of both dm (in lvm) and md, I am not reassured by the "turf war"
flavor coming from the dm side. The idea that all functions should be
glooped together in one monster program, whether dm or the Microsoft
operating system, is not an automatic + in my opinion. The massive patch
activity that I see in dm-devel could be an indication of function
overcentralization leading to design risk, just as in Microsoft development.

A minor technical note follows.


> For things like suspend/resume of incoming IO (so a device can be
> reconfigured), maybe.  I recently added that so that I could effect
> raid5->raid6 conversions.


Suspend is not necessary, only barriers, as long as you define a hybrid
raid5/raid6 array via a moving watermark. Only those IOs that hit in the
neighborhood of the watermark are affected.

Larry Dickson
Cutting Edge Networked Storage




> NeilBrown
>
> --
> dm-devel mailing list
> dm-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel
>





* Re: Re: ANNOUNCE: mdadm 3.0 - A tool for managing Soft RAID under Linux
  2009-06-03 17:26       ` [dm-devel] " Dan Williams
@ 2009-06-04 16:38         ` Heinz Mauelshagen
  0 siblings, 0 replies; 10+ messages in thread
From: Heinz Mauelshagen @ 2009-06-04 16:38 UTC (permalink / raw)
  To: Dan Williams
  Cc: Jeff Garzik, Jacek Danecki, LKML, Ed Ciechanowski, linux-raid,
	device-mapper development, linux-fsdevel, Alan Cox,
	Arjan van de Ven

On Wed, 2009-06-03 at 10:26 -0700, Dan Williams wrote:
> On Wed, Jun 3, 2009 at 7:42 AM, Heinz Mauelshagen <heinzm@redhat.com> wrote:
> > On Wed, 2009-06-03 at 13:56 +1000, Neil Brown wrote:
> >> As for integrating the two code bases.... people have been suggesting
> >> that for years, but I suspect few of them have looked deeply at the
> >> practicalities.  Apparently it was suggested at the recent "storage
> >> summit".  However as the primary md and dm developers were absent, I
> >> have doubts about how truly well-informed that conversation could have
> >> been.
> >
> > Agreed, we'd need face-time to talk the issues through in order to come up
> > with any such plan for md+dm integration.
> >
> 
> What are your general impressions of dmraid using md kernel
> infrastructure for raid level support?

At the time the dmraid project started, we already had libdevmapper,
which was suitable for handling in-kernel device manipulation and had no
adequate equivalent on the MD side, so it was the appropriate interface to use.

Cheers,
Heinz

> 
> Thanks,
> Dan


* Re: [dm-devel] Re: ANNOUNCE: mdadm 3.0 - A tool for managing Soft RAID under Linux
  2009-06-03 14:42     ` Heinz Mauelshagen
  2009-06-03 17:26       ` [dm-devel] " Dan Williams
@ 2009-06-08 23:32       ` Neil Brown
  2009-06-09 16:29         ` Heinz Mauelshagen
  1 sibling, 1 reply; 10+ messages in thread
From: Neil Brown @ 2009-06-08 23:32 UTC (permalink / raw)
  To: heinzm, device-mapper development
  Cc: Jeff Garzik, LKML, linux-raid, linux-fsdevel, Alan Cox, Arjan van de Ven

On Wednesday June 3, heinzm@redhat.com wrote:
> > 
> > I haven't spoken to them, no (except for a couple of barely-related
> > chats with Alasdair).
> > By and large, they live in their little walled garden, and I/we live
> > in ours.
> 
> Maybe we are about to change that? ;-)

Maybe ... what should we talk about?

Two areas where I think we might be able to have productive
discussion:

 1/ Making md personalities available as dm targets.
    In one sense this is trivial as any block device can be a DM
    target, and any md personality can be a block device.
    However it might be more attractive if the md personality
    responded to dm ioctls.
    Considering specifically raid5, some aspects of plugging
    md/raid5 underneath dm would be trivial - e.g. assembling the
    array at the start.
    However others are not so straight forward.
    In particular, when a drive fails in a raid5, you need to update
    the metadata before allowing any writes which depend on that drive
    to complete.  Given that metadata is managed in user-space, this
    means signalling user-space and waiting for a response.
    md does this via a file in sysfs.  I cannot see any similar
    mechanism in dm, but I haven't looked very hard.

    Would it be useful to pursue this do you think?


 2/ It might be useful to have a common view of how virtual devices in
    general should be managed in Linux.  Then we could independently
    migrate md and dm towards this goal.

    I imagine a block-layer level function which allows a blank
    virtual device to be created, with an arbitrary major/minor
    allocated.
    e.g.
         echo foo > /sys/block/.new
    causes
         /sys/devices/virtual/block/foo/
    to be created.
    Then a similar mechanism associates that with a particular driver.
    That causes more attributes to appear in  ../block/foo/ which
    can be used to flesh out the details of the device.

    There would be library code that a driver could use to:
      - accept subordinate devices
      - manage the state of those devices
      - maintain a write-intent bitmap
    etc.

    There would also need to be a block-layer function to 
    suspend/resume or similar so that a block device can be changed
    underneath a filesystem.

    We currently have three structures for a block device:
      struct block_device -> struct gendisk -> struct request_queue

    I imagine allowing either the "struct gendisk" or the "struct
    request_queue" to be swapped between two "struct block_device"s.
    I'm not sure which, and the rest of the details are even more
    fuzzy.

    That sort of infrastructure would allow interesting migrations
    without being limited to "just with dm" or "just within md".

    Thoughts?

NeilBrown


* Re: Re: ANNOUNCE: mdadm 3.0 - A tool for managing Soft RAID under Linux
  2009-06-08 23:32       ` [dm-devel] " Neil Brown
@ 2009-06-09 16:29         ` Heinz Mauelshagen
  0 siblings, 0 replies; 10+ messages in thread
From: Heinz Mauelshagen @ 2009-06-09 16:29 UTC (permalink / raw)
  To: device-mapper development
  Cc: Jeff Garzik, LKML, linux-raid, linux-fsdevel, Alan Cox, Arjan van de Ven

On Tue, 2009-06-09 at 09:32 +1000, Neil Brown wrote:
> On Wednesday June 3, heinzm@redhat.com wrote:
> > > 
> > > I haven't spoken to them, no (except for a couple of barely-related
> > > chats with Alasdair).
> > > By and large, they live in their little walled garden, and I/we live
> > > in ours.
> > 
> > Maybe we are about to change that? ;-)
> 
> Maybe ... what should we talk about?
> 
> Two areas where I think we might be able to have productive
> discussion:
> 
>  1/ Making md personalities available as dm targets.
>     In one sense this is trivial as any block device can be a DM
>     target, and any md personality can be a block device.

Of course one could stack a linear target on any MD personality and live
with the minor overhead in the io path. The overhead to handle such
stacking on the tool side of things is not negligible though, hence it's
a better option to have native dm targets for these mappings.
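
(Roughly, and only to illustrate the point - wrapping an existing md
device in a trivial dm-linear mapping is a one-liner:

    dmsetup create md0_dm --table "0 $(blockdev --getsz /dev/md0) linear /dev/md0 0"

but every create/resize/teardown then has to be mirrored on both sides,
which is the tool-side overhead referred to above.)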

>     However it might be more attractive if the md personality
>     responded to dm ioctls.

Indeed, we need the full interface to be covered in order to stay
homogeneous.

>     Considering specifically raid5, some aspects of plugging
>     md/raid5 underneath dm would be trivial - e.g. assembling the
>     array at the start.
>     However others are not so straight forward.
>     In particular, when a drive fails in a raid5, you need to update
>     the metadata before allowing any writes which depend on that drive
>     to complete.  Given that metadata is managed in user-space, this
>     means signalling user-space and waiting for a response.
>     md does this via a file in sysfs.  I cannot see any similar
>     mechanism in dm, but I haven't looked very hard.

We use events passed to a userspace daemon via an ioctl interface and our
suspend/resume mechanism to ensure such metadata updates.
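
In dm terms the sequence is essentially (sketched, illustrative device
name):

    dmsetup suspend vol     # quiesce the device and queue incoming io
    # ... the userspace daemon updates the on-disk metadata here,
    #     possibly loading a new table without the failed leg ...
    dmsetup resume vol      # let the queued io proceed

triggered by the event the kernel delivers to the waiting daemon.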

> 
>     Would it be useful to pursue this do you think?

I looked at the MD personality back when I was searching for an
option to support RAID5 in dm but, as you similarly noted above,
didn't find a simple way to wrap it into a dm target, so the answer *was*
no. That's why I picked up some code (e.g. the RAID addressing) and
implemented a target of my own.

> 
> 
>  2/ It might be useful to have a common view how virtual devices in
>     general should be managed in Linux.  Then we could independently
>     migrated md and dm towards this goal.
> 
>     I imagine a block-layer level function which allows a blank
>     virtual device to be created, with an arbitrary major/minor
>     allocated.
>     e.g.
>          echo foo > /sys/block/.new
>     causes
>          /sys/devices/virtual/block/foo/
>     to be created.
>     Then a similar mechanism associates that with a particular driver.
>     That causes more attributes to appear in  ../block/foo/ which
>     can be used to flesh out the details of the device.
> 
>     There would be library code that a driver could use to:
>       - accept subordinate devices
>       - manage the state of those devices
>       - maintain a write-intent bitmap
>     etc.

Yes, and such a library can be filled with ported dm/md and other code.

> 
>     There would also need to be a block-layer function to 
>     suspend/resume or similar so that a block device can be changed
>     underneath a filesystem.

Yes, consolidating such functionality in a central place is the proper
design, but we still need an interface into any block driver that is
initiating io on its own behalf (e.g. mirror resynchronization) in order
to ensure that such io gets suspended/resumed consistently.

> 
>     We currently have three structures for a block device:
>       struct block_device -> struct gendisk -> struct request_queue
> 
>     I imagine allowing either the "struct gendisk" or the "struct
>     request_queue" to be swapped between two "struct block_device"s.
>     I'm not sure which, and the rest of the details are even more
>     fuzzy.
> 
>     That sort of infrastructure would allow interesting migrations
>     without being limited to "just with dm" or "just within md".

Or just with other virtual drivers such as drbd.

Hard to imagine issues at the detailed spec level before they are
fleshed out but this sounds like a good idea to start with.

Heinz

> 
>     Thoughts?
> 
> NeilBrown
> 
> --
> dm-devel mailing list
> dm-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel

