* RAID10 Layouts
@ 2009-08-21 13:27 Info
  2009-08-21 16:43 ` Goswin von Brederlow
  0 siblings, 1 reply; 22+ messages in thread
From: Info @ 2009-08-21 13:27 UTC (permalink / raw)
  To: linux-raid


Hello list,

Researching RAID10, trying to learn the most advanced system for a 2 SATA drive system.  Have two WD 2TB drives for a media computer, and the most important requirement is data redundancy.  I realize that RAID is no substitute for backups, but this is a backup for the backups and the purpose here is data safety.  The secondary goal is speed enhancement.  It appears that RAID10 can give both.

First question is on layout of RAID10.  In studying the man pages it seems that Far mode gives 95% of the speed of RAID0, but with increased seek for writes.  And that Offset retains much of this benefit while increasing efficiency of writes.  What should be the preference, Far or Offset?  Are they equally robust?

How safe is the data in Far or Offset mode?  If a drive fails, will a complete, usable, bootable system exist on the other drive?  (These two are the only drives in the system, which is Debian Testing, Debian kernel 2.6.30-5)  Need I make any special Grub settings?

What about this Intel firmware 'RAID'?  Would this assist in any way?  How does it relate (if it does) to the linux md system?  Should I set in BIOS to RAID, or leave it at AHCI?

How does this look:
# mdadm --create /dev/md0 --level=raid10 --layout=o2 --metadata=1.2 --chunk=64 --raid-disks=2 missing /dev/sdb1



* Re: RAID10 Layouts
  2009-08-21 13:27 RAID10 Layouts Info
@ 2009-08-21 16:43 ` Goswin von Brederlow
  2009-08-21 18:02   ` Info
  2009-08-21 20:42   ` Keld Jørn Simonsen
  0 siblings, 2 replies; 22+ messages in thread
From: Goswin von Brederlow @ 2009-08-21 16:43 UTC (permalink / raw)
  To: Info; +Cc: linux-raid

Info@quantum-sci.net writes:

> Hello list,
>
> Researching RAID10, trying to learn the most advanced system for a 2
> SATA drive system.  Have two WD 2TB drives for a media computer, and
> the most important requirement is data redundancy.  I realize that
> RAID is no substitute for backups, but this is a backup for the
> backups and the purpose here is data safety.  The secondary goal is
> speed enhancement.  It appears that RAID10 can give both.
>
> First question is on layout of RAID10.  In studying the man pages it
> seems that Far mode gives 95% of the speed of RAID0, but with
> increased seek for writes.  And that Offset retains much of this
> benefit while increasing efficiency of writes.  What should be the
> preference, Far or Offset?  Are they equally robust?

All raid10 layouts offer the same robustness. Which layout is best for
you really depends on your use case. Probably the biggest factor will
be the average file size. My experience is that with large files the
far copies do not cost noticeable write speed while being twice as
fast reading as raid1.

> How safe is the data in Far or Offset mode?  If a drive fails, will
> a complete, usable, bootable system exist on the other drive?
> (These two are the only drives in the system, which is Debian
> Testing, Debian kernel 2.6.30-5) Need I make any special Grub
> settings?

I don't think lilo or grub1 can boot from raid10 at all with offset or
far copies. With near copies you are identical to a simple raid1 so
that would boot.

So to be bootable even with a failed drive you should partition the
disk. Create a small raid1 for the system and a large raid10 for the
data.
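
A sketch of what that could look like (untested; the 0.90 metadata on
the raid1 keeps the superblock at the end, where old bootloaders
expect a plain filesystem):

# mdadm --create /dev/md0 --level=1 --metadata=0.90 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=2 /dev/sda3 /dev/sdb3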

> What about this Intel firmware 'RAID'?  Would this assist in any
> way?  How does it relate (if it does) to the linux md system?
> Should I set in BIOS to RAID, or leave it at AHCI?

I would stay away from any half baked bios stuff. It will be no better
than linux software raid but will tie you to the specific bios. If
your mainboard fails and the next one has a different bios you can't
boot your disks.

> How does this look:
> # mdadm --create /dev/md0 --level=raid10 --layout=o2 --metadata=1.2 --chunk=64 --raid-disks=2 missing /dev/sdb1

On partitions it is safe to use the 1.1 format. Saves you 4k. Jupey.

You should play with the chunksize though and try with and without
bitmap and different bitmap sizes. Bitmap costs some write performance
but it greatly speeds up resyncs after a crash or temporary drive
failure.
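
The bitmap can be added and removed later on a live array, so that
part is cheap to experiment with (a sketch; needs a reasonably recent
mdadm):

# mdadm --grow /dev/md0 --bitmap=internal
# mdadm --grow /dev/md0 --bitmap=none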

MfG
        Goswin


* Re: RAID10 Layouts
  2009-08-21 16:43 ` Goswin von Brederlow
@ 2009-08-21 18:02   ` Info
  2009-08-21 19:20     ` Help Info
  2009-08-22  6:31     ` RAID10 Layouts Goswin von Brederlow
  2009-08-21 20:42   ` Keld Jørn Simonsen
  1 sibling, 2 replies; 22+ messages in thread
From: Info @ 2009-08-21 18:02 UTC (permalink / raw)
  To: linux-raid


Thank you Goswin.


On Friday 21 August 2009 09:43:28 Goswin von Brederlow wrote:
> I don't think lilo or grub1 can boot from raid10 at all with offset or
> far copies. With near copies you are identical to a simple raid1 so
> that would boot.
> 
> So to be bootable even with a failed drive you should partition the
> disk. Create a small raid1 for the system and a large raid10 for the
> data.

Uh oh, already set all 3 parts for RAID10, but haven't switched over yet.

As it happens my / is on sda1 and /home is sda3  (swap is sda2), so it'll be pretty easy to just make / RAID1.  Do I need to make swap RAID1 and not 10?

 
> I would stay away from any half baked bios stuff. It will be no better
> than linux software raid but will tie you to the specific bios. If
> your mainboard fails and the next one has a different bios you can't
> boot your disks.

Thank you.

 
> > How does this look:
> > # mdadm --create /dev/md0 --level=raid10 --layout=o2 --metadata=1.2 --chunk=64 --raid-disks=2 missing /dev/sdb1
> 
> On partitions it is safe to use the 1.1 format. Saves you 4k. Jupey.

4k of what?  One time only, or on every cluster?  Any additional benefit to 1.2?

My system records mpeg4 from DishNetwork satellite (R5000-HD), so it handles mostly files over 1GB.  However its most rigorous duty is scanning those videos for commercials, and marking locations in a mysql database.  The disk light is constantly on and system response is sluggish when this is being done.  I don't understand how an advanced drive like this can be so bogged down, but I hope RAID10 will speed things up.  Maybe there is a way to increase disk cache size?


> You should play with the chunksize though and try with and without
> bitmap and different bitmap sizes. Bitmap costs some write performance
> but it greatly speeds up resyncs after a crash or temporary drive
> failure.

My partitions and data are so enormous that I can't really do any experimenting.  Definitely will use the write-intent bitmap.


* Re: Help
  2009-08-21 18:02   ` Info
@ 2009-08-21 19:20     ` Info
  2009-08-21 19:38       ` Help John Robinson
  2009-08-22  6:14       ` Help Info
  2009-08-22  6:31     ` RAID10 Layouts Goswin von Brederlow
  1 sibling, 2 replies; 22+ messages in thread
From: Info @ 2009-08-21 19:20 UTC (permalink / raw)
  To: linux-raid


My God, the command is not working.  I need to remove sdb1 from md0 so I can change it from a RAID10 to RAID1, and it simply ignores my command:
# mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md2 : active raid10 sdb3[1]
      1868560128 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
      bitmap: 94/446 pages [376KB], 2048KB chunk

md1 : active raid10 sdb2[1]
      6297344 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
      bitmap: 0/25 pages [0KB], 128KB chunk

md0 : active raid10 sdb1[1]
      78654080 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
      bitmap: 76/151 pages [304KB], 256KB chunk

unused devices: <none>
#


My system is half-converted and is now unbootable.  What am I going to do?


* Re: Help
  2009-08-21 19:20     ` Help Info
@ 2009-08-21 19:38       ` John Robinson
  2009-08-21 20:51         ` Help Info
  2009-08-22  6:14       ` Help Info
  1 sibling, 1 reply; 22+ messages in thread
From: John Robinson @ 2009-08-21 19:38 UTC (permalink / raw)
  To: Info; +Cc: linux-raid

On 21/08/2009 20:20, Info@quantum-sci.net wrote:
> My God, the command is not working.  I need to remove sdb1 from md0 so I can change it from a RAID10 to RAID1, and it simply ignores my command:
> # mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
> # cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
> md2 : active raid10 sdb3[1]
>       1868560128 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
>       bitmap: 94/446 pages [376KB], 2048KB chunk
> 
> md1 : active raid10 sdb2[1]
>       6297344 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
>       bitmap: 0/25 pages [0KB], 128KB chunk
> 
> md0 : active raid10 sdb1[1]
>       78654080 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
>       bitmap: 76/151 pages [304KB], 256KB chunk

Well, it won't let you remove the only thing keeping the array active. 
Stop the array first with `mdadm --stop /dev/md0`. After that I think 
you can just create your new RAID-1 array without doing anything else.
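
I.e. something like this (untested; sda1 gets added back in once you 
have copied your data across):

# mdadm --stop /dev/md0
# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1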

Cheers,

John.


* Re: RAID10 Layouts
  2009-08-21 16:43 ` Goswin von Brederlow
  2009-08-21 18:02   ` Info
@ 2009-08-21 20:42   ` Keld Jørn Simonsen
  2009-08-21 21:04     ` Info
  2009-08-21 21:57     ` Bill Davidsen
  1 sibling, 2 replies; 22+ messages in thread
From: Keld Jørn Simonsen @ 2009-08-21 20:42 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: Info, linux-raid

On Fri, Aug 21, 2009 at 06:43:28PM +0200, Goswin von Brederlow wrote:
> Info@quantum-sci.net writes:
> 
> > Hello list,
> >
> > Researching RAID10, trying to learn the most advanced system for a 2
> > SATA drive system.  Have two WD 2TB drives for a media computer, and
> > the most important requirement is data redundancy.  I realize that
> > RAID is no substitute for backups, but this is a backup for the
> > backups and the purpose here is data safety.  The secondary goal is
> > speed enhancement.  It appears that RAID10 can give both.
> >
> > First question is on layout of RAID10.  In studying the man pages it
> > seems that Far mode gives 95% of the speed of RAID0, but with
> > increased seek for writes.  And that Offset retains much of this
> > benefit while increasing efficiency of writes.  What should be the
> > preference, Far or Offset?  Are they equally robust?
> 
> All raid10 layouts offer the same robustness. Which layout is best for
> you really depends on your use case. Probably the biggest factor will
> be the average file size. My experience is that with large files the
> far copies do not cost noticeable write speed while being twice as
> fast reading as raid1.

The file system elevator makes up for the extra head movement of Far-layout writes.

> > How safe is the data in Far or Offset mode?  If a drive fails, will
> > a complete, usable, bootable system exist on the other drive?
> > (These two are the only drives in the system, which is Debian
> > Testing, Debian kernel 2.6.30-5) Need I make any special Grub
> > settings?
> 
> I don't think lilo or grub1 can boot from raid10 at all with offset or
> far copies. With near copies you are identical to a simple raid1 so
> that would boot.

There is a howto on setting up a system that can continue running if one
disk fails, at
http://linux-raid.osdl.org/index.php/Preventing_against_a_failing_disk

> > How does this look:
> > # mdadm --create /dev/md0 --level=raid10 --layout=o2 --metadata=1.2 --chunk=64 --raid-disks=2 missing /dev/sdb1
> 
> On partitions it is safe to use the 1.1 format. Saves you 4k. Jupey.
> 
> You should play with the chunksize though and try with and without
> bitmap and different bitmap sizes. Bitmap costs some write performance
> but it greatly speeds up resyncs after a crash or temporary drive
> failure.

I would recommend a bigger chunk size, at least 256 KiB.
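
On the create command from above that would be, for instance:

# mdadm --create /dev/md0 --level=raid10 --layout=o2 --metadata=1.2 --chunk=256 --raid-disks=2 missing /dev/sdb1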

Best regards
keld


* Re: Help
  2009-08-21 19:38       ` Help John Robinson
@ 2009-08-21 20:51         ` Info
  0 siblings, 0 replies; 22+ messages in thread
From: Info @ 2009-08-21 20:51 UTC (permalink / raw)
  To: linux-raid

On Friday 21 August 2009 12:38:00 John Robinson wrote:
> Well, it won't let you remove the only thing keeping the array active. 
> Stop the array first with `mdadm --stop /dev/md0`. After that I think 
> you can just create your new RAID-1 array without doing anything else.

WHEW, thank you.




* Re: RAID10 Layouts
  2009-08-21 20:42   ` Keld Jørn Simonsen
@ 2009-08-21 21:04     ` Info
  2009-08-21 21:57     ` Bill Davidsen
  1 sibling, 0 replies; 22+ messages in thread
From: Info @ 2009-08-21 21:04 UTC (permalink / raw)
  To: linux-raid

On Friday 21 August 2009 13:42:34 Keld Jørn Simonsen wrote:
> There is a howto on setting up a system that can continue running if one
> disk fails, at
> http://linux-raid.osdl.org/index.php/Preventing_against_a_failing_disk
...
> I would recommend a bigger chunk size, at least 256 KiB.

Very good, thank you Keld.


* Re: RAID10 Layouts
  2009-08-21 20:42   ` Keld Jørn Simonsen
  2009-08-21 21:04     ` Info
@ 2009-08-21 21:57     ` Bill Davidsen
  1 sibling, 0 replies; 22+ messages in thread
From: Bill Davidsen @ 2009-08-21 21:57 UTC (permalink / raw)
  To: Keld Jørn Simonsen; +Cc: Goswin von Brederlow, Info, linux-raid

Keld Jørn Simonsen wrote:
> On Fri, Aug 21, 2009 at 06:43:28PM +0200, Goswin von Brederlow wrote:
>   
>> Info@quantum-sci.net writes:
>>
>>     
>>> Hello list,
>>>
>>> Researching RAID10, trying to learn the most advanced system for a 2
>>> SATA drive system.  Have two WD 2TB drives for a media computer, and
>>> the most important requirement is data redundancy.  I realize that
>>> RAID is no substitute for backups, but this is a backup for the
>>> backups and the purpose here is data safety.  The secondary goal is
>>> speed enhancement.  It appears that RAID10 can give both.
>>>
>>> First question is on layout of RAID10.  In studying the man pages it
>>> seems that Far mode gives 95% of the speed of RAID0, but with
>>> increased seek for writes.  And that Offset retains much of this
>>> benefit while increasing efficiency of writes.  What should be the
>>> preference, Far or Offset?  Are they equally robust?
>>>       
>> All raid10 layouts offer the same robustness. Which layout is best for
>> you really depends on your use case. Probably the biggest factor will
>> be the average file size. My experience is that with large files the
>> far copies do not cost noticeable write speed while being twice as
>> fast reading as raid1.
>>     
>
> The file system elevator makes up for the extra head movement of Far-layout writes.
>
>   
>>> How safe is the data in Far or Offset mode?  If a drive fails, will
>>> a complete, usable, bootable system exist on the other drive?
>>> (These two are the only drives in the system, which is Debian
>>> Testing, Debian kernel 2.6.30-5) Need I make any special Grub
>>> settings?
>>>       
>> I don't think lilo or grub1 can boot from raid10 at all with offset or
>> far copies. With near copies you are identical to a simple raid1 so
>> that would boot.
>>     
>
> There is a howto on setting up a system that can continue running if one
> disk fails, at
> http://linux-raid.osdl.org/index.php/Preventing_against_a_failing_disk
>
>   
>>> How does this look:
>>> # mdadm --create /dev/md0 --level=raid10 --layout=o2 --metadata=1.2 --chunk=64 --raid-disks=2 missing /dev/sdb1
>>>       
>> On partitions it is safe to use the 1.1 format. Saves you 4k. Jupey.
>>
>> You should play with the chunksize though and try with and without
>> bitmap and different bitmap sizes. Bitmap costs some write performance
>> but it greatly speeds up resyncs after a crash or temporary drive
>> failure.
>>     
>
> I would recommend a bigger chunk size, at least 256 KiB.
>   

You really want to look at stripe-size and stride-size when creating an 
ext[234] filesystem on top of raid; good things happen there.
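
With the 64k chunk from the original command and 4 KiB blocks on two 
striped disks, that would be something like (illustrative numbers; 
stride = chunk size / block size, stripe-width = stride * data disks):

# mke2fs -j -b 4096 -E stride=16,stripe-width=32 /dev/md2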

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc

"You are disgraced professional losers. And by the way, give us our money back."
    - Representative Earl Pomeroy,  Democrat of North Dakota
on the A.I.G. executives who were paid bonuses  after a federal bailout.





* Re: Help
  2009-08-21 19:20     ` Help Info
  2009-08-21 19:38       ` Help John Robinson
@ 2009-08-22  6:14       ` Info
  2009-08-22  9:34         ` Help NeilBrown
  1 sibling, 1 reply; 22+ messages in thread
From: Info @ 2009-08-22  6:14 UTC (permalink / raw)
  To: linux-raid


Not able to boot to my RAID devices.  md0 is / and ext3 RAID1, but md1 and md2 are swap and JFS respectively, RAID10 created like this:
mdadm --create /dev/md1 --level=raid10 --layout=o2 --metadata=1.2 --chunk=256 --raid-disks=2 missing /dev/sdb2

It gives the initial kernel boot message but then says 
invalid raid superblock magic on sdb2
invalid raid superblock magic on sdb3

... and halts progress.  I have to hard-reset to continue.  Why isn't the error more specific?

I've tried setting the metadata to 1.1, and tried adjusting mdadm.conf from /dev/md/1 to /dev/md1, but neither helped.  The parts are set to raid autodetect and the kernel parameter is set to md_autodetect.  What could be wrong?
 


On Friday 21 August 2009 12:20:28 Info@quantum-sci.net wrote:
> 
> My God, the command is not working.  I need to remove sdb1 from md0 so I can change it from a RAID10 to RAID1, and it simply ignores my command:
> # mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
> # cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
> md2 : active raid10 sdb3[1]
>       1868560128 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
>       bitmap: 94/446 pages [376KB], 2048KB chunk
> 
> md1 : active raid10 sdb2[1]
>       6297344 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
>       bitmap: 0/25 pages [0KB], 128KB chunk
> 
> md0 : active raid10 sdb1[1]
>       78654080 blocks super 1.2 64K chunks 2 offset-copies [2/1] [_U]
>       bitmap: 76/151 pages [304KB], 256KB chunk
> 
> unused devices: <none>
> #
> 
> 
> My system is half-converted and is now unbootable.  What am I going to do?
> 
> 



* Re: RAID10 Layouts
  2009-08-21 18:02   ` Info
  2009-08-21 19:20     ` Help Info
@ 2009-08-22  6:31     ` Goswin von Brederlow
  1 sibling, 0 replies; 22+ messages in thread
From: Goswin von Brederlow @ 2009-08-22  6:31 UTC (permalink / raw)
  To: Info; +Cc: linux-raid

Info@quantum-sci.net writes:

> Thank you Goswin.
>
>
> On Friday 21 August 2009 09:43:28 Goswin von Brederlow wrote:
>> I don't think lilo or grub1 can boot from raid10 at all with offset or
>> far copies. With near copies you are identical to a simple raid1 so
>> that would boot.
>> 
>> So to be bootable even with a failed drive you should partition the
>> disk. Create a small raid1 for the system and a large raid10 for the
>> data.
>
> Uh oh, already set all 3 parts for RAID10, but haven't switched over yet.
>
> As it happens my / is on sda1 and /home is sda3  (swap is sda2), so it'll be pretty easy to just make / RAID1.  Do I need to make swap RAID1 and not 10?

Need? No. Want? No idea. My experience is that if you need swap (as in
swapping in/out, not just it being used for garbage) then you have lost
anyway. Half or twice the speed on swap doesn't matter; the system
will crawl regardless.
  
>> I would stay away from any half baked bios stuff. It will be no better
>> than linux software raid but will tie you to the specific bios. If
>> your mainboard fails and the next one has a different bios you can't
>> boot your disks.
>
> Thank you.
>
>  
>> > How does this look:
>> > # mdadm --create /dev/md0 --level=raid10 --layout=o2 --metadata=1.2 --chunk=64 --raid-disks=2 missing /dev/sdb1
>> 
>> On partitions it is safe to use the 1.1 format. Saves you 4k. Jupey.
>
> 4k of what?  One time only, or on every cluster?  Any additional benefit to 1.2?

4k of space overall. The 1.2 format leaves the first 4k of the device
free for use of a bootloader/MBR.

> My system records mpeg4 from DishNetwork satellite (R5000-HD), so it
> handles mostly files over 1GB.  However its most rigorous duty is
> scanning those videos for commercials, and marking locations in a
> mysql database.  The disk light is constantly on and system response
> is sluggish when this is being done.  I don't understand how an
> advanced drive like this can be so bogged down, but I hope RAID10
> will speed things up.  Maybe there is a way to increase disk cache
> size?

man blockdev

Raid1 and the different Raid10 layouts work well for different access
patterns. Plain raid1 allows 2 streams to read from the drives in
parallel. If you have multiple streams that will reduce seeks. On the
other hand the raid10 far layout doubles sequential read speed. And so
on. Every layout behaves differently.

For scanning your videos raid10 with far layout is probably best with
a large read ahead. For your database a simple raid1 is probably better.
It might be beneficial to have the two on separate partitions with
different raid modes.

Or it might be beneficial to have 2 partitions (sdX3 + sdX4) both with
simple raid1 but flag sdb3 and sda4 as --write-mostly. That way the DB
would always read from sda while the videos read from sdb.
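
A sketch of those two commands (untested, device names as in the
example above):

# mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda3 --write-mostly /dev/sdb3
# mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdb4 --write-mostly /dev/sda4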

As said before, a lot of this depends on the usage pattern, and that
means trying different things with the workload you will have in
production.

>> You should play with the chunksize though and try with and without
>> bitmap and different bitmap sizes. Bitmap costs some write performance
>> but it greatly speeds up resyncs after a crash or temporary drive
>> failure.
>
> My partitions and data are so enormous that I can't really do any
> experimenting.  Definitely will use the write-intent bitmap.

That is always a problem.

MfG
        Goswin


* Re: Help
  2009-08-22  6:14       ` Help Info
@ 2009-08-22  9:34         ` NeilBrown
  2009-08-22 12:56           ` Help Info
  0 siblings, 1 reply; 22+ messages in thread
From: NeilBrown @ 2009-08-22  9:34 UTC (permalink / raw)
  To: Info; +Cc: linux-raid

On Sat, August 22, 2009 4:14 pm, Info@quantum-sci.net wrote:
>
> Not able to boot to my RAID devices.  md0 is / and ext3 RAID1, but md1 and
> md2 are swap and JFS respectively, RAID10 created like this:
> mdadm --create /dev/md1 --level=raid10 --layout=o2 --metadata=1.2
> --chunk=256 --raid-disks=2 missing /dev/sdb2
>
> It gives the initial kernel boot message but then says
> invalid raid superblock magic on sdb2
> invalid raid superblock magic on sdb3
>
> ... and halts progress.  I have to hard-reset to continue.  Why isn't the
> error more specific?

You say md0 is raid1 but mdstat shows it to be raid10, so that won't boot.

'raid autodetect' only works for 0.90 metadata, and you are using 1.x.
You should not use 'raid autodetect' partitions.  Rather the initrd
should use mdadm to assemble the arrays.  Most distros seem to get this
right these days.  Maybe you just need to rebuild your
initrd...
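
On Debian something like this should do it (a sketch; regenerating
mdadm.conf first so the initramfs knows about your arrays):

# /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
# update-initramfs -u -k all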

NeilBrown


>
> I've tried setting the metadata to 1.1, and tried adjusting mdadm.conf
> from /dev/md/1 to /dev/md1, but neither helped.  The parts are set to raid
> autodetect and the kernel parameter is set to md_autodetect.  What could
> be wrong?



* Re: Help
  2009-08-22  9:34         ` Help NeilBrown
@ 2009-08-22 12:56           ` Info
  2009-08-22 16:47             ` Help John Robinson
  0 siblings, 1 reply; 22+ messages in thread
From: Info @ 2009-08-22 12:56 UTC (permalink / raw)
  To: linux-raid

On Saturday 22 August 2009 02:34:12 NeilBrown wrote:
> You say md0 is raid1 but mdstat shows it to be raid10, so that won't boot.

Thanks Neil.  However that was an early attempt before I knew RAID10 won't boot.


> 'raid autodetect' only works for 0.90 metadata, and you are using 1.x.
> You should not use 'raid autodetect' partitions.  Rather the initrd
> should use mdadm to assemble the arrays.  Most distros seem to get this
> right these days.  Maybe you just need to rebuild your
> initrd...

I am not using an initrd.  Have all the RAID and disk drivers built into the (custom-compiled) kernel.  It uses mdadm to assemble the arrays?  Maybe this is the problem.

I am using this procedure to build a RAID array from a live system:
http://www.howtoforge.com/software-raid1-grub-boot-debian-etch

It is very lucid and clear, however I am slightly modifying it to use RAID10 on my second and third partitions.  When I come to 
update-initramfs -u
... the only initrd it updates is for an old stock kernel.  It doesn't build one for any of my compiled kernels.

What partition type should I use rather than raid autodetect?  Or should I revert to 0.90 metadata?

Looking at dmesg it does say that md1 & 2 do not have a valid v0.90 superblock.  There is no other linux raid partition type, so I guess it's got to be v0.90.  Why do they make 1.1 and 1.2 then, if they do not work?



* Re: Help
  2009-08-22 12:56           ` Help Info
@ 2009-08-22 16:47             ` John Robinson
  2009-08-22 18:12               ` Help Info
  0 siblings, 1 reply; 22+ messages in thread
From: John Robinson @ 2009-08-22 16:47 UTC (permalink / raw)
  To: Info; +Cc: linux-raid

On 22/08/2009 13:56, Info@quantum-sci.net wrote:
[...]
> It is very lucid and clear, however I am slightly modifying it to use RAID10 on my second and third partitions.  When I come to 
> update-initramfs -u
> ... the only initrd it updates is for an old stock kernel.  It doesn't build one for any of my compiled kernels.

You should have mkinitrd (that's what it is on Fedora/RHEL/CentOS) or 
something similar with which you can build initramfs images for any kernel.

> What partition type should I use rather than raid autodetect?  Or should I revert to 0.90 metadata?

Probably type DA, Non-FS data, though type FD will be fine even if 
they're not auto-detected.
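
Non-interactively, changing the type might look like this (old sfdisk 
syntax, if I remember it right; fdisk's 't' command does the same 
interactively):

# sfdisk --change-id /dev/sdb 1 da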

> Looking at dmesg it does say that md1 & 2 do not have a valid v0.90 superblock. There is no other linux raid partition type, so I guess it's got to be v0.90. Why do they make 1.1 and 1.2 then, if they do not work?

The newer metadata types have their benefits. Auto-detection is being 
deprecated; I think it's because things which are only for boot-up time 
are being pushed out of the permanently-loaded kernel into initramfs, so 
they don't hang around wasting space on a running system. For example, 
CentOS 5 uses autodetection, while Fedora 10 automatically puts mdadm in 
the initramfs and runs it at the right time.

Cheers,

John.



* Re: Help
  2009-08-22 16:47             ` Help John Robinson
@ 2009-08-22 18:12               ` Info
  2009-08-22 20:45                 ` Help Info
  2009-08-23 20:28                 ` Help John Robinson
  0 siblings, 2 replies; 22+ messages in thread
From: Info @ 2009-08-22 18:12 UTC (permalink / raw)
  To: linux-raid

On Saturday 22 August 2009 09:47:48 John Robinson wrote:
> You should have mkinitrd (that's what it is on Fedora/RHEL/CentOS) or 
> something similar with which you can build initramfs images for any kernel.

OK, once I changed the version to 0.90 it stopped just at the kernel banner on boot and hung.  I was about to give up on RAID when your message came through, and I created the initrd.img file.  I always compile my own kernels and don't depend on an initrd, but it now seems to be necessary.  So in Debian:
# mkinitramfs -o /boot/initrd.img-2.6.30-5 2.6.30-5
... reboot, and voila it did what it was supposed to, for a change.  I'm now resyncing my 2TB drives, which will take a good while.

 
> > What partition type should I use rather than raid autodetect?  Or should I revert to 0.90 metadata?
> 
> Probably type DA, Non-FS data, though type FD will be fine even if 
> they're not auto-detected.

It simply found 'bad magick' with FD, so that doesn't work with the newer versions.  I tried to use both newer versions, but it's not possible.  You sound not quite sure of the partition type, so I'll stick with FD and 0.90.  Thanks though John.

Goswin says, "For scanning your videos raid10 with far layout is probably best with
a large read ahead."  I have the RAID10 blocksize set to 1024 for the video partition, but any idea how to set readahead?






* Re: Help
  2009-08-22 18:12               ` Help Info
@ 2009-08-22 20:45                 ` Info
  2009-08-22 20:59                   ` Help Guy Watkins
  2009-08-23 20:28                 ` Help John Robinson
  1 sibling, 1 reply; 22+ messages in thread
From: Info @ 2009-08-22 20:45 UTC (permalink / raw)
  To: linux-raid

On Saturday 22 August 2009 11:12:35 Info@quantum-sci.net wrote:
> Goswin says, "For scanning your videos raid10 with far layout is probably best with
> a large read ahead."  I have the RAID10 blocksize set to 1024 for the video partition, but any idea how to set readahead?

My gosh, it turns out this setting is astounding.  You test your drive speed with some large file, as such:
# time dd if={somelarge}.iso of=/dev/null bs=256k

... and check your drive's default readahead setting:
# blockdev --getra /dev/sda
256

... then test with various settings like 1024, 1536, 2048, 4096, 8192, and maybe 16384:
# blockdev --setra 4096 /dev/sda

Here are the results for my laptop.  I can't test the HTPC with the array yet, as it's still syncing.
   256	 40.4 MB/s
 1024	123 MB/s
 1536	2.7 GB/s
 2048	2.4 GB/s
 4096	2.4 GB/s
 8192	2.4 GB/s
16384	2.5 GB/s

I suspect it's best to use the minimum readahead for the best speed (in my case 1536), for two reasons:
- To save memory;
- So there isn't such a performance impact when the blocks are not sequential.
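
Note these settings don't survive a reboot; to keep one you could put the call somewhere like /etc/rc.local.  For the array it's probably worth setting it on the md device as well (a guess on my part):

# blockdev --setra 4096 /dev/md2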



* RE: Help
  2009-08-22 20:45                 ` Help Info
@ 2009-08-22 20:59                   ` Guy Watkins
       [not found]                     ` <200908230631.46865.Info@quantum-sci.net>
  0 siblings, 1 reply; 22+ messages in thread
From: Guy Watkins @ 2009-08-22 20:59 UTC (permalink / raw)
  To: Info, linux-raid

} -----Original Message-----
} From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
} owner@vger.kernel.org] On Behalf Of Info@quantum-sci.net
} Sent: Saturday, August 22, 2009 4:45 PM
} To: linux-raid@vger.kernel.org
} Subject: Re: Help
} 
} On Saturday 22 August 2009 11:12:35 Info@quantum-sci.net wrote:
} > Goswin says, "For scanning your videos raid10 with far layout is
} probably best with
} > a large read ahead."  I have the RAID10 blocksize set to 1024 for the
} video partition, but any idea how to set readahead?
} 
} My gosh, it turns out this setting is astounding.  You test your drive
} speed with some large file, as such:
} # time dd if={somelarge}.iso of=/dev/null bs=256k
} 
} ... and check your drive's default readahead setting:
} # blockdev --getra /dev/sda
} 256
} 
} ... then test with various settings like 1024, 1536, 2048, 4096, 8192, and
} maybe 16384:
} # blockdev --setra 4096 /dev/sda
} 
} Here are the results for my laptop.  I can't test the HTPC with the array
} yet, as it's still syncing.
}    256	 40.4 MB/s
}  1024	123 MB/s
}  1536	2.7 GB/s
}  2048	2.4 GB/s
}  4096	2.4 GB/s
}  8192	2.4 GB/s
} 16384	2.5 GB/s
} 
} I suspect it's best to use the minimum readahead for the best speed (in my
} case 1536), for two reasons:
} - To save memory;
} - So there isn't such a performance impact when the blocks are not
} sequential.

The disk cache is being used.  You should reboot between each test, or use a
file much bigger than the amount of RAM you have.  Or use a different file
each time.
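
Or drop the page cache between runs instead of rebooting (works on
2.6.16 and later kernels):

# sync
# echo 3 > /proc/sys/vm/drop_caches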



* Re: Help
  2009-08-22 18:12               ` Help Info
  2009-08-22 20:45                 ` Help Info
@ 2009-08-23 20:28                 ` John Robinson
  1 sibling, 0 replies; 22+ messages in thread
From: John Robinson @ 2009-08-23 20:28 UTC (permalink / raw)
  To: Info; +Cc: linux-raid

On 22/08/2009 19:12, Info@quantum-sci.net wrote:
> On Saturday 22 August 2009 09:47:48 John Robinson wrote:
[...]
>>> What partition type should I use rather than raid autodetect?  Or should I revert to 0.90 metadata?
>> Probably type DA, Non-FS data, though type FD will be fine even if 
>> they're not auto-detected.
> 
> It simply found 'bad magick' with FD, so that doesn't work with the newer versions.  I tried to use both newer versions, but it's not possible.  You sound not quite sure of the partition type, so I'll stick with FD and 0.90.  Thanks though John.

I said "probably" DA because that's what's been suggested by others 
previously on this list. Others have simply used 83, but that's not 
ideal because if the partitions appear to have filesystems on (e.g. the 
metadata's not at the beginning), they might get auto-mounted without md 
RAID. I'm sure FD will work fine with later metadata versions as long as 
you have mdadm in your initramfs, and while as you've noted there'll be 
a whinge in the boot log about it not being version 0.90, it's not going 
to cause the kernel to lock up or anything like that.

Cheers,

John.


* Re: Help
       [not found]                     ` <200908230631.46865.Info@quantum-sci.net>
@ 2009-08-24 23:08                       ` Info
  2009-08-24 23:38                         ` Help NeilBrown
  0 siblings, 1 reply; 22+ messages in thread
From: Info @ 2009-08-24 23:08 UTC (permalink / raw)
  To: linux-raid


The sync has finally finished, but something's wrong with the first partition set: only sda1 is a member of md0.  In dmesg I find:
[    4.756365] md: kicking non-fresh sdb1 from array!

Huh?  If it's not fresh, why doesn't it sync it?  What should I do about this?  How did it happen on a new array?




* Re: Help
  2009-08-24 23:08                       ` Help Info
@ 2009-08-24 23:38                         ` NeilBrown
  2009-08-25 13:18                           ` Help Info
  0 siblings, 1 reply; 22+ messages in thread
From: NeilBrown @ 2009-08-24 23:38 UTC (permalink / raw)
  To: Info; +Cc: linux-raid

On Tue, August 25, 2009 9:08 am, Info@quantum-sci.net wrote:
>
> The sync has finally finished, but something's wrong with the first
> partition set: only sda1 is a member of md0.  In dmesg I find:
> [    4.756365] md: kicking non-fresh sdb1 from array!
>
> Huh?  If it's not fresh, why doesn't it sync it?  What should I do about
> this?  How did it happen on a new array?


You'll need to provide a lot more information, starting with the
kernel log at all relevant times (and don't use 'grep', just cut out
a contiguous section of the log including a few lines before and
after anything that might be relevant).
And "mdadm -E" of any relevant device.

"Kicking non-fresh sdb1 from array" is a message that you get when
assembling an array if the metadata on sdb1 is older than the others.
This can happen if it was evicted from the array due to failure or
if the array was assembled without sdb1 for some reason.  There
are probably other scenarios.
That is why I need to see recent history, including anything
from the last time the array was active.
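
If sdb1 turns out to be merely stale, getting it back in once you
understand the cause is usually along the lines of:

# mdadm /dev/md0 --re-add /dev/sdb1

with a plain --add (and a full resync) as the fallback.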

NeilBrown




* Re: Help
  2009-08-24 23:38                         ` Help NeilBrown
@ 2009-08-25 13:18                           ` Info
  2009-08-27 12:47                             ` Help Info
  0 siblings, 1 reply; 22+ messages in thread
From: Info @ 2009-08-25 13:18 UTC (permalink / raw)
  To: linux-raid

On Monday 24 August 2009 16:38:41 you wrote:
> You'll need to provide a lot more information, starting with the
> kernel log at all relevant times (and don't use 'grep', just cut out
> a contiguous section of the log including a few lines before and
> after anything that might be relevant).
> And "mdadm -E" of any relevant device.

# mdadm -E /dev/sdb
mdadm: No md superblock detected on /dev/sdb.
#

Aug 22 21:17:17 localhost kernel: [    3.048020] ata4: SATA link down (SStatus 0 SControl 300)
Aug 22 21:17:17 localhost kernel: [    3.048037] ata6: SATA link down (SStatus 0 SControl 300)
Aug 22 21:17:17 localhost kernel: [    3.048045] ata5: SATA link down (SStatus 0 SControl 300)
Aug 22 21:17:17 localhost kernel: [    3.048054] ata3: SATA link down (SStatus 0 SControl 300)
Aug 22 21:17:17 localhost kernel: [    3.201035] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Aug 22 21:17:17 localhost kernel: [    3.201044] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Aug 22 21:17:17 localhost kernel: [    3.204330] ata2.00: ATA-8: WDC WD20EADS-00S2B0, 04.05G04, max UDMA/133
Aug 22 21:17:17 localhost kernel: [    3.204333] ata2.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32)
Aug 22 21:17:17 localhost kernel: [    3.206097] ata1.00: ATA-8: WDC WD20EADS-00R6B0, 01.00A01, max UDMA/133
Aug 22 21:17:17 localhost kernel: [    3.206100] ata1.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32)
Aug 22 21:17:17 localhost kernel: [    3.207345] ata2.00: configured for UDMA/133
Aug 22 21:17:17 localhost kernel: [    3.211122] ata1.00: configured for UDMA/133
Aug 22 21:17:17 localhost kernel: [    3.211203] scsi 0:0:0:0: Direct-Access     ATA      WDC WD20EADS-00R 01.0 PQ: 0 ANSI: 5
Aug 22 21:17:17 localhost kernel: [    3.211406] sd 0:0:0:0: [sda] 3907029168 512-byte hardware sectors: (2.00 TB/1.81 TiB)
Aug 22 21:17:17 localhost kernel: [    3.211417] sd 0:0:0:0: [sda] Write Protect is off
Aug 22 21:17:17 localhost kernel: [    3.211435] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 22 21:17:17 localhost kernel: [    3.211495]  sda:<5>sd 0:0:0:0: Attached scsi generic sg0 type 0
Aug 22 21:17:17 localhost kernel: [    3.211568] scsi 1:0:0:0: Direct-Access     ATA      WDC WD20EADS-00S 04.0 PQ: 0 ANSI: 5
Aug 22 21:17:17 localhost kernel: [    3.211719] sd 1:0:0:0: [sdb] 3907029168 512-byte hardware sectors: (2.00 TB/1.81 TiB)
Aug 22 21:17:17 localhost kernel: [    3.211728] sd 1:0:0:0: [sdb] Write Protect is off
Aug 22 21:17:17 localhost kernel: [    3.211746] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 22 21:17:17 localhost kernel: [    3.211798]  sdb:<5>sd 1:0:0:0: Attached scsi generic sg1 type 0
Aug 22 21:17:17 localhost kernel: [    3.222483]  sdb1 sdb2 sdb3
Aug 22 21:17:17 localhost kernel: [    3.222744] sd 1:0:0:0: [sdb] Attached SCSI disk
Aug 22 21:17:17 localhost kernel: [    3.223107]  sda1 sda2 sda3
Aug 22 21:17:17 localhost kernel: [    3.223321] sd 0:0:0:0: [sda] Attached SCSI disk

...

Aug 22 21:17:17 localhost kernel: [    4.719904] md: md0 stopped.
Aug 22 21:17:17 localhost kernel: [    4.756229] md: bind<sdb1>
Aug 22 21:17:17 localhost kernel: [    4.756348] md: bind<sda1>
Aug 22 21:17:17 localhost kernel: [    4.756365] md: kicking non-fresh sdb1 from array!
Aug 22 21:17:17 localhost kernel: [    4.756370] md: unbind<sdb1>
Aug 22 21:17:17 localhost kernel: [    4.761035] md: export_rdev(sdb1)
Aug 22 21:17:17 localhost kernel: [    4.762357] raid1: raid set md0 active with 1 out of 2 mirrors
Aug 22 21:17:17 localhost kernel: [    4.768650] md0: bitmap initialized from disk: read 10/10 pages, set 198 bits
Aug 22 21:17:17 localhost kernel: [    4.768653] created bitmap (151 pages) for device md0
Aug 22 21:17:17 localhost kernel: [    4.777530] md: md1 stopped.
Aug 22 21:17:17 localhost kernel: [    4.777616]  md0: unknown partition table
Aug 22 21:17:17 localhost kernel: [    4.781705] md: bind<sdb2>
Aug 22 21:17:17 localhost kernel: [    4.781820] md: bind<sda2>
Aug 22 21:17:17 localhost kernel: [    4.783078] raid10: raid set md1 active with 2 out of 2 devices
Aug 22 21:17:17 localhost kernel: [    4.791063] md1: bitmap initialized from disk: read 13/13 pages, set 0 bits
Aug 22 21:17:17 localhost kernel: [    4.791066] created bitmap (193 pages) for device md1
Aug 22 21:17:17 localhost kernel: [    4.827200] md: md2 stopped.
Aug 22 21:17:17 localhost kernel: [    4.827294]  md1: unknown partition table
Aug 22 21:17:17 localhost kernel: [    4.835293] md: bind<sdb3>
Aug 22 21:17:17 localhost kernel: [    4.835413] md: bind<sda3>
Aug 22 21:17:17 localhost kernel: [    4.846525] raid10: raid set md2 active with 2 out of 2 devices
Aug 22 21:17:17 localhost kernel: [    4.862129] md2: bitmap initialized from disk: read 14/14 pages, set 0 bits
Aug 22 21:17:17 localhost kernel: [    4.862132] created bitmap (223 pages) for device md2
Aug 22 21:17:17 localhost kernel: [    4.898461]  md2: unknown partition table

...

Hm, how would the superblock have been destroyed?  This is a little disturbing.





* Re: Help
  2009-08-25 13:18                           ` Help Info
@ 2009-08-27 12:47                             ` Info
  0 siblings, 0 replies; 22+ messages in thread
From: Info @ 2009-08-27 12:47 UTC (permalink / raw)
  To: linux-raid


OK I think I've resolved this, so don't worry about me anymore.



On Tuesday 25 August 2009 06:18:38 Info@quantum-sci.net wrote:
> [...]
