* RAID down, dont know why!
@ 2009-11-08 14:00 Andrew Dunn
  2009-11-08 14:07 ` Joe Landman
  2009-11-08 14:22 ` Robin Hill
  0 siblings, 2 replies; 12+ messages in thread
From: Andrew Dunn @ 2009-11-08 14:00 UTC (permalink / raw)
  To: linux-raid list

I just copied 4+ TiB of information to this array, restarted 5 times, and tried to access it... what is going on?

What kind of logs do you need? I really need help!


This is an automatically generated mail message from mdadm
running on ALEXANDRIA

A Fail event had been detected on md device /dev/md0.

It could be related to component device /dev/sdl1.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid6 sdl1[9](F) sdm1[8] sdi1[10](F) sdj1[11](F) sdk1[12](F) sdh1[3] sdf1[1] sdg1[2] sde1[0]
      6837318656 blocks level 6, 1024k chunk, algorithm 2 [9/5] [UUUU____U]
      
md1 : active raid0 sdc1[1] sdb1[0]
      586067072 blocks 64k chunks
      
unused devices: <none>
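Mail like the notice above is typical of mdadm's monitor mode. For context, a minimal monitoring setup looks roughly like this (the mail address and delay below are illustrative examples, not taken from this system):

```shell
# /etc/mdadm/mdadm.conf should name a recipient for event mail:
#   MAILADDR you@example.com
# Then run the monitor daemon, which watches all arrays found by
# --scan and sends mail on Fail/Degraded events:
mdadm --monitor --scan --daemonise --delay=60
```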


-- 
Andrew Dunn
http://agdunn.net



* Re: RAID down, dont know why!
  2009-11-08 14:00 RAID down, dont know why! Andrew Dunn
@ 2009-11-08 14:07 ` Joe Landman
  2009-11-08 14:08   ` Andrew Dunn
       [not found]   ` <4AF82D29.507@harddata.com>
  2009-11-08 14:22 ` Robin Hill
  1 sibling, 2 replies; 12+ messages in thread
From: Joe Landman @ 2009-11-08 14:07 UTC (permalink / raw)
  To: Andrew Dunn; +Cc: linux-raid list

Andrew Dunn wrote:
> I just copied 4+ TiB of information to this array, restarted 5 times
> and tried to access it.... What is going on?

It looks like you have 4 failed drives: sdl, sdi, sdj, sdk.

Is it possible you lost power or connectivity to those drives?

If you have lsscsi installed, what does lsscsi tell you about this?

lsscsi  | grep sd[ijkl]

Given how close together those drives sit in the device ordering, I'd suspect a power loss, a poorly seated cable, or something similar affecting that group of drives.

Reseat power/signal cables on the drive bays, and see if this helps.


Joe

-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: landman@scalableinformatics.com
web  : http://scalableinformatics.com
        http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615


* Re: RAID down, dont know why!
  2009-11-08 14:07 ` Joe Landman
@ 2009-11-08 14:08   ` Andrew Dunn
  2009-11-08 14:15     ` Joe Landman
       [not found]     ` <4AF82DAC.4020307@harddata.com>
       [not found]   ` <4AF82D29.507@harddata.com>
  1 sibling, 2 replies; 12+ messages in thread
From: Andrew Dunn @ 2009-11-08 14:08 UTC (permalink / raw)
  To: landman; +Cc: linux-raid list

storrgie@ALEXANDRIA:~$ lsscsi  | grep sd[ijkl]
[11:0:0:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdi
[11:0:1:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdj
[11:0:2:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdk
[11:0:3:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdl


Joe Landman wrote:
> Andrew Dunn wrote:
>> I just copied 4+ TiB of information to this array, restarted 5 times
>> and tried to access it.... What is going on?
>
> It looks like you have 4 failed drives. sdl,sdi,sdj,sdk
>
> Is it possible you lost power or connectivity to those drives?
>
> If you have lsscsi installed, what does lsscsi tell you about this?
>
> lsscsi  | grep sd[ijkl]
>
> Given the proximity of the drives in ordering, I'd suspect a power
> loss, or cable seating, or similar to those drives.
>
> Reseat power/signal cables on the drive bays, and see if this helps.
>
>
> Joe
>

-- 
Andrew Dunn
http://agdunn.net



* Re: RAID down, dont know why!
  2009-11-08 14:08   ` Andrew Dunn
@ 2009-11-08 14:15     ` Joe Landman
  2009-11-08 14:21       ` Andrew Dunn
       [not found]     ` <4AF82DAC.4020307@harddata.com>
  1 sibling, 1 reply; 12+ messages in thread
From: Joe Landman @ 2009-11-08 14:15 UTC (permalink / raw)
  To: Andrew Dunn; +Cc: linux-raid list

Andrew Dunn wrote:
> storrgie@ALEXANDRIA:~$ lsscsi  | grep sd[ijkl]
> [11:0:0:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdi
> [11:0:1:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdj
> [11:0:2:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdk
> [11:0:3:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdl
> 

Does smartctl report drive failure?

	smartctl -a /dev/sdi | grep "SMART overall-health"
	smartctl -a /dev/sdj | grep "SMART overall-health"
	smartctl -a /dev/sdk | grep "SMART overall-health"
	smartctl -a /dev/sdl | grep "SMART overall-health"

> 
> Joe Landman wrote:
>> Andrew Dunn wrote:
>>> I just copied 4+ TiB of information to this array, restarted 5 times
>>> and tried to access it.... What is going on?
>> It looks like you have 4 failed drives. sdl,sdi,sdj,sdk
>>
>> Is it possible you lost power or connectivity to those drives?
>>
>> If you have lsscsi installed, what does lsscsi tell you about this?
>>
>> lsscsi  | grep sd[ijkl]
>>
>> Given the proximity of the drives in ordering, I'd suspect a power
>> loss, or cable seating, or similar to those drives.
>>
>> Reseat power/signal cables on the drive bays, and see if this helps.
>>
>>
>> Joe
>>
> 


-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: landman@scalableinformatics.com
web  : http://scalableinformatics.com
        http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615


* Re: RAID down, dont know why!
  2009-11-08 14:15     ` Joe Landman
@ 2009-11-08 14:21       ` Andrew Dunn
  0 siblings, 0 replies; 12+ messages in thread
From: Andrew Dunn @ 2009-11-08 14:21 UTC (permalink / raw)
  To: landman; +Cc: linux-raid list

storrgie@ALEXANDRIA:~$ sudo mdadm -D /dev/md0
/dev/md0:
        Version : 00.90
  Creation Time : Fri Nov  6 07:06:34 2009
     Raid Level : raid6
     Array Size : 6837318656 (6520.58 GiB 7001.41 GB)
  Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
   Raid Devices : 9
  Total Devices : 9
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Nov  8 09:17:55 2009
          State : clean, degraded, recovering
 Active Devices : 8
Working Devices : 9
 Failed Devices : 0
  Spare Devices : 1

     Chunk Size : 1024K

 Rebuild Status : 0% complete

           UUID : 397e0b3f:34cbe4cc:613e2239:070da8c8 (local to host ALEXANDRIA)
         Events : 0.56

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       1       8       81        1      active sync   /dev/sdf1
       2       8       97        2      active sync   /dev/sdg1
       3       8      113        3      active sync   /dev/sdh1
       4       8      129        4      active sync   /dev/sdi1
       5       8      145        5      active sync   /dev/sdj1
       9       8      161        6      spare rebuilding   /dev/sdk1
       7       8      177        7      active sync   /dev/sdl1
       8       8      193        8      active sync   /dev/sdm1

Did a:
sudo mdadm --assemble --force /dev/md0 /dev/sd[efghijklm]1

Now it's rebuilding. Why did it go down in the first place?

Power and connections are fine, and SMART reports:

storrgie@ALEXANDRIA:~$ sudo smartctl -a /dev/sde | grep "SMART overall-health"
SMART overall-health self-assessment test result: PASSED
storrgie@ALEXANDRIA:~$ sudo smartctl -a /dev/sdf | grep "SMART overall-health"
SMART overall-health self-assessment test result: PASSED
storrgie@ALEXANDRIA:~$ sudo smartctl -a /dev/sdg | grep "SMART overall-health"
SMART overall-health self-assessment test result: PASSED
storrgie@ALEXANDRIA:~$ sudo smartctl -a /dev/sdh | grep "SMART overall-health"
SMART overall-health self-assessment test result: PASSED
storrgie@ALEXANDRIA:~$ sudo smartctl -a /dev/sdi | grep "SMART overall-health"
SMART overall-health self-assessment test result: PASSED
storrgie@ALEXANDRIA:~$ sudo smartctl -a /dev/sdj | grep "SMART overall-health"
SMART overall-health self-assessment test result: PASSED
storrgie@ALEXANDRIA:~$ sudo smartctl -a /dev/sdk | grep "SMART overall-health"
SMART overall-health self-assessment test result: PASSED
storrgie@ALEXANDRIA:~$ sudo smartctl -a /dev/sdl | grep "SMART overall-health"
SMART overall-health self-assessment test result: PASSED
storrgie@ALEXANDRIA:~$ sudo smartctl -a /dev/sdm | grep "SMART overall-health"
SMART overall-health self-assessment test result: PASSED
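The nine near-identical checks above collapse into a loop; a compact version might look like this (the /dev/sd[e-m] range is assumed to match this system's nine array members, and `-H` is used as smartctl's short form of the health self-assessment query):

```shell
# Print the SMART overall-health verdict for each array member
# (run as root; /dev/sd[e-m] matches the nine drives in md0 here):
for d in /dev/sd[e-m]; do
    printf '%s: ' "$d"
    smartctl -H "$d" | grep "overall-health"
done
```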


Joe Landman wrote:
> Andrew Dunn wrote:
>> storrgie@ALEXANDRIA:~$ lsscsi  | grep sd[ijkl]
>> [11:0:0:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdi
>> [11:0:1:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdj
>> [11:0:2:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdk
>> [11:0:3:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdl
>>
>
> Does smartctl report drive failure?
>
>     smartctl -a /dev/sdi | grep "SMART overall-health"
>     smartctl -a /dev/sdj | grep "SMART overall-health"
>     smartctl -a /dev/sdk | grep "SMART overall-health"
>     smartctl -a /dev/sdl | grep "SMART overall-health"
>
>>
>> Joe Landman wrote:
>>> Andrew Dunn wrote:
>>>> I just copied 4+ TiB of information to this array, restarted 5 times
>>>> and tried to access it.... What is going on?
>>> It looks like you have 4 failed drives. sdl,sdi,sdj,sdk
>>>
>>> Is it possible you lost power or connectivity to those drives?
>>>
>>> If you have lsscsi installed, what does lsscsi tell you about this?
>>>
>>> lsscsi  | grep sd[ijkl]
>>>
>>> Given the proximity of the drives in ordering, I'd suspect a power
>>> loss, or cable seating, or similar to those drives.
>>>
>>> Reseat power/signal cables on the drive bays, and see if this helps.
>>>
>>>
>>> Joe
>>>
>>
>
>

-- 
Andrew Dunn
http://agdunn.net



* Re: RAID down, dont know why!
  2009-11-08 14:00 RAID down, dont know why! Andrew Dunn
  2009-11-08 14:07 ` Joe Landman
@ 2009-11-08 14:22 ` Robin Hill
  2009-11-08 14:24   ` Andrew Dunn
  1 sibling, 1 reply; 12+ messages in thread
From: Robin Hill @ 2009-11-08 14:22 UTC (permalink / raw)
  To: linux-raid list


On Sun Nov 08, 2009 at 09:00:29AM -0500, Andrew Dunn wrote:

> I just copied 4+ TiB of information to this array, restarted 5 times
> and tried to access it.... What is going on?
> 
> What kind of logs do you need, I really need help!
> 
From the message you've posted, it looks like something has triggered
the (simultaneous) removal of four drives from the array.  I'd check the
dmesg output - it should provide some information.  I'd guess these four
drives are all attached to the same controller (are they external or
internal?), so possibly the controller reset (or for external drives, it
could be a cable issue).

You should be able to force an assembly anyway (using the --force flag)
but I'd make sure you know exactly what the issue is first, otherwise
this is likely to happen again.

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |



* Re: RAID down, dont know why!
  2009-11-08 14:22 ` Robin Hill
@ 2009-11-08 14:24   ` Andrew Dunn
  2009-11-08 15:01     ` Robin Hill
  0 siblings, 1 reply; 12+ messages in thread
From: Andrew Dunn @ 2009-11-08 14:24 UTC (permalink / raw)
  To: linux-raid list

What would I be looking for in this? It's a lot to sift through.

Currently I'm just going through it line by line.

Robin Hill wrote:
> On Sun Nov 08, 2009 at 09:00:29AM -0500, Andrew Dunn wrote:
>
>   
>> I just copied 4+ TiB of information to this array, restarted 5 times
>> and tried to access it.... What is going on?
>>
>> What kind of logs do you need, I really need help!
>>
>>     
> From the message you've posted, it looks like something has triggered
> the (simultaneous) removal of four drives from the array.  I'd check the
> dmesg output - it should provide some information.  I'd guess these four
> drives are all attached to the same controller (are they external or
> internal?), so possibly the controller reset (or for external drives, it
> could be a cable issue).
>
> You should be able to force an assembly anyway (using the --force flag)
> but I'd make sure you know exactly what the issue is first, otherwise
> this is likely to happen again.
>
> Cheers,
>     Robin
>   

-- 
Andrew Dunn
http://agdunn.net



* Re: RAID down, dont know why!
  2009-11-08 14:24   ` Andrew Dunn
@ 2009-11-08 15:01     ` Robin Hill
  2009-11-08 22:08       ` Ryan Wagoner
  0 siblings, 1 reply; 12+ messages in thread
From: Robin Hill @ 2009-11-08 15:01 UTC (permalink / raw)
  To: linux-raid list


On Sun Nov 08, 2009 at 09:24:20AM -0500, Andrew Dunn wrote:
> Robin Hill wrote:
> > On Sun Nov 08, 2009 at 09:00:29AM -0500, Andrew Dunn wrote:
> >
> >   
> >> I just copied 4+ TiB of information to this array, restarted 5 times
> >> and tried to access it.... What is going on?
> >>
> >> What kind of logs do you need, I really need help!
> >>
> >>     
> > From the message you've posted, it looks like something has triggered
> > the (simultaneous) removal of four drives from the array.  I'd check the
> > dmesg output - it should provide some information.  I'd guess these four
> > drives are all attached to the same controller (are they external or
> > internal?), so possibly the controller reset (or for external drives, it
> > could be a cable issue).
> >
> What would I be looking for on this? Its a lot to sift through.
> 
> Currently just line-by-lining it.
> 
Look for where the drives are being kicked out of the array (should be
towards the bottom).  Just above that should be some error messages
(often including bus resets).
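One way to narrow dmesg down to the lines described above (the grep pattern is a guess at the usual libata error wording, not output captured from this system):

```shell
# Pull out likely drive/link error lines near the end of the log;
# adjust the sd[i-l] device names to the drives that dropped:
dmesg | grep -iE 'ata[0-9]+|link|reset|i/o error|sd[i-l]' | tail -n 50
```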

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |



* Re: RAID down, dont know why!
  2009-11-08 15:01     ` Robin Hill
@ 2009-11-08 22:08       ` Ryan Wagoner
  2009-11-08 22:15         ` Andrew Dunn
  0 siblings, 1 reply; 12+ messages in thread
From: Ryan Wagoner @ 2009-11-08 22:08 UTC (permalink / raw)
  To: linux-raid list

Is this the box from your blog at http://blog.agdunn.net/?p=391 ? If so, those cards are meant for the Supermicro UIO slot, which is basically just an inverted PCI Express slot. However, since there is only one UIO slot per board, they may not have tested compatibility with multiple cards in the same system.

I have one of these boards installed on an Intel board without issue, and have had 7 drives connected to it in an mdadm RAID for almost 2 years with no dropouts. You might also try a port multiplier, since the card supports it and one drive isn't going to use the full bandwidth of a single SAS cable.

Ryan

On Sun, Nov 8, 2009 at 10:01 AM, Robin Hill <robin@robinhill.me.uk> wrote:
> On Sun Nov 08, 2009 at 09:24:20AM -0500, Andrew Dunn wrote:
>> Robin Hill wrote:
>> > On Sun Nov 08, 2009 at 09:00:29AM -0500, Andrew Dunn wrote:
>> >
>> >
>> >> I just copied 4+ TiB of information to this array, restarted 5 times
>> >> and tried to access it.... What is going on?
>> >>
>> >> What kind of logs do you need, I really need help!
>> >>
>> >>
>> > From the message you've posted, it looks like something has triggered
>> > the (simultaneous) removal of four drives from the array.  I'd check the
>> > dmesg output - it should provide some information.  I'd guess these four
>> > drives are all attached to the same controller (are they external or
>> > internal?), so possibly the controller reset (or for external drives, it
>> > could be a cable issue).
>> >
>> What would I be looking for on this? Its a lot to sift through.
>>
>> Currently just line-by-lining it.
>>
> Look for where the drives are being kicked out of the array (should be
> towards the bottom).  Just above that should be some error messages
> (often including bus resets).
>
> Cheers,
>    Robin
> --
>     ___
>    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
>   / / )      | Little Jim says ....                            |
>  // !!       |      "He fallen in de water !!"                 |
>


* Re: RAID down, dont know why!
  2009-11-08 22:08       ` Ryan Wagoner
@ 2009-11-08 22:15         ` Andrew Dunn
  0 siblings, 0 replies; 12+ messages in thread
From: Andrew Dunn @ 2009-11-08 22:15 UTC (permalink / raw)
  To: Ryan Wagoner; +Cc: linux-raid list

I had this setup running under Ubuntu 9.04 with RAID6, all nine drives, for about a month without any of these issues.

I installed the new Ubuntu fresh and tried to rebuild the array with a larger chunk size.

I also created the file system with the proper stride and stripe-width parameters.

I am wondering whether TLER, the larger chunk size, the ext options, or the OS change might have caused this. I do not think it is a backplane issue... I will, however, re-seat all of the drives in a little bit.
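For context, the stride/stripe-width arithmetic for this particular array works out as below. This is a sketch assuming 4 KiB ext4 blocks alongside the 1024 KiB chunk shown in the mdadm output; RAID6 loses two of the nine drives to parity:

```shell
# stride       = chunk size / filesystem block size
# stripe-width = stride * (number of drives - parity drives)
chunk_kib=1024; block_kib=4; drives=9; parity=2
stride=$((chunk_kib / block_kib))            # 256
stripe_width=$((stride * (drives - parity))) # 1792
echo "mkfs.ext4 -E stride=$stride,stripe-width=$stripe_width /dev/md0"
```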

Ryan Wagoner wrote:
> Is this the box on your blog at http://blog.agdunn.net/?p=391 ? If so
> those cards are to be used in the Supermicro UIO slot, which is
> basically just an inverted PCI Express slot. However since there is
> only one UIO slot per board they might have not tested compatibility
> with multiple in the same system.
>
> I do have one of these boards installed on an Intel board without
> issue. I have had the 7 drives connected in mdadm RAID for almost 2
> years now with no dropouts. You might try a port multiplier since the
> card supports it and one drive isn't going to use the full bandwidth
> of a single SAS cable.
>
> Ryan
>
> On Sun, Nov 8, 2009 at 10:01 AM, Robin Hill <robin@robinhill.me.uk> wrote:
>   
>> On Sun Nov 08, 2009 at 09:24:20AM -0500, Andrew Dunn wrote:
>>     
>>> Robin Hill wrote:
>>>       
>>>> On Sun Nov 08, 2009 at 09:00:29AM -0500, Andrew Dunn wrote:
>>>>
>>>>
>>>>         
>>>>> I just copied 4+ TiB of information to this array, restarted 5 times
>>>>> and tried to access it.... What is going on?
>>>>>
>>>>> What kind of logs do you need, I really need help!
>>>>>
>>>>>
>>>>>           
>>>> From the message you've posted, it looks like something has triggered
>>>> the (simultaneous) removal of four drives from the array.  I'd check the
>>>> dmesg output - it should provide some information.  I'd guess these four
>>>> drives are all attached to the same controller (are they external or
>>>> internal?), so possibly the controller reset (or for external drives, it
>>>> could be a cable issue).
>>>>
>>>>         
>>> What would I be looking for on this? Its a lot to sift through.
>>>
>>> Currently just line-by-lining it.
>>>
>>>       
>> Look for where the drives are being kicked out of the array (should be
>> towards the bottom).  Just above that should be some error messages
>> (often including bus resets).
>>
>> Cheers,
>>    Robin
>> --
>>     ___
>>    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
>>   / / )      | Little Jim says ....                            |
>>  // !!       |      "He fallen in de water !!"                 |
>>
>>     

-- 
Andrew Dunn
http://agdunn.net



* Re: RAID down, dont know why!
       [not found]     ` <4AF82DE4.2040805@scalableinformatics.com>
@ 2009-11-09 21:23       ` Andrew Dunn
  0 siblings, 0 replies; 12+ messages in thread
From: Andrew Dunn @ 2009-11-09 21:23 UTC (permalink / raw)
  To: landman, linux-raid; +Cc: Maurice Hilarius

It's something other than hardware; I am getting random failures now, as in:

Normal (X marks a bay with no drive present; U marks a dropped drive):
A B X X
C D X X
E F X X
G H I X

First loss:

A B X X
C D X X
U U X X
U U I X

Second loss:

U U X X
C D X X
E U X X
G H U X

Just dropping randomly.

I had the system running under Ubuntu 9.04 Server for a month without ever seeing this; now it won't run an hour without the issue. I'm currently trying to find my 9.04 disc so I can just go back, since I need this thing online by Wednesday.

I wish I could have easily gotten more information for everyone; this is a huge problem for people with my setup. I think it's possibly a controller driver issue.




Joe Landman wrote:
> Maurice Hilarius wrote:
>> Joe Landman wrote:
>>> Andrew Dunn wrote:
>>>> I just copied 4+ TiB of information to this array, restarted 5 times
>>>> and tried to access it.... What is going on?
>>>
>>> It looks like you have 4 failed drives. sdl,sdi,sdj,sdk
>>>
>>>
>> Exactly.
>> Looks like one multilane cable is disconnected.
>> Each "feeds" 4 drives.
>> a b c d
>> e f g h
>> i j k l
>
> Yes.  That is what I was thinking.  I asked Andrew to reseat cables,
> and check to make sure that power is going to the block on the
> backplane or mobile storage canister.
>
>
>

-- 
Andrew Dunn
http://agdunn.net



* Re: RAID down, dont know why!
       [not found]     ` <4AF82DAC.4020307@harddata.com>
@ 2009-11-09 22:03       ` Andrew Dunn
  0 siblings, 0 replies; 12+ messages in thread
From: Andrew Dunn @ 2009-11-09 22:03 UTC (permalink / raw)
  To: Maurice Hilarius; +Cc: landman, linux-raid list

/dev/sdm did not drop out, and it is on the same cable as sd[kl].

Maurice Hilarius wrote:
> Andrew Dunn wrote:
>> storrgie@ALEXANDRIA:~$ lsscsi  | grep sd[ijkl]
>> [11:0:0:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdi
>> [11:0:1:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdj
>> [11:0:2:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdk
>> [11:0:3:0]   disk    ATA      WDC WD1001FALS-0 0K05  /dev/sdl
>>   
> Looks like the cable got "jostled" and the 4 drives "dropped out" of
> the RAID, then it re-connected and now they show up as devices again.
>
>
>
> -- 
> With our best regards,
>
> /Maurice W. Hilarius         Telephone: 01-780-456-9771
> Hard Data Ltd.                FAX:          01-780-456-9772
> 11060 - 166 Avenue         email:maurice@harddata.com
> Edmonton, AB, Canada      T5X 1Y3/
>

-- 
Andrew Dunn
http://agdunn.net



end of thread, other threads:[~2009-11-09 22:03 UTC | newest]

Thread overview: 12+ messages
2009-11-08 14:00 RAID down, dont know why! Andrew Dunn
2009-11-08 14:07 ` Joe Landman
2009-11-08 14:08   ` Andrew Dunn
2009-11-08 14:15     ` Joe Landman
2009-11-08 14:21       ` Andrew Dunn
     [not found]     ` <4AF82DAC.4020307@harddata.com>
2009-11-09 22:03       ` Andrew Dunn
     [not found]   ` <4AF82D29.507@harddata.com>
     [not found]     ` <4AF82DE4.2040805@scalableinformatics.com>
2009-11-09 21:23       ` Andrew Dunn
2009-11-08 14:22 ` Robin Hill
2009-11-08 14:24   ` Andrew Dunn
2009-11-08 15:01     ` Robin Hill
2009-11-08 22:08       ` Ryan Wagoner
2009-11-08 22:15         ` Andrew Dunn
