* ICH RAID5 delisted and broken after mdadm installation
From: Пётр Б. @ 2016-01-18 17:10 UTC
  To: linux-raid


Hello guys.

sda, sdb and sdd formed a RAID5 array with a 64K chunk size. The array
was used under Windows, and the drives were always connected to the
ICH9R controller in that order.

The HDD containing Windows (sdb) was failing and Windows crashed
severely, so I booted into Debian (from sde) to back up my data; at
that point the BIOS still listed the RAID5 volume correctly. I then
installed mdadm, and during installation I told it that all drives are
needed for boot (which is not actually true at the moment).

After I installed mdadm, no new partitions appeared in GNOME. After a
reboot, the RAID BIOS output showed sda and sdb as "offline members",
sdc as non-RAID (which is true) and sdd as non-RAID (which is false).
Also, no RAID volumes are detected anymore.

/dev/md/imsm0 is a symlink to /dev/md127, which is empty
/dev/md/imsm1 is a symlink to /dev/md128, which is empty

mdadm reports that no RAID superblock information was found on sdd.
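
For reference, the attached raid_status dump was produced by examining
the Intel (IMSM) metadata on each member disk, roughly as follows
(exact invocation reconstructed from memory, so treat it as approximate):

  # dump the IMSM metadata that the RAID BIOS wrote on each disk
  mdadm --examine /dev/sda
  mdadm --examine /dev/sdb
  mdadm --examine /dev/sdd   # this is the one reporting no superblock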

"mdadm --assemble md0 sda sdb sdd" fails:
>mdadm: cannot open device sda: Device or resource busy
>mdadm: sda has no superblock - assembly aborted

It outputs the same error for whichever HDD I list first: sda, sdb or sdd.
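
My guess is that the disks are "busy" because they are still claimed
by the two empty IMSM containers above (md127/md128). If that is
right, something like this should release them before assembly (just a
guess on my part, not tried yet):

  # stop the stale, empty containers so the member disks are free again
  mdadm --stop /dev/md127
  mdadm --stop /dev/md128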

Linux 2.6.32-5-amd64 #1 SMP Sun Dec 21 18:01:12 UTC 2014 x86_64 GNU/Linux
mdadm - v3.1.4 - 31st August 2010

I won't boot into Windows until I fix this. Please give me some tips.

P.S. What did I do wrong? Apart from using ICH RAID (which, as I have
now learned, does not even imply a dedicated hardware unit and is no
better than software RAID) and keeping Windows on a failing drive, of
course.

[-- Attachment #2: raid_status --]

sda:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.0.00
    Orig Family : 69ee55dc
         Family : 69ee55dc
     Generation : 0000002f
           UUID : 3a35d3bc:76e760fa:18961fb6:79631a32
       Checksum : 3ecaf78e correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : Z4Y725VC
          State : active
             Id : 00020000
    Usable Size : 1953520904 (931.51 GiB 1000.20 GB)

[Volume_0000]:
           UUID : 05630097:5c3ca47d:b9278835:03c27bac
     RAID Level : 0
        Members : 2
          Slots : [UU]
      This Slot : 0
     Array Size : 3907041280 (1863.02 GiB 2000.41 GB)
   Per Dev Size : 1953520904 (931.51 GiB 1000.20 GB)
  Sector Offset : 0
    Num Stripes : 15261880
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk01 Serial : Z4Y82FDE
          State : active
             Id : 00030000
    Usable Size : 1953520904 (931.51 GiB 1000.20 GB)
sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.2.02
    Orig Family : 0cb49cd3
         Family : 0cb7d1ef
     Generation : 00017bba
           UUID : 4c95fbd9:97f70aef:b4d7dc43:2bc8345e
       Checksum : 5cd59d95 correct
    MPB Sectors : 2
          Disks : 3
   RAID Devices : 1

  Disk01 Serial : Z4Y82FDE
          State : active
             Id : 00030000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

[RAID5_64K]:
           UUID : b045475e:53e6e056:60eba77d:b826d370
     RAID Level : 5
        Members : 3
          Slots : [UUU]
      This Slot : 1
     Array Size : 3907035136 (1863.02 GiB 2000.40 GB)
   Per Dev Size : 1953517832 (931.51 GiB 1000.20 GB)
  Sector Offset : 0
    Num Stripes : 15261856
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk00 Serial : Z4Y725VC
          State : active
             Id : 00020000
    Usable Size : 1953518541 (931.51 GiB 1000.20 GB)

  Disk02 Serial : 9QJ81RTJ
          State : active
             Id : 00050000
    Usable Size : 1953518541 (931.51 GiB 1000.20 GB)


* Re: ICH RAID5 delisted and broken after mdadm installation
From: Пётр Б. @ 2016-01-18 17:13 UTC
  To: linux-raid

I found a mistake: sdc contains Windows and is failing, not sdb.


* Re: ICH RAID5 delisted and broken after mdadm installation
From: Пётр Б. @ 2016-01-19 16:13 UTC
  To: linux-raid

Fixed it myself with these tips:

https://communities.intel.com/thread/26252?tstart=0
http://marc.info/?l=linux-raid&m=134308187613187&w=2

Strangely, it then worked for some time and got delisted again
("offline members" in the BIOS). I assembled and started it with
mdadm, and after a reboot it worked again.
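
For the record, the sequence that worked was roughly this
(reconstructed from my shell history, so the exact arguments are
approximate):

  # stop the stale container that is holding the member disks
  mdadm --stop /dev/md127

  # reassemble the IMSM container from the three member disks
  mdadm --assemble /dev/md/imsm0 /dev/sda /dev/sdb /dev/sdd

  # start the RAID5 volume(s) inside the container
  mdadm -I /dev/md/imsm0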

2016-01-18 20:13 GMT+03:00, Пётр Б. <satnatantas@gmail.com>:
> I found a mistake: sdc contains Windows and is failing, not sdb.
>

