* Help - raid not assembling right on boot (was: Resizing a RAID1)
@ 2011-01-27  4:02 Hank Barta
  2011-01-27 11:56 ` Justin Piszcz
  0 siblings, 1 reply; 9+ messages in thread
From: Hank Barta @ 2011-01-27  4:02 UTC (permalink / raw)
  To: linux-raid

I followed the procedure below: essentially, removing one drive from a
RAID1, zeroing its superblock, repartitioning the drive, starting a new
RAID1 in degraded mode, copying over the data, and repeating the process
on the second drive.

Everything seemed to be going well, with the new RAID mounted and the
second drive syncing right along. However, on a subsequent reboot the
RAID did not come up properly: I was no longer able to mount it, and I
noticed that the resync had restarted. I found I could temporarily
resolve this by stopping the RAID1 and reassembling it, explicitly
specifying the partitions (e.g. mdadm --assemble /dev/md2 /dev/sdb2
/dev/sdc2). At that point the resync starts again and I can mount
/dev/md2. The problem crops up again on the next reboot. The information
reported by 'mdadm --detail /dev/md2' changes between "from boot" and
after reassembly. It looks like at boot the entire drives (/dev/sdb,
/dev/sdc) are combined into a RAID1 rather than the desired partitions.
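
Roughly, the workaround is the following sketch (the mount point is from
my plan below):
=============================
mdadm --stop /dev/md2
mdadm --assemble /dev/md2 /dev/sdb2 /dev/sdc2
mount /dev/md2 /mnt/md2
=============================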

I do not know where this is coming from. I tried zeroing the
superblock for both /dev/sdb and /dev/sdc and mdadm reported they did
not look like RAID devices.

Results from 'mdadm --detail /dev/md2' before and after reassembly are:

=============================
root@oak:~# mdadm --detail /dev/md2
/dev/md2:
        Version : 00.90
  Creation Time : Tue Jan 25 10:39:52 2011
     Raid Level : raid1
     Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
  Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jan 26 21:16:04 2011
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 2% complete

           UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
         Events : 0.13376

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       2       8       16        1      spare rebuilding   /dev/sdb
root@oak:~#
root@oak:~# mdadm --detail /dev/md2
/dev/md2:
        Version : 00.90
  Creation Time : Tue Jan 25 10:39:52 2011
     Raid Level : raid1
     Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
  Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jan 26 21:25:40 2011
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 0% complete

           UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
         Events : 0.13382

    Number   Major   Minor   RaidDevice State
       0       8       34        0      active sync   /dev/sdc2
       2       8       18        1      spare rebuilding   /dev/sdb2
=============================

Contents of /etc/mdadm/mdadm.conf are:
=============================
hbarta@oak:~$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
#ARRAY /dev/md2 level=raid1 num-devices=2 UUID=19d72028:63677f91:cd71bfd9:6916a14f
   #spares=2

# This file was auto-generated on Wed, 26 Jan 2011 09:53:42 -0600
# by mkconf $Id$
hbarta@oak:~$
=============================
(I commented out the two lines following "definitions of existing MD
arrays" because I thought they might be the culprit.)

They seem to match:
=============================
hbarta@oak:~$ sudo mdadm --examine --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=954a3be2:f23e1239:cd71bfd9:6916a14f
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=19d72028:63677f91:cd71bfd9:6916a14f
   spares=2
hbarta@oak:~$
=============================
except that they also list a second RAID, which I added after installing mdadm.
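
(I assume the usual Debian way to refresh these definitions, and the
initramfs that assembles arrays at boot, would be something like this
sketch; I have not tried it yet:)
=============================
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
=============================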

I have no idea how to fix this (*) and would appreciate any help with doing so.


(*) All I can think of is to zero both entire drives and start from
the beginning.

On Tue, Jan 25, 2011 at 9:41 AM, Hank Barta <hbarta@gmail.com> wrote:
> My previous experiment with USB flash drives has not gone too far. I
> can install Ubuntu Server 10.04 to a single USB flash drive and boot
> my Eee PC 901 and Thinkpad T500 from it, but I cannot boot the Intel
> D525MW from it. The Intel board will boot install media on USB flash,
> but not a normal install. (This is an aside.) The desire to use an
> alternate boot was to avoid having to fiddle with a two drive RAID1.
> The drives have a single partition consisting of the entire drive
> which is combined into the RAID1.
>
> My desire to get this system up and running is overrunning my desire
> to get the USB flash raid to boot. My strategy is to
>  - remove one drive from the raid,
>  - repartition it to allow for a system installation
>  - create a new RAID1 with that drive and format the new data
> partition (both arrays would be RAID1, each degraded to one drive)
>  - copy data from the existing RAID1 data partition to the new RAID1
> data partition.
>  - stop the old RAID1
>  - repartition the other drive (most recently the old RAID1) to match
> the new RAID1
>  - add the second drive to the new RAID1
>  - watch it rebuild and breathe big sigh of relief.
>
> When convenient I can install Linux to the space I've opened up via
> the above machinations and move this project down the road.
>
> That looks pretty straightforward to me, but I've never let that sort
> of thing prevent me from cobbling things up in the past. (And at this
> moment, I'm making a copy of the RAID1 to an external drive just in
> case.) For anyone interested, I'll share the details of my plan down to
> the command level, in case any of you can spot a problem I have
> overlooked.
>
> A related question is: what are the constraints for partitioning the
> drive to achieve the best performance? I plan to create a 10G partition on
> each drive for the system. Likewise, suggestions for tuning the RAID
> and filesystem configurations would be appreciated. The RAID is used for
> backups of my home LAN as well as for storing pictures and, more
> recently, my video library, so there's a mix of large and small files.
> I'm not obsessed with performance, as most clients are on WiFi, but I
> might as well grab the low-hanging fruit in this regard.
>
> Feel free to comment on any aspects of the details listed below.
>
> many thanks,
> hank
>
> This is what is presently on the drives:
> ========================
> root@oak:~# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> [raid4] [raid10]
> md1 : active raid1 sdc1[0] sda1[1]
>      1953511936 blocks [2/2] [UU]
>
> unused devices: <none>
> root@oak:~# fdisk -l /dev/sda /dev/sdc
>
> Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
>   Device Boot      Start         End      Blocks   Id  System
> /dev/sda1   *           1      243201  1953512001   fd  Linux raid autodetect
>
> Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
>   Device Boot      Start         End      Blocks   Id  System
> /dev/sdc1               1      243201  1953512001   fd  Linux raid autodetect
> root@oak:~#
> ========================
>
> One drive is a Seagate ST32000542AS and the other a Samsung HD204UI.
> The Samsung is one of those with 4K sectors. (I think the Seagate may
> be too.)
>
> Selecting /dev/sdc to migrate first (and following more or less the
> guide on http://mkfblog.blogspot.com/2007/11/resizing-raid1-system-partition.html)
>
> Fail the drive:
>> mdadm --manage /dev/md1 --fail /dev/sdc1
>
> Remove from the array:
>> mdadm --manage /dev/md1 --remove /dev/sdc1
>
> Zero the superblock:
>> mdadm --zero-superblock /dev/sdc1
>
> <Repartition drive with one 10G primary partition at the beginning and
> a second primary partition using the remainder of the drive: /dev/sdc1
> and /dev/sdc2>
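>
> (A sketch of that layout with sfdisk, assuming sector units and a
> 1MiB-aligned start; I may just use fdisk interactively instead:)
>> sfdisk -uS /dev/sdc <<EOF
>> 2048,20971520,fd
>> 20973568,,fd
>> EOF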
>
> Create new RAID:
>> mdadm --create /dev/md2 -n 2 --level=1 /dev/sdc2 missing
>
> Format:
>> mkfs.ext4 /dev/md2
>
> Mount:
>> mount /dev/md2 /mnt/md2
>
> Copy:
>> rsync -av -H -K --partial --partial-dir=.rsync-partial /mnt/md1/ /mnt/md2/
>
> Stop the old RAID:
>> mdadm --stop /dev/md1
>
> Zero the superblock:
>> mdadm --zero-superblock /dev/sda1
>
> Repartition to match the other drive
>
> Add the second drive to the RAID:
>> mdadm --manage /dev/md2 --add /dev/sda2
>
> Watch the resync complete.
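>
> (e.g. with something like:)
>> watch cat /proc/mdstat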
>
> Done! (Except for doing something with the new 10G partition, but
> that's another subject.)
>
> Many thanks for reading this far!
>
> best,
> hank
>
> --
> '03 BMW F650CS - hers
> '98 Dakar K12RS - "BABY K" grew up.
> '93 R100R w/ Velorex 700 (MBD starts...)
> '95 Miata - "OUR LC"
> polish visor: apply squashed bugs, rinse, repeat
> Beautiful Sunny Winfield, Illinois
>



-- 
'03 BMW F650CS - hers
'98 Dakar K12RS - "BABY K" grew up.
'93 R100R w/ Velorex 700 (MBD starts...)
'95 Miata - "OUR LC"
polish visor: apply squashed bugs, rinse, repeat
Beautiful Sunny Winfield, Illinois
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
  2011-01-27  4:02 Help - raid not assembling right on boot (was: Resizing a RAID1) Hank Barta
@ 2011-01-27 11:56 ` Justin Piszcz
  2011-01-27 12:20   ` Hank Barta
  0 siblings, 1 reply; 9+ messages in thread
From: Justin Piszcz @ 2011-01-27 11:56 UTC (permalink / raw)
  To: Hank Barta; +Cc: linux-raid

Hi,

Show 'fdisk -l' on both disks: are the partitions type 0xfd, Linux raid
autodetect? If not, you will have that exact problem.

Justin.

On Wed, 26 Jan 2011, Hank Barta wrote:

> I followed the procedure below: essentially, removing one drive from a
> RAID1, zeroing its superblock, repartitioning the drive, starting a new
> RAID1 in degraded mode, copying over the data, and repeating the process
> on the second drive.
>
> [...]


* Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
  2011-01-27 11:56 ` Justin Piszcz
@ 2011-01-27 12:20   ` Hank Barta
  2011-01-27 12:37     ` Justin Piszcz
  2011-01-27 20:47     ` NeilBrown
  0 siblings, 2 replies; 9+ messages in thread
From: Hank Barta @ 2011-01-27 12:20 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: linux-raid

Thanks for the suggestion:

=============================
hbarta@oak:~$ sudo fdisk -luc /dev/sd[bc]

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    20973567    10485760   fd  Linux raid autodetect
/dev/sdb2        20973568  3907029167  1943027800   fd  Linux raid autodetect

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    20973567    10485760   fd  Linux raid autodetect
/dev/sdc2        20973568  3907029167  1943027800   fd  Linux raid autodetect
hbarta@oak:~$
=============================

Everything seems OK as far as I can see.

thanks,
hank



On Thu, Jan 27, 2011 at 5:56 AM, Justin Piszcz <jpiszcz@lucidpixels.com> wrote:
> Hi,
>
> Show 'fdisk -l' on both disks: are the partitions type 0xfd, Linux raid
> autodetect? If not, you will have that exact problem.
>
> Justin.
>
> On Wed, 26 Jan 2011, Hank Barta wrote:
>> [...]



-- 
'03 BMW F650CS - hers
'98 Dakar K12RS - "BABY K" grew up.
'93 R100R w/ Velorex 700 (MBD starts...)
'95 Miata - "OUR LC"
polish visor: apply squashed bugs, rinse, repeat
Beautiful Sunny Winfield, Illinois
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
  2011-01-27 12:20   ` Hank Barta
@ 2011-01-27 12:37     ` Justin Piszcz
  2011-01-27 13:39       ` Hank Barta
  2011-01-27 20:47     ` NeilBrown
  1 sibling, 1 reply; 9+ messages in thread
From: Justin Piszcz @ 2011-01-27 12:37 UTC (permalink / raw)
  To: Hank Barta; +Cc: linux-raid


On Thu, 27 Jan 2011, Hank Barta wrote:

> Thanks for the suggestion:
>
> =============================
> hbarta@oak:~$ sudo fdisk -luc /dev/sd[bc]
>
> Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
>   Device Boot      Start         End      Blocks   Id  System
> /dev/sdb1            2048    20973567    10485760   fd  Linux raid autodetect
> /dev/sdb2        20973568  3907029167  1943027800   fd  Linux raid autodetect
>
> Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
>   Device Boot      Start         End      Blocks   Id  System
> /dev/sdc1            2048    20973567    10485760   fd  Linux raid autodetect
> /dev/sdc2        20973568  3907029167  1943027800   fd  Linux raid autodetect
> hbarta@oak:~$
> =============================
>
> Everything seems OK as far as I can see.
>
> thanks,
> hank

Hi,

That looks correct. So you boot from /dev/sdb and /dev/sdc? Normally when
I do a RAID1 it is with /dev/sda and /dev/sdb for SATA systems... It looks
good; if you reboot again, does it want to resync again?

Justin.




* Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
  2011-01-27 12:37     ` Justin Piszcz
@ 2011-01-27 13:39       ` Hank Barta
  2011-01-27 15:06         ` Justin Piszcz
  0 siblings, 1 reply; 9+ messages in thread
From: Hank Barta @ 2011-01-27 13:39 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: linux-raid

The system presently boots from /dev/sda:
=============================
hbarta@oak:~$ sudo fdisk -luc /dev/sda

Disk /dev/sda: 200.0 GB, 200049647616 bytes
255 heads, 63 sectors/track, 24321 cylinders, total 390721968 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c071b

  Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    39063551    19530752   83  Linux
/dev/sda2        39065598   390721535   175827969    5  Extended
/dev/sda5        39065600    54687743     7811072   82  Linux swap / Solaris
/dev/sda6        54689792   390721535   168015872   83  Linux
hbarta@oak:~$
=============================

Eventually I plan to migrate the RAID to another system, where it will
boot from what is now /dev/sd[bc].

At present I have the RAID listed in /etc/fstab, so the boot process
stalls when it tries to mount /dev/md2. At that point I can get to a
console and:
- stop a spurious RAID listed in /proc/mdstat. This is named
/dev/md_<something>. (I copied /proc/mdstat to /tmp at that point, but
apparently that was before /tmp gets cleared on boot, so the copy was lost.)
- stop /dev/md2. At this point in the boot process it has not started to resync.
- assemble /dev/md2. This time it does not start a resync.
- mount /dev/md2
- exit the console and complete the boot process.
Roughly, the console session is the sketch below (from memory; the
spurious array shows up as md_d0 in the dmesg output further down):
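=============================
mdadm --stop /dev/md_d0
mdadm --stop /dev/md2
mdadm --assemble /dev/md2 /dev/sdb2 /dev/sdc2
mount /dev/md2
=============================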

In the output below, I have highlighted some lines of particular
interest using "<<<<<<<<<<<<<<<<<<<<<<<<"

From dmesg I find:
=============================
[    1.777908] udev: starting version 151
[    1.782359] md: linear personality registered for level -1
...
[    1.797816] md: multipath personality registered for level -4
...
[    1.814115] md: raid0 personality registered for level 0
...
[    2.706178] md: raid1 personality registered for level 1
...
[    2.730265] md: bind<sdb>
   <<<<<<<<<<<<<<<<<<<<<<<<
[    2.768834] md: bind<sdc>
   <<<<<<<<<<<<<<<<<<<<<<<<
[    2.770005] raid1: raid set md2 active with 2 out of 2 mirrors
[    2.770022] md2: detected capacity change from 0 to 1989660377088
[    2.779491]  md2: p1 p2
[    2.810420] md2: p2 size 3886055600 exceeds device capacity,
limited to end of disk
[    2.871677] raid6: int64x1   2414 MB/s
[    3.041683] raid6: int64x2   3306 MB/s
[    3.211675] raid6: int64x4   2498 MB/s
[    3.381687] raid6: int64x8   2189 MB/s
[    3.551687] raid6: sse2x1    3856 MB/s
[    3.721674] raid6: sse2x2    6233 MB/s
[    3.891676] raid6: sse2x4    7434 MB/s
[    3.891678] raid6: using algorithm sse2x4 (7434 MB/s)
[    3.892539] xor: automatically using best checksumming function: generic_sse
[    3.941685]    generic_sse: 11496.800 MB/sec
[    3.941687] xor: using function: generic_sse (11496.800 MB/sec)
[    3.944793] md: raid6 personality registered for level 6
[    3.944795] md: raid5 personality registered for level 5
[    3.944796] md: raid4 personality registered for level 4
[    3.949094] md: raid10 personality registered for level 10
[    4.034790] EXT4-fs (sda1): mounted filesystem with ordered data mode
...
[   15.313074] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   15.322662] md: bind<md2p1>
      <<<<<<<<<<<<<<<<<<<<<<<<
[   15.347522] [drm] ring test succeeded in 1 usecs
=============================

and finally where boot process halts and I intervene manually:

=============================
[   16.147562] EXT4-fs (sda6): mounted filesystem with ordered data mode
[   16.532107] EXT4-fs (md2p2): bad geometry: block count 485756928
exceeds size of device (483135232 blocks)
[  212.816279] md: md_d0 stopped.
[  212.816289] md: unbind<md2p1>
[  212.861783] md: export_rdev(md2p1)
[  225.764663] md: md2 stopped.
[  225.764669] md: unbind<sdc>
[  225.811751] md: export_rdev(sdc)
[  225.811779] md: unbind<sdb>
[  225.891748] md: export_rdev(sdb)
[  249.653886] md: md2 stopped.
[  249.655627] md: bind<sdb2>
[  249.655788] md: bind<sdc2>
[  249.679172] raid1: raid set md2 active with 2 out of 2 mirrors
[  249.679194] md2: detected capacity change from 0 to 1989660377088
[  249.680142]  md2: unknown partition table
[  270.774369] EXT4-fs (md2): mounted filesystem with ordered data mode
=============================
(no further pattern match in dmesg for 'md:')

The following command seems to find a RAID superblock on /dev/sdb and
/dev/sdc, which would explain why they are assembled at boot:
=============================
root@oak:/var/log# mdadm --examine --scan -vv
mdadm: No md superblock detected on /dev/block/9:2.
/dev/sdc2:
         Magic : a92b4efc
       Version : 00.90.00
          UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
 Creation Time : Tue Jan 25 10:39:52 2011
    Raid Level : raid1
 Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
    Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 2

   Update Time : Thu Jan 27 07:12:16 2011
         State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0
      Checksum : 6b4365e0 - correct
        Events : 13448


     Number   Major   Minor   RaidDevice State
this     0       8       34        0      active sync   /dev/sdc2

  0     0       8       34        0      active sync   /dev/sdc2
  1     1       8       18        1      active sync   /dev/sdb2
/dev/sdc1:
         Magic : a92b4efc
       Version : 00.90.00
          UUID : 954a3be2:f23e1239:cd71bfd9:6916a14f (local to host oak)
 Creation Time : Wed Jan 26 20:20:06 2011
    Raid Level : raid1
 Used Dev Size : 10485696 (10.00 GiB 10.74 GB)
    Array Size : 10485696 (10.00 GiB 10.74 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 0

   Update Time : Wed Jan 26 21:16:05 2011
         State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0
      Checksum : 25dccb8 - correct
        Events : 3


     Number   Major   Minor   RaidDevice State
this     1       8       33        1      active sync   /dev/sdc1

  0     0       8       17        0      active sync   /dev/sdb1
  1     1       8       33        1      active sync   /dev/sdc1
/dev/sdc:
              <<<<<<<<<<<<<<<<<<<<<<<<
         Magic : a92b4efc
       Version : 00.90.00
          UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
 Creation Time : Tue Jan 25 10:39:52 2011
    Raid Level : raid1
 Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
    Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 2

   Update Time : Thu Jan 27 07:12:16 2011
         State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0
      Checksum : 6b4365e0 - correct
        Events : 13448


     Number   Major   Minor   RaidDevice State
this     0       8       34        0      active sync   /dev/sdc2

  0     0       8       34        0      active sync   /dev/sdc2
  1     1       8       18        1      active sync   /dev/sdb2
/dev/sdb2:
         Magic : a92b4efc
       Version : 00.90.00
          UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
 Creation Time : Tue Jan 25 10:39:52 2011
    Raid Level : raid1
 Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
    Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 2

   Update Time : Thu Jan 27 07:12:16 2011
         State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0
      Checksum : 6b4365d2 - correct
        Events : 13448


     Number   Major   Minor   RaidDevice State
this     1       8       18        1      active sync   /dev/sdb2

  0     0       8       34        0      active sync   /dev/sdc2
  1     1       8       18        1      active sync   /dev/sdb2
/dev/sdb1:
         Magic : a92b4efc
       Version : 00.90.00
          UUID : 954a3be2:f23e1239:cd71bfd9:6916a14f (local to host oak)
 Creation Time : Wed Jan 26 20:20:06 2011
    Raid Level : raid1
 Used Dev Size : 10485696 (10.00 GiB 10.74 GB)
    Array Size : 10485696 (10.00 GiB 10.74 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 0

   Update Time : Wed Jan 26 21:16:05 2011
         State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0
      Checksum : 25dccb8 - correct
        Events : 3


     Number   Major   Minor   RaidDevice State
this     1       8       33        1      active sync   /dev/sdc1

  0     0       8       17        0      active sync   /dev/sdb1
  1     1       8       33        1      active sync   /dev/sdc1
/dev/sdb:
             <<<<<<<<<<<<<<<<<<<<<<<<
         Magic : a92b4efc
       Version : 00.90.00
          UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
 Creation Time : Tue Jan 25 10:39:52 2011
    Raid Level : raid1
 Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
    Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 2

   Update Time : Thu Jan 27 07:12:16 2011
         State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0
      Checksum : 6b4365d2 - correct
        Events : 13448


     Number   Major   Minor   RaidDevice State
this     1       8       18        1      active sync   /dev/sdb2

  0     0       8       34        0      active sync   /dev/sdc2
  1     1       8       18        1      active sync   /dev/sdb2
mdadm: No md superblock detected on /dev/sda6.
mdadm: No md superblock detected on /dev/sda5.
mdadm: No md superblock detected on /dev/sda2.
mdadm: No md superblock detected on /dev/sda1.
mdadm: No md superblock detected on /dev/sda.
root@oak:/var/log#
=============================

If I try to zero the superblock that seems to be in error, I get:
=============================
root@oak:/var/log# mdadm --zero-superblock /dev/sdb
mdadm: Couldn't open /dev/sdb for write - not zeroing
root@oak:/var/log#
=============================

thanks again,
hank


On Thu, Jan 27, 2011 at 6:37 AM, Justin Piszcz <jpiszcz@lucidpixels.com> wrote:
>
> On Thu, 27 Jan 2011, Hank Barta wrote:
>> [...]
>
> Hi,
>
> That looks correct. So you boot from /dev/sdb and /dev/sdc? Normally when
> I do a RAID1 it is with /dev/sda and /dev/sdb for SATA systems... It looks
> good; if you reboot again, does it want to resync again?
>
> Justin.
>
>
>



--
'03 BMW F650CS - hers
'98 Dakar K12RS - "BABY K" grew up.
'93 R100R w/ Velorex 700 (MBD starts...)
'95 Miata - "OUR LC"
polish visor: apply squashed bugs, rinse, repeat
Beautiful Sunny Winfield, Illinois
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
  2011-01-27 13:39       ` Hank Barta
@ 2011-01-27 15:06         ` Justin Piszcz
  0 siblings, 0 replies; 9+ messages in thread
From: Justin Piszcz @ 2011-01-27 15:06 UTC (permalink / raw)
  To: Hank Barta; +Cc: linux-raid



On Thu, 27 Jan 2011, Hank Barta wrote:

Hi,

You may just want to dd and start over; or:


> If I try to zero the superblock that seems to be in error, I get:
> =============================
> root@oak:/var/log# mdadm --zero-superblock /dev/sdb
> mdadm: Couldn't open /dev/sdb for write - not zeroing
> root@oak:/var/log#
> =============================

Have you tried using the partition itself, /dev/sdb1?

Also, any reason for making partitions on an MD raid device?
> [   16.532107] EXT4-fs (md2p2): bad geometry: block count 485756928
> exceeds size of device (483135232 blocks)

This is generally not a good idea.

It sounds like you want to make a raid-1 with two disks and pop it into a 
new system?

The way I typically do this: insert both drives into the new system, boot
off a system rescue CD and create the raid there, then boot off the CD
again with root=/dev/md2 and run LILO; make sure to use 0.90
superblocks.

See below:

USE --assume-clean FOR LARGE FILESYSTEMS so you can reboot directly after
creating the array and restoring the system. (You will want to run an
echo repair > /sys/..sync_action afterwards, though.)

root@Knoppix:/t/etc# mdadm --create -e 0.90 --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1 
mdadm: size set to 8393856K
mdadm: array /dev/md0 started.
root@Knoppix:/t/etc# cat /proc/mdstat 
Personalities : [raid1] 
md0 : active raid1 sdb1[1] sda1[0]
       8393856 blocks [2/2] [UU]
       [>....................]  resync =  1.4% (120512/8393856) finish=3.4min speed=40170K/sec

root@Knoppix:/t/etc# mdadm --create -e 0.90 --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm: size set to 136448K
mdadm: array /dev/md1 started.
root@Knoppix:/t/etc#

root@Knoppix:/t/etc# mdadm --create -e 0.90 --verbose /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm: size set to 382178240K
mdadm: array /dev/md2 started.
root@Knoppix:/t/etc#

root@Knoppix:/t/etc# cat /proc/mdstat 
Personalities : [raid1] 
md2 : active raid1 sdb3[1] sda3[0]
       382178240 blocks [2/2] [UU]
         resync=DELAYED

md1 : active raid1 sdb2[1] sda2[0]
       136448 blocks [2/2] [UU]
         resync=DELAYED

md0 : active raid1 sdb1[1] sda1[0]
       8393856 blocks [2/2] [UU]
       [==========>..........]  resync = 51.0% (4283072/8393856) finish=1.0min speed=62280K/sec

unused devices: <none>
root@Knoppix:/t/etc#

After this, you'll need to set the partition types to 0xfd and make sure
the boot partition is bootable; your LILO config should look something
like this:

boot=/dev/md1
root=/dev/md2
map=/boot/map
prompt
delay=100
timeout=100
lba32
vga=normal
append=""
raid-extra-boot="/dev/sda,/dev/sdb" # make boot blocks on both drives
default=2.6.37-3

image=/boot/2.6.37-3
   label=2.6.37-3
   read-only
   root=/dev/md2


Justin.


* Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
  2011-01-27 12:20   ` Hank Barta
  2011-01-27 12:37     ` Justin Piszcz
@ 2011-01-27 20:47     ` NeilBrown
       [not found]       ` <AANLkTinMhbozd3_28TRszxbqDuGyyvr7PcijFWWZEJEP@mail.gmail.com>
  2011-01-28  2:50       ` Hank Barta
  1 sibling, 2 replies; 9+ messages in thread
From: NeilBrown @ 2011-01-27 20:47 UTC (permalink / raw)
  To: Hank Barta; +Cc: Justin Piszcz, linux-raid

On Thu, 27 Jan 2011 06:20:39 -0600 Hank Barta <hbarta@gmail.com> wrote:

> Thanks for the suggestion:
> 
> =============================
> hbarta@oak:~$ sudo fdisk -luc /dev/sd[bc]
> 
> Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdb1            2048    20973567    10485760   fd  Linux raid autodetect
> /dev/sdb2        20973568  3907029167  1943027800   fd  Linux raid autodetect

These start numbers are multiples of 64K.

With 0.90 metadata, md thinks that the metadata for a partition that
starts at a multiple of 64K and ends at the end of the device looks just
like metadata for the whole device (the 0.90 superblock is stored in the
last 64K-aligned 64K block, so the two locations coincide).

If you use 1.0 (or 1.1 or 1.2) metadata, this problem will disappear.
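
That is, when re-creating the array, something along the lines of the
earlier create command, but with an explicit metadata version (a sketch):

  mdadm --create /dev/md2 --metadata=1.0 -n 2 --level=1 /dev/sdc2 missing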

NeilBrown


> 
> Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdc1            2048    20973567    10485760   fd  Linux raid autodetect
> /dev/sdc2        20973568  3907029167  1943027800   fd  Linux raid autodetect
> hbarta@oak:~$
> =============================
> 
> Everything seems OK as far as I can see.
> 
> thanks,
> hank
> 
> 
> 
> On Thu, Jan 27, 2011 at 5:56 AM, Justin Piszcz <jpiszcz@lucidpixels.com> wrote:
> > Hi,
> >
> > Show fdisk -l on both disks, are the partitions type 0xfd Linux raid Auto
> > Detect?  If not, you will have that exact problem.
> >
> > Justin.
> >
> > On Wed, 26 Jan 2011, Hank Barta wrote:
> >
> >> I followed the procedure below. Essentially removing one drive from a
> >> RAID1, zeroing the superblock, repartitioning the drive, starting a
> >> new RAID1 in degraded mode, copying over the data and repeating the
> >> process on the second drive.
> >>
> >> Everything seemed to be going well with the new RAID mounted and the
> >> second drive syncing right along. However on a subsequent reboot the
> >> RAID did not seem to come up properly. I was no longer able to mount
> >> it. I also noticed that the resync had restarted. I found I could
> >> temporarily resolve this by stopping the RAID1 and reassembling it and
> >> specifying the partitions. (e.g. mdadm ---assemble /dev/md2 /dev/sdb2
> >> /dev/sdc2) At this point, resync starts again and I can mount
> >> /dev/md2. The problem crops up again on the next reboot. Information
> >> revealed by 'mdadm --detail /dev/md2' changes between "from boot" and
> >> following reassembly. It looks like at boot the entire drives
> >> (/dev/sdb, /dev/sdc) are combined into a RAID1 rather than the desired
> >> partitions.
> >>
> >> I do not know where this is coming from. I tried zeroing the
> >> superblock for both /dev/sdb and /dev/sdc and mdadm reported they did
> >> not look like RAID devices.
> >>
> >> Results from 'mdadm --detail /dev/md2' before and after is:
> >>
> >> =============================
> >> root@oak:~# mdadm --detail /dev/md2
> >> /dev/md2:
> >>       Version : 00.90
> >>  Creation Time : Tue Jan 25 10:39:52 2011
> >>    Raid Level : raid1
> >>    Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
> >>  Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
> >>  Raid Devices : 2
> >>  Total Devices : 2
> >> Preferred Minor : 2
> >>   Persistence : Superblock is persistent
> >>
> >>   Update Time : Wed Jan 26 21:16:04 2011
> >>         State : clean, degraded, recovering
> >> Active Devices : 1
> >> Working Devices : 2
> >> Failed Devices : 0
> >>  Spare Devices : 1
> >>
> >> Rebuild Status : 2% complete
> >>
> >>          UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
> >>        Events : 0.13376
> >>
> >>   Number   Major   Minor   RaidDevice State
> >>      0       8       32        0      active sync   /dev/sdc
> >>      2       8       16        1      spare rebuilding   /dev/sdb
> >> root@oak:~#
> >> root@oak:~# mdadm --detail /dev/md2
> >> /dev/md2:
> >>       Version : 00.90
> >>  Creation Time : Tue Jan 25 10:39:52 2011
> >>    Raid Level : raid1
> >>    Array Size : 1943027712 (1853.02 GiB 1989.66 GB)
> >>  Used Dev Size : 1943027712 (1853.02 GiB 1989.66 GB)
> >>  Raid Devices : 2
> >>  Total Devices : 2
> >> Preferred Minor : 2
> >>   Persistence : Superblock is persistent
> >>
> >>   Update Time : Wed Jan 26 21:25:40 2011
> >>         State : clean, degraded, recovering
> >> Active Devices : 1
> >> Working Devices : 2
> >> Failed Devices : 0
> >>  Spare Devices : 1
> >>
> >> Rebuild Status : 0% complete
> >>
> >>          UUID : 19d72028:63677f91:cd71bfd9:6916a14f (local to host oak)
> >>        Events : 0.13382
> >>
> >>   Number   Major   Minor   RaidDevice State
> >>      0       8       34        0      active sync   /dev/sdc2
> >>      2       8       18        1      spare rebuilding   /dev/sdb2
> >> =============================
> >>
> >> Contents of /etc/mdadm/mdadm.conf are:
> >> =============================
> >> hbarta@oak:~$ cat /etc/mdadm/mdadm.conf
> >> # mdadm.conf
> >> #
> >> # Please refer to mdadm.conf(5) for information about this file.
> >> #
> >>
> >> # by default, scan all partitions (/proc/partitions) for MD superblocks.
> >> # alternatively, specify devices to scan, using wildcards if desired.
> >> DEVICE partitions
> >>
> >> # auto-create devices with Debian standard permissions
> >> CREATE owner=root group=disk mode=0660 auto=yes
> >>
> >> # automatically tag new arrays as belonging to the local system
> >> HOMEHOST <system>
> >>
> >> # instruct the monitoring daemon where to send mail alerts
> >> MAILADDR root
> >>
> >> # definitions of existing MD arrays
> >> #ARRAY /dev/md2 level=raid1 num-devices=2 UUID=19d72028:63677f91:cd71bfd9:6916a14f
> >>  #spares=2
> >>
> >> # This file was auto-generated on Wed, 26 Jan 2011 09:53:42 -0600
> >> # by mkconf $Id$
> >> hbarta@oak:~$
> >> =============================
> >> (I commented out the two lines following "definitions of existing MD
> >> arrays" because I thought they might be the culprit.)
> >>
> >> They seem to match:
> >> =============================
> >> hbarta@oak:~$ sudo mdadm --examine --scan
> >> ARRAY /dev/md0 level=raid1 num-devices=2 UUID=954a3be2:f23e1239:cd71bfd9:6916a14f
> >> ARRAY /dev/md2 level=raid1 num-devices=2 UUID=19d72028:63677f91:cd71bfd9:6916a14f
> >>  spares=2
> >> hbarta@oak:~$
> >> =============================
> >> except for the second RAID, which I added after installing mdadm.
> >>
> >> I have no idea how to fix this (*) and would appreciate any help with
> >> doing so.
> >>
> >>
> >> (*) All I can think of is to zero both entire drives and start from
> >> the beginning.
> >>
> >> On Tue, Jan 25, 2011 at 9:41 AM, Hank Barta <hbarta@gmail.com> wrote:
> >>>
> >>> My previous experiment with USB flash drives has not gone too far. I
> >>> can install Ubuntu Server 10.04 to a single USB flash drive and boot
> >>> my Eee PC 901 and Thinkpad T500 from it, but I cannot boot the Intel
> >>> D525MW from it. The Intel board will boot install media on USB flash,
> >>> but not a normal install. (This is an aside.) The desire to use an
> >>> alternate boot was to avoid having to fiddle with a two drive RAID1.
> >>> The drives have a single partition consisting of the entire drive
> >>> which is combined into the RAID1.
> >>>
> >>> My desire to get this system up and running is overrunning my desire
> >>> to get the USB flash raid to boot. My strategy is to
> >>>  - remove one drive from the raid,
> >>>  - repartition it to allow for a system installation
> >>>  - create a new RAID1 with that drive and format the new data
> >>> partition. (both arrays would be RAID1, each degraded to one drive)
> >>>  - copy data from the existing RAID1 data partition to the new RAID1
> >>> data partition.
> >>>  - stop the old RAID1
> >>>  - repartition the other drive (most recently the old RAID1) to match
> >>> the new RAID1
> >>>  - add the second drive to the new RAID1
> >>>  - watch it rebuild and breathe big sigh of relief.
> >>>
> >>> When convenient I can install Linux to the space I've opened up via
> >>> the above machinations and move this project down the road.
> >>>
> >>> That looks pretty straightforward to me, but I've never let that sort
> >>> of thing prevent me from cobbling things up in the past. (And at this
> >>> moment, I'm making a copy of the RAID1 to an external drive just in
> >>> case.) For anyone interested, I'll share the details of my plan down
> >>> to the command level, in case any of you can spot a problem I have
> >>> overlooked.
> >>>
> >>> A related question: what are the constraints for partitioning the
> >>> drive to achieve the best performance? I plan to create a 10G partition on
> >>> each drive for the system. Likewise, suggestions for tuning the RAID
> >>> and filesystem configurations would be appreciated. Usage for the RAID
> >>> is backup for my home LAN as well as storing pictures and more
> >>> recently my video library so there's a mix of large and small files.
> >>> I'm not obsessed with performance as most clients are on WiFi, but I
> >>> might as well grab the low hanging fruit in this regard.
> >>>
> >>> Feel free to comment on any aspects of the details listed below.
> >>>
> >>> many thanks,
> >>> hank
> >>>
> >>> This is what is presently on the drives:
> >>> ========================
> >>> root@oak:~# cat /proc/mdstat
> >>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
> >>> [raid4] [raid10]
> >>> md1 : active raid1 sdc1[0] sda1[1]
> >>>      1953511936 blocks [2/2] [UU]
> >>>
> >>> unused devices: <none>
> >>> root@oak:~# fdisk -l /dev/sda /dev/sdc
> >>>
> >>> Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
> >>> 255 heads, 63 sectors/track, 243201 cylinders
> >>> Units = cylinders of 16065 * 512 = 8225280 bytes
> >>> Sector size (logical/physical): 512 bytes / 512 bytes
> >>> I/O size (minimum/optimal): 512 bytes / 512 bytes
> >>> Disk identifier: 0x00000000
> >>>
> >>>   Device Boot      Start         End      Blocks   Id  System
> >>> /dev/sda1   *           1      243201  1953512001   fd  Linux raid autodetect
> >>>
> >>> Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
> >>> 255 heads, 63 sectors/track, 243201 cylinders
> >>> Units = cylinders of 16065 * 512 = 8225280 bytes
> >>> Sector size (logical/physical): 512 bytes / 512 bytes
> >>> I/O size (minimum/optimal): 512 bytes / 512 bytes
> >>> Disk identifier: 0x00000000
> >>>
> >>>   Device Boot      Start         End      Blocks   Id  System
> >>> /dev/sdc1               1      243201  1953512001   fd  Linux raid autodetect
> >>> root@oak:~#
> >>> ========================
> >>>
> >>> One drive is a Seagate ST32000542AS and the other a Samsung HD204UI.
> >>> The Samsung is one of those with 4K sectors. (I think the Seagate may
> >>> be too.)
> >>>
> >>> Selecting /dev/sdc to migrate first (and following more or less the
> >>> guide on
> >>> http://mkfblog.blogspot.com/2007/11/resizing-raid1-system-partition.html)
> >>>
> >>> Fail the drive:
> >>>>
> >>>> mdadm --manage /dev/md1 --fail /dev/sdc1
> >>>
> >>> Remove from the array:
> >>>>
> >>>> mdadm --manage /dev/md1 --remove /dev/sdc1
> >>>
> >>> Zero the superblock:
> >>>>
> >>>> mdadm --zero-superblock /dev/sdc1
> >>>
> >>> <Repartition drive with one 10G primary partition at the beginning and
> >>> a second primary partition using the remainder of the drive: /dev/sdc1
> >>> and /dev/sdc2>
> >>>
> >>> Create new RAID:
> >>>>
> >>>> mdadm --create /dev/md2 -n 2 --level=1 /dev/sdc2 missing
> >>>
> >>> Format:
> >>>>
> >>>> mkfs.ext4 /dev/md2
> >>>
> >>> Mount:
> >>>>
> >>>> mount /dev/md2 /mnt/md2
> >>>
> >>> Copy:
> >>>>
> >>>> rsync -av -H -K --partial --partial-dir=.rsync-partial /mnt/md1/ /mnt/md2/
> >>>
> >>> Stop the old RAID:
> >>>>
> >>>> mdadm --stop /dev/md1
> >>>
> >>> Zero the superblock:
> >>>>
> >>>> mdadm --zero-superblock /dev/sda1
> >>>
> >>> Repartition to match the other drive
> >>>
> >>> Add the second drive to the RAID:
> >>>>
> >>>> mdadm --manage /dev/md2 --add /dev/sda2
> >>>
> >>> Watch the resync complete.
> >>>
> >>> Done! (Except for doing something with the new 10G partition, but
> >>> that's another subject.)
> >>>
> >>> Many thanks for reading this far!
> >>>
> >>> best,
> >>> hank
> >>>
> >>> --
> >>> '03 BMW F650CS - hers
> >>> '98 Dakar K12RS - "BABY K" grew up.
> >>> '93 R100R w/ Velorex 700 (MBD starts...)
> >>> '95 Miata - "OUR LC"
> >>> polish visor: apply squashed bugs, rinse, repeat
> >>> Beautiful Sunny Winfield, Illinois
> >>>
> >>
> >>
> >>
> >> --
> >> '03 BMW F650CS - hers
> >> '98 Dakar K12RS - "BABY K" grew up.
> >> '93 R100R w/ Velorex 700 (MBD starts...)
> >> '95 Miata - "OUR LC"
> >> polish visor: apply squashed bugs, rinse, repeat
> >> Beautiful Sunny Winfield, Illinois
> >
> 
> 
> 


* Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
       [not found]       ` <AANLkTinMhbozd3_28TRszxbqDuGyyvr7PcijFWWZEJEP@mail.gmail.com>
@ 2011-01-27 21:14         ` Jérôme Poulin
  0 siblings, 0 replies; 9+ messages in thread
From: Jérôme Poulin @ 2011-01-27 21:14 UTC (permalink / raw)
  To: linux-raid

Sorry if it is a double post, I forgot to switch to plain text.

On Thu, Jan 27, 2011 at 3:47 PM, NeilBrown <neilb@suse.de> wrote:
>
> These start numbers are multiples of 64K.
>
> With 0.90 metadata, the superblock of a partition that starts at a multiple
> of 64K and ends at the end of the device looks exactly like a superblock for
> the whole device, so md cannot tell the two apart.
>

I have a similar problem with GRUB2; shouldn't md check for partitions
first, then whole disks?
I've got the same problem at home with my RAID5 on GPT partitions:
GRUB sees the whole disk as a RAID member, even though I have 3
partitions on each drive. Because of mdadm.conf it is OK here, but I
guess type 0xFD on a standard MBR would fail the same way.
I discussed checking partitions first and then the whole disk with the
GRUB team, and was referred to this list to ask whether that is how we
should proceed.

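For mdadm-driven assembly (as opposed to the kernel's 0xfd autodetect), one
workaround is to narrow the DEVICE line in mdadm.conf so that only partitions
are ever scanned. This is a sketch, not something proposed in the thread, and
the glob assumes plain /dev/sdXN device naming:

=============================
# scan partitions only, never whole disks
DEVICE /dev/sd[a-z][0-9]*
=============================

Arrays built on whole disks would of course stop assembling under such a
rule, so it only suits setups where every member is a partition.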
>
> If you use 1.0 (or 1.1 or 1.2) metadata this problem will disappear.
>
> NeilBrown
>


* Re: Help - raid not assembling right on boot (was: Resizing a RAID1)
  2011-01-27 20:47     ` NeilBrown
       [not found]       ` <AANLkTinMhbozd3_28TRszxbqDuGyyvr7PcijFWWZEJEP@mail.gmail.com>
@ 2011-01-28  2:50       ` Hank Barta
  1 sibling, 0 replies; 9+ messages in thread
From: Hank Barta @ 2011-01-28  2:50 UTC (permalink / raw)
  To: NeilBrown; +Cc: Justin Piszcz, linux-raid

On Thu, Jan 27, 2011 at 2:47 PM, NeilBrown <neilb@suse.de> wrote:
>>
>>    Device Boot      Start         End      Blocks   Id  System
>> /dev/sdb1            2048    20973567    10485760   fd  Linux raid autodetect
>> /dev/sdb2        20973568  3907029167  1943027800   fd  Linux raid autodetect
>
> These start numbers are multiples of 64K.
>
> With 0.90 metadata, the superblock of a partition that starts at a multiple
> of 64K and ends at the end of the device looks exactly like a superblock for
> the whole device, so md cannot tell the two apart.
>
> If you use 1.0 (or 1.1 or 1.2) metadata this problem will disappear.

Many thanks for the tip.

============
       1, 1.0, 1.1, 1.2
              Use the new version-1 format superblock.  This has few
              restrictions.  The different sub-versions store the
              superblock at different locations on the device, either
              at the end (for 1.0), at the start (for 1.1) or 4K from
              the start (for 1.2).
============

I went with 1.1 and that seems to work w/out this problem.

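For anyone retracing the steps, only the create command from the earlier plan
needs to change. A sketch, reusing the device names from this thread:

=============================
# create the degraded RAID1 with a version-1.1 superblock (stored at the
# start of the member, so it cannot be mistaken for a whole-disk array)
mdadm --create /dev/md2 --metadata=1.1 --level=1 --raid-devices=2 /dev/sdc2 missing

# confirm which superblock version the member now carries
mdadm --examine /dev/sdc2 | grep Version
=============================

(The trade-off: with the superblock at the front, the member no longer begins
with the filesystem itself, which matters if anything, such as an old
bootloader, expects to read it directly.)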
thanks,
hank

-- 
'03 BMW F650CS - hers
'98 Dakar K12RS - "BABY K" grew up.
'93 R100R w/ Velorex 700 (MBD starts...)
'95 Miata - "OUR LC"
polish visor: apply squashed bugs, rinse, repeat
Beautiful Sunny Winfield, Illinois
