linux-raid.vger.kernel.org archive mirror
* mdadm --grow Hard Drive Size Puzzle
@ 2012-09-30  2:19 Aaron Greenspan
  2012-09-30  2:48 ` Chris Murphy
  2012-10-01  2:55 ` NeilBrown
  0 siblings, 2 replies; 8+ messages in thread
From: Aaron Greenspan @ 2012-09-30  2:19 UTC (permalink / raw)
  To: linux-raid

Hi Neil,

I found your personal web site by doing a Google search on mdadm, so I'm not sure if you are the right person to ask this of, or if it's a dumb question in the first place, but here's what I'm running into.

I had a RAID 1 array of two 250GB SATA Western Digital hard drives on CentOS 6 (which comes with mdadm 3.2.3). It was finally time to upgrade their capacity, so I purchased two 2TB SATA Seagate hard drives. I replaced them one at a time: first I removed old drive B (slot 1), copied the partitions from old drive A (slot 0) over to new drive C (slot 1), and then swapped the drives so that I could copy the partitions from new drive C (slot 0) to new drive D (slot 1).
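
(Roughly, each swap looked like the sketch below - the exact commands and
device names are only illustrative; the actual data copy was just md
resyncing onto the new drive:)

  # copy the partition table from the remaining old drive to the new one
  sfdisk -d /dev/sda | sfdisk /dev/sdb
  # add the new drive's partition to each array and let md resync
  mdadm /dev/md3 --add /dev/sdb5    # likewise for md0, md1 and md2
  # wait for the resync to finish (watch /proc/mdstat) before the next swap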

This generally worked fine, with one exception. As I said, the drives are 2TB each. Somehow I'm only being given 1TB to work with. Here's what mdadm reports for the first new drive:

[root@kermit plainsite]# mdadm --examine /dev/sda5
/dev/sda5:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x0
     Array UUID : 15f9c88e:502e7c20:cdceb61e:fa745c07
           Name : localhost.localdomain:3
  Creation Time : Tue Aug  9 13:31:58 2011
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3462374884 (1650.99 GiB 1772.74 GB)
     Array Size : 1731187442 (825.49 GiB 886.37 GB)
  Used Dev Size : 1731187442 (825.49 GiB 886.37 GB)
    Data Offset : 2048 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : bbe2518d:ccd383d3:476b9ab4:54f4f1f7

    Update Time : Sat Sep 29 20:09:12 2012
       Checksum : 725e9e7d - correct
         Events : 16261


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing)

...and the second...

[root@kermit plainsite]# mdadm --examine /dev/sdb5
/dev/sdb5:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x0
     Array UUID : 15f9c88e:502e7c20:cdceb61e:fa745c07
           Name : localhost.localdomain:3
  Creation Time : Tue Aug  9 13:31:58 2011
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3462374884 (1650.99 GiB 1772.74 GB)
     Array Size : 1731187442 (825.49 GiB 886.37 GB)
  Used Dev Size : 1731187442 (825.49 GiB 886.37 GB)
    Data Offset : 2048 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : dd133b6e:55cf2958:4b39275f:4f9d2cf3

    Update Time : Sat Sep 29 20:09:12 2012
       Checksum : 7db54225 - correct
         Events : 16261


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)

The big question is why the "Avail Dev Size" is 1772.74 GB, but the "Array Size" is only 886.37 GB. When I run mdadm --grow /dev/md3 --size=max, I get this:

[root@kermit plainsite]# mdadm --grow /dev/md3 --size=max
mdadm: component size of /dev/md3 unchanged at 865593721K

If I try to force it, I get this:

[root@kermit plainsite]# mdadm --grow /dev/md3 --size=1731187442
mdadm: Cannot set device size for /dev/md3: No space left on device

The other partitions are not taking up the other 1TB; it definitely seems available. Just so you have more data, here's the output of some other commands:

---

[root@kermit plainsite]# more /proc/mdstat 
Personalities : [raid1] 
md2 : active raid1 sdb3[3] sda3[2]
      1020115 blocks super 1.2 [2/2] [UU]
      
md3 : active raid1 sdb5[2] sda5[3]
      865593721 blocks super 1.1 [2/2] [UU]
      
md1 : active raid1 sdb2[2] sda2[3]
      221182844 blocks super 1.1 [2/2] [UU]
      bitmap: 0/2 pages [0KB], 65536KB chunk

md0 : active raid1 sdb1[2] sda1[3]
      102388 blocks super 1.0 [2/2] [UU]
      
unused devices: <none>

---

[root@kermit etc]# hdparm -i /dev/sda

/dev/sda:

 Model=ST2000DM001-9YN164, FwRev=CC46, SerialNo=W2F02P7Y
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=3907029168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4 
 DMA modes:  mdma0 mdma1 mdma2 
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6 
 AdvancedPM=yes: unknown setting WriteCache=enabled
 Drive conforms to: unknown:  ATA/ATAPI-4,5,6,7

 * signifies the current active mode

---

[root@kermit etc]# hdparm -i /dev/sdb

/dev/sdb:

 Model=ST2000DM001-9YN164, FwRev=CC46, SerialNo=W2F02PE2
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=3907029168
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4 
 DMA modes:  mdma0 mdma1 mdma2 
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6 
 AdvancedPM=yes: unknown setting WriteCache=enabled
 Drive conforms to: unknown:  ATA/ATAPI-4,5,6,7

 * signifies the current active mode

---

[root@kermit /]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md3             852011868 730683448  78060472  91% /
tmpfs                  2010380         0   2010380   0% /dev/shm
/dev/md0                 99138     90797      3222  97% /boot
/dev/md1             217711416 104934568 101717708  51% /home

---

[root@kermit etc]# more mdadm.conf 
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=1871a7f9:dcb6f1e0:53fc2afe:edc5ea24
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=96b2d1ae:401c7fd8:da33bdac:4a5c1252
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=6f6eecca:718bd092:ddf96dca:18317b4c
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=15f9c88e:502e7c20:cdceb61e:fa745c07

---

[root@kermit etc]# mdadm --detail /dev/md3
/dev/md3:
        Version : 1.1
  Creation Time : Tue Aug  9 13:31:58 2011
     Raid Level : raid1
     Array Size : 865593721 (825.49 GiB 886.37 GB)
  Used Dev Size : 865593721 (825.49 GiB 886.37 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Sep 29 17:49:37 2012
          State : active 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : localhost.localdomain:3
           UUID : 15f9c88e:502e7c20:cdceb61e:fa745c07
         Events : 15287

    Number   Major   Minor   RaidDevice State
       2       8       21        0      active sync   /dev/sdb5
       3       8        5        1      active sync   /dev/sda5

---

Any thoughts?

Thanks,

Aaron
	
Aaron Greenspan
President & CEO
Think Computer Corporation

telephone +1 415 670 9350
toll free +1 888 815 8599
fax +1 415 373 3959
e-mail aarong@thinkcomputer.com
web http://www.thinkcomputer.com

* Re: mdadm --grow Hard Drive Size Puzzle
  2012-09-30  2:19 mdadm --grow Hard Drive Size Puzzle Aaron Greenspan
@ 2012-09-30  2:48 ` Chris Murphy
  2012-10-01  2:55 ` NeilBrown
  1 sibling, 0 replies; 8+ messages in thread
From: Chris Murphy @ 2012-09-30  2:48 UTC (permalink / raw)
  To: Linux RAID


On Sep 29, 2012, at 8:19 PM, Aaron Greenspan wrote:

> md2 : active raid1 sdb3[3] sda3[2]
>      1020115 blocks super 1.2 [2/2] [UU]
> 
> md3 : active raid1 sdb5[2] sda5[3]
>      865593721 blocks super 1.1 [2/2] [UU]
> 
> md1 : active raid1 sdb2[2] sda2[3]
>      221182844 blocks super 1.1 [2/2] [UU]
>      bitmap: 0/2 pages [0KB], 65536KB chunk
> 
> md0 : active raid1 sdb1[2] sda1[3]
>      102388 blocks super 1.0 [2/2] [UU]

I'm curious why three different versions of md metadata are in use. Did you use dd to copy the data over, or some other method? Were you booted off a LiveCD or some other disk while doing the copying? Can you supply the output of fdisk -l?

Chris Murphy


* Re: mdadm --grow Hard Drive Size Puzzle
  2012-09-30  2:19 mdadm --grow Hard Drive Size Puzzle Aaron Greenspan
  2012-09-30  2:48 ` Chris Murphy
@ 2012-10-01  2:55 ` NeilBrown
  2012-10-01  4:55   ` Aaron Greenspan
                     ` (2 more replies)
  1 sibling, 3 replies; 8+ messages in thread
From: NeilBrown @ 2012-10-01  2:55 UTC (permalink / raw)
  To: Aaron Greenspan; +Cc: linux-raid

On Sat, 29 Sep 2012 19:19:34 -0700 Aaron Greenspan <aarong@thinkcomputer.com>
wrote:

> Hi Neil,
> 
> I found your personal web site by doing a Google search on mdadm, so I'm not sure if you are the right person to ask this of, or if it's a dumb question in the first place, but here's what I'm running into.

Yes, this is the right place to ask.
No, this is not a dumb question.


> 
> I had a RAID 1 array of two 250GB SATA Western Digital hard drives on CentOS 6 (which comes with mdadm 3.2.3). It was finally time to upgrade their capacity, so I purchased two 2TB SATA Seagate hard drives. I replaced them one at a time: first I removed old drive B (slot 1), copied the partitions from old drive A (slot 0) over to new drive C (slot 1), and then swapped the drives so that I could copy the partitions from new drive C (slot 0) to new drive D (slot 1).
> 
> This generally worked fine, with one exception. As I said, the drives are 2TB each. Somehow I'm only being given 1TB to work with. Here's what mdadm reports for the first new drive:

Should work.... what kernel are you running?  There was a bug between 2.6.30
and 2.6.37 which would have this effect.
A quick Google search suggests that CentOS 6 uses 2.6.32, which would be
affected until 2.6.32.27, which contains the fix.

If you reboot (or just stop the array and re-assemble it), md should get
itself sorted out and the --grow will work.
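
In other words, something like this (just a sketch - stopping /dev/md3 will
only work if nothing on it is mounted, so in your case a straight reboot is
probably easier):

  mdadm --stop /dev/md3
  mdadm --assemble /dev/md3 /dev/sda5 /dev/sdb5
  mdadm --grow /dev/md3 --size=max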

The commit that fixes the bug is  c26a44ed1e552aaa1d4ceb7

NeilBrown


* Re: mdadm --grow Hard Drive Size Puzzle
  2012-10-01  2:55 ` NeilBrown
@ 2012-10-01  4:55   ` Aaron Greenspan
  2012-10-01  5:40     ` NeilBrown
  2012-10-01  5:12   ` Aaron Greenspan
  2012-10-26 16:53   ` Emmanuel Noobadmin
  2 siblings, 1 reply; 8+ messages in thread
From: Aaron Greenspan @ 2012-10-01  4:55 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-raid

Neil,

The kernel I'm using is...

[root@kermit plainsite]# uname -a
Linux kermit.thinkcomputer.com 2.6.32-71.el6.i686 #1 SMP Fri Nov 12 04:17:17 GMT 2010 i686 i686 i386 GNU/Linux

However, I have downloaded an update that I can install and hopefully that will fix the issue.

One other question: why do df, /proc/mdstat, and other utilities always list the RAID arrays in random order? I'm always expecting md0 to show up first, followed by md1, md2, etc. but this is rarely actually the way that the arrays are ordered. It makes it kind of confusing when different utilities do things differently.

Thanks for your help,

Aaron
	
Aaron Greenspan
President & CEO
Think Computer Corporation

telephone +1 415 670 9350
toll free +1 888 815 8599
fax +1 415 373 3959
e-mail aarong@thinkcomputer.com
web http://www.thinkcomputer.com

* Re: mdadm --grow Hard Drive Size Puzzle
  2012-10-01  2:55 ` NeilBrown
  2012-10-01  4:55   ` Aaron Greenspan
@ 2012-10-01  5:12   ` Aaron Greenspan
  2012-10-01  5:56     ` Chris Murphy
  2012-10-26 16:53   ` Emmanuel Noobadmin
  2 siblings, 1 reply; 8+ messages in thread
From: Aaron Greenspan @ 2012-10-01  5:12 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-raid

Hi again,

To reply to Chris's question (I'm not on the mailing list and didn't get the e-mail directly):

I copied the drives by using the RAID resync process, not by using dd. The arrays were most likely created over a period of years with different drives, which might explain the discrepancies in metadata versions.

Generally (as I alluded to in my last e-mail) I find the output of these utilities to be incredibly confusing. I'm a pretty technical person and I've worked with Linux for many years, but I honestly have no idea what "super 1.1" really means. Is there a reason everything has to be so concise and opaque? It doesn't seem like we're running out of screen space... In some parts of mdadm, a dot (period) means that part of an array is inactive (versus "A" representing active). Here, I guess it means that it's actually part of a version number for something I didn't realize was versioned. That's my own ignorance, but it's hardly clear, and most users will not start out as experts.

I remember that in the distant past there were two (or three?) different types of drive/array UUIDs depending on which utility you were working with. That's confusing as well.

Also, if you try running mdadm --grow /dev/mdX --size=max on an array that is already at its maximum size, the error message...

mdadm: Cannot set size on array members.
mdadm: Cannot set device size for /dev/md2: Invalid argument

...is totally useless and isn't even correct as far as I can tell. The argument --size=max IS valid, actually. The error doesn't say which array members it's having trouble with or why (some of them? all of them?). It doesn't suggest using mdadm --examine to determine whether there's actually more space available, nor does it simply report the figures that --examine would show anyway. From a usability perspective, it just seems like there's a lot that could be improved.

That all being said, I do appreciate everyone's help.

Aaron

---

[root@kermit dcd]# fdisk -l

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0xa320cea3

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          13      104391   fd  Linux raid autodetect
Partition 1 does not start on physical sector boundary.
/dev/sda2              14       27551   221198985   fd  Linux raid autodetect
Partition 2 does not start on physical sector boundary.
/dev/sda3           27552       27678     1020127+  fd  Linux raid autodetect
Partition 3 does not start on physical sector boundary.
/dev/sda4           27679      243201  1731188497+   5  Extended
Partition 4 does not start on physical sector boundary.
/dev/sda5           27679      243201  1731188466   fd  Linux raid autodetect
Partition 5 does not start on physical sector boundary.

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          13      104391   fd  Linux raid autodetect
Partition 1 does not start on physical sector boundary.
/dev/sdb2              14       27551   221198985   fd  Linux raid autodetect
Partition 2 does not start on physical sector boundary.
/dev/sdb3           27552       27678     1020127+  fd  Linux raid autodetect
Partition 3 does not start on physical sector boundary.
/dev/sdb4           27679      243201  1731188497+   5  Extended
Partition 4 does not start on physical sector boundary.
/dev/sdb5           27679      243201  1731188466   fd  Linux raid autodetect
Partition 5 does not start on physical sector boundary.

Disk /dev/md0: 104 MB, 104845312 bytes
2 heads, 4 sectors/track, 25597 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Alignment offset: 512 bytes
Disk identifier: 0x00000000


Disk /dev/md1: 226.5 GB, 226491232256 bytes
2 heads, 4 sectors/track, 55295711 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Alignment offset: 1536 bytes
Disk identifier: 0x00000000


Disk /dev/md3: 886.4 GB, 886367970304 bytes
2 heads, 4 sectors/track, 216398430 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Alignment offset: 1536 bytes
Disk identifier: 0x00000000


Disk /dev/md2: 1044 MB, 1044597760 bytes
2 heads, 4 sectors/track, 255028 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Alignment offset: 512 bytes
Disk identifier: 0x00000000

* Re: mdadm --grow Hard Drive Size Puzzle
  2012-10-01  4:55   ` Aaron Greenspan
@ 2012-10-01  5:40     ` NeilBrown
  0 siblings, 0 replies; 8+ messages in thread
From: NeilBrown @ 2012-10-01  5:40 UTC (permalink / raw)
  To: Aaron Greenspan; +Cc: linux-raid

On Sun, 30 Sep 2012 21:55:27 -0700 Aaron Greenspan <aarong@thinkcomputer.com>
wrote:

> Neil,
> 
> The kernel I'm using is...
> 
> [root@kermit plainsite]# uname -a
> Linux kermit.thinkcomputer.com 2.6.32-71.el6.i686 #1 SMP Fri Nov 12 04:17:17 GMT 2010 i686 i686 i386 GNU/Linux
> 
> However, I have downloaded an update that I can install and hopefully that will fix the issue.

You don't actually need to install a new kernel - though it certainly won't
hurt.

Just reboot and you will be able to resize the array.
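
After the reboot, something along these lines should be all that's needed
(the resize2fs step assumes an ext3/ext4 filesystem that supports online
growth):

  mdadm --grow /dev/md3 --size=max
  resize2fs /dev/md3    # grow the filesystem to fill the enlarged array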

> 
> One other question: why do df, /proc/mdstat, and other utilities always list the RAID arrays in random order? I'm always expecting md0 to show up first, followed by md1, md2, etc. but this is rarely actually the way that the arrays are ordered. It makes it kind of confusing when different utilities do things differently.

I think the correct word is "arbitrary", not "random".

Just adjust your expectations.  Don't expect any particular order, and then
it won't look wrong.

Things tend to be listed in the order they are created, or the reverse of
that.  Though in some cases it might be ordered by a hash of some value.

If you want something sorted, use "sort" :-)
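
For example (just a sketch; it keeps only the per-array summary lines):

  grep '^md' /proc/mdstat | sort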

NeilBrown



> 
> Thanks for your help,
> 
> Aaron
> 	
> Aaron Greenspan
> President & CEO
> Think Computer Corporation
> 
> telephone +1 415 670 9350
> toll free +1 888 815 8599
> fax +1 415 373 3959
> e-mail aarong@thinkcomputer.com
> web http://www.thinkcomputer.com


* Re: mdadm --grow Hard Drive Size Puzzle
  2012-10-01  5:12   ` Aaron Greenspan
@ 2012-10-01  5:56     ` Chris Murphy
  0 siblings, 0 replies; 8+ messages in thread
From: Chris Murphy @ 2012-10-01  5:56 UTC (permalink / raw)
  To: Aaron Greenspan; +Cc: Linux RAID


On Sep 30, 2012, at 11:12 PM, Aaron Greenspan wrote:
> 
> Sector size (logical/physical): 512 bytes / 4096 bytes
> 
> Partition 1 does not start on physical sector boundary.
> Partition 2 does not start on physical sector boundary.
> Partition 3 does not start on physical sector boundary.
> Partition 4 does not start on physical sector boundary.
> Partition 5 does not start on physical sector boundary.

These are 512e Advanced Format disks, and none of the partitions are aligned to the 4K physical sectors on either disk. For a mostly-read workload the performance hit is probably not a big deal, but the write penalty, with /, /home, and swap all on the same disk, could be significant.

Let's see what other opinions there are on this. But I think you're better off doing things correctly from the start with these new disks: have them be properly aligned and use recent and consistent superblock formats for all md devices.
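
As a rough sketch of what that could look like when building a fresh array on
a spare disk (device and array names here are purely hypothetical):

  parted -s /dev/sdc mklabel gpt
  parted -s -a optimal /dev/sdc mkpart primary 1MiB 100%
  mdadm --create /dev/md4 --level=1 --raid-devices=2 --metadata=1.2 \
        /dev/sdc1 missing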

*shrug* but if the new kernel solves the problem, and you're up and running, and you're happy with the performance, it's hard to complain about that.

> Disk /dev/md0: 104 MB, 104845312 bytes
> Disk /dev/md1: 226.5 GB, 226491232256 bytes
> Disk /dev/md2: 1044 MB, 1044597760 bytes
> Disk /dev/md3: 886.4 GB, 886367970304 bytes

What is md2? It's not listed in df.

And it looks like / and /home aren't on LVM, so pvmove isn't an option, unfortunately. That would have made migrating root easier. Maybe someone else has an idea how to migrate root without LVM or syncing; the only thing I can think of is a dd partition-to-partition copy while booted from a LiveCD or something. Of course this would not update the superblock format.
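
Roughly something like this from the LiveCD, with the arrays stopped (device
names purely illustrative - triple-check if= and of= before running dd, and
note that the old 1.1 superblock gets copied along with the data):

  dd if=/dev/sda5 of=/dev/sdc5 bs=4M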


Chris Murphy


* Re: mdadm --grow Hard Drive Size Puzzle
  2012-10-01  2:55 ` NeilBrown
  2012-10-01  4:55   ` Aaron Greenspan
  2012-10-01  5:12   ` Aaron Greenspan
@ 2012-10-26 16:53   ` Emmanuel Noobadmin
  2 siblings, 0 replies; 8+ messages in thread
From: Emmanuel Noobadmin @ 2012-10-26 16:53 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

On 10/1/12, NeilBrown <neilb@suse.de> wrote:
> Should work.... what kernel are you running?  There was a bug between
> 2.6.30 and 2.6.37 which would have this effect.
> A quick Google search suggests that CentOS 6 uses 2.6.32, which would be
> affected until 2.6.32.27, which contains the fix.
>
> If you reboot (or just stop the array and re-assemble it), md should get
> itself sorted out and the --grow will work.

I found this thread on the list while trying to figure out almost the
exact same problem on CentOS 6.3 where --grow doesn't increase the
array size inside a VM guest after I've lvextended the underlying LV
on the host machine.

You mentioned this bug was fixed in 2.6.32.27. Does this mean my kernel,
version 2.6.32-279.11.1.el6.x86_64, should not be seeing this bug, and that I
should be looking at some other cause? I ask because fdisk -l doesn't seem to
see the new size either.

p.s. Replying to the list appears to include your personal email in the
reply by default; apologies if that is not right.
