* Re: 15 * 180gb in raid5 gives 299.49 GiB ?
  [not found] ` <Pine.LNX.4.53.0302060123150.6169@ddx.a2000.nu>
@ 2003-02-06  1:13 ` Stephan van Hienen
  [not found]   ` <15937.50001.367258.485512@wombat.chubb.wattle.id.au>
  0 siblings, 1 reply; 7+ messages in thread
From: Stephan van Hienen @ 2003-02-06  1:13 UTC (permalink / raw)
  To: linux-raid, Peter Chubb; +Cc: linux-kernel

argh :
tried to compile with this patch
tried on 2.4.20, 2.4.21-pre1 and 2.4.21-pre4

/usr/src/linux-2.4.21-pre1/arch/i386/lib/lib.a /usr/src/linux-2.4.21-pre1/lib/lib.a /usr/src/linux-2.4.21-pre1/arch/i386/lib/lib.a \
	--end-group \
	-o vmlinux
drivers/scsi/scsidrv.o: In function `ahc_linux_biosparam':
drivers/scsi/scsidrv.o(.text+0xf9c4): undefined reference to `__udivdi3'
drivers/scsi/scsidrv.o(.text+0xfa0c): undefined reference to `__udivdi3'

On Thu, 6 Feb 2003, Stephan van Hienen wrote:

> hmms found out after posting this msg :
>
> http://www.gelato.unsw.edu.au/patches-index.html
>
> │ │ [*] Support for discs bigger than 2TB? │ │
>
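[For background: the `__udivdi3` link errors come from 64-bit division in 32-bit kernel code. Below is a sketch of the situation inside ahc_linux_biosparam(), using the identifiers quoted later in this thread (disk->capacity, heads, sectors, cylinders); it is illustrative only, not a drop-in patch. do_div() from <asm/div64.h> is mentioned as the standard kernel helper for this, as general background rather than as part of the patch under discussion.]

    /* With the 64-bit-capacity patch applied, disk->capacity becomes a
     * 64-bit quantity, so this plain C division makes gcc on i386 emit a
     * call to the libgcc helper __udivdi3, which vmlinux does not link
     * against -- hence the link error above: */
    cylinders = disk->capacity / (heads * sectors);

    /* Two usual ways out: truncate to 32 bits first (the cast fix given
     * in the next message) ... */
    cylinders = (unsigned)disk->capacity / (heads * sectors);

    /* ... or keep the full 64-bit value and divide with the kernel's
     * do_div() macro, which divides a 64-bit value by a 32-bit divisor
     * in place: */
    {
            u64 cap = disk->capacity;
            do_div(cap, heads * sectors);
            cylinders = cap;
    }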
[parent not found: <15937.50001.367258.485512@wombat.chubb.wattle.id.au>]
* Re: 15 * 180gb in raid5 gives 299.49 GiB ?
  [not found] ` <15937.50001.367258.485512@wombat.chubb.wattle.id.au>
@ 2003-02-07 13:58 ` Stephan van Hienen
  [not found]   ` <15945.31516.492846.870265@wombat.chubb.wattle.id.au>
  0 siblings, 1 reply; 7+ messages in thread
From: Stephan van Hienen @ 2003-02-07 13:58 UTC (permalink / raw)
  To: Peter Chubb; +Cc: linux-raid, linux-kernel

On Thu, 6 Feb 2003, Peter Chubb wrote:

> OK, must have missed a change.
>
> In drivers/scsi/aic7xxx_osm.c find the function ahc_linux_biosparam()
> and cast disk->capacity to unsigned int like so:
>
> -	cylinders = disk->capacity / (heads * sectors);
> +	cylinders = (unsigned)disk->capacity / (heads * sectors);

Thnx Peter, this fixes the compile error

now i run 2.4.20 with the patch, and built the raid correctly
only a small thing left (in the raid code?) that needs to be fixed :
(array size is negative)

mdadm version 1.0.1

but maybe it is just mdadm being buggy, since the 'Total Devices : 16'
is also incorrect (seen before on multiple systems)

]# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.00
  Creation Time : Thu Feb  6 14:20:02 2003
     Raid Level : raid5
     Array Size : -1833441152 (2347.49 GiB 2520.65 GB)
    Device Size : 175823296 (167.68 GiB 180.09 GB)
   Raid Devices : 15
  Total Devices : 16
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Feb  7 10:15:15 2003
          State : dirty, no-errors
 Active Devices : 15
Working Devices : 15
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       4       8       81        4      active sync   /dev/sdf1
       5       8       97        5      active sync   /dev/sdg1
       6       8      113        6      active sync   /dev/sdh1
       7       8      129        7      active sync   /dev/sdi1
       8       3        1        8      active sync   /dev/hda1
       9      22        1        9      active sync   /dev/hdc1
      10      33        1       10      active sync   /dev/hde1
      11      56        1       11      active sync   /dev/hdi1
      12      57        1       12      active sync   /dev/hdk1
      13      88        1       13      active sync   /dev/hdm1
      14      89        1       14      active sync   /dev/hdo1
           UUID : 967349d3:ae82ce10:f6d112a5:dccda06b

]# cat /proc/mdstat
Personalities : [raid0] [raid5]
read_ahead 1024 sectors
md0 : active raid5 hdo1[14] hdm1[13] hdk1[12] hdi1[11] hde1[10] hdc1[9] hda1[8] sdi1[7] sdh1[6] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
      2461526144 blocks level 5, 64k chunk, algorithm 2 [15/15] [UUUUUUUUUUUUUUU]

unused devices: <none>
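[For what it's worth, both the negative Array Size and the "299.49 GiB" figure in the original subject line are consistent with 32-bit truncation of the array's block and sector counts. The arithmetic below is a sketch based on the numbers shown above, not something stated in the thread; the standalone program is illustrative only.]

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        /* 15-disk raid5 -> 14 data disks of 175823296 KiB each */
        uint64_t kib = 14ULL * 175823296;        /* 2461526144 KiB, matches /proc/mdstat */

        /* Read as a signed 32-bit number (two's complement), that is
         * exactly the value mdadm printed: */
        printf("%" PRId32 "\n", (int32_t)kib);   /* -1833441152 */

        /* The 512-byte sector count wraps past 2^32, which would explain
         * the 299.49 GiB figure from the original subject line: */
        uint64_t sectors = kib * 2;              /* 4923052288 sectors */
        uint32_t wrapped = (uint32_t)sectors;    /* 628084992 sectors after truncation */
        printf("%.2f GiB\n", wrapped / 2.0 / 1048576.0);  /* 299.49 */
        return 0;
    }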
[parent not found: <15945.31516.492846.870265@wombat.chubb.wattle.id.au>]
* Re: raid5 2TB+ NO GO ?
  [not found] ` <15945.31516.492846.870265@wombat.chubb.wattle.id.au>
@ 2003-02-12 10:39 ` Stephan van Hienen
  2003-02-12 15:13   ` Mike Black
  0 siblings, 1 reply; 7+ messages in thread
From: Stephan van Hienen @ 2003-02-12 10:39 UTC (permalink / raw)
  To: Peter Chubb; +Cc: linux-kernel, linux-raid, bernard, ext2-devel

On Wed, 12 Feb 2003, Peter Chubb wrote:

> >>>>> "Stephan" == Stephan van Hienen <raid@a2000.nu> writes:
>
> Stephan,
> 	Just noticed you're using raid5 --- I don't believe that level
> 5 will work, as its data structures and internal algorithms are
> 32-bit only.  I've done no work on it to make it work (I've been
> waiting for the rewrite in 2.5), and don't have time to do anything now.
>
> You could try making sector in the struct stripe_head a sector_t, but
> I'm pretty sure you'll run into other problems.
>
> I only managed to get raid 0 and linear to work when I was testing.

ok clear, so no raid5 for 2TB+ then :(

looks like i have to remove some hd's then

what will be the limit ?

13*180GB in raid5 ?
or 12*180GB in raid5 ?

Device Size : 175823296 (167.68 GiB 180.09 GB)

13* will give me 1.97TiB but will there be an internal raid5 problem ?
(since it will be 13*180GB to be addressed)
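[For context, the sizes in question, arithmetic only (this says nothing about what the 2.4 raid5 internals can actually handle, which is the open question): raid5 gives one disk's worth of parity, so 13 drives yield 12 * 175823296 KiB = 2109879552 KiB of usable space, roughly 1.97 TiB and just under the 2^31 KiB (2 TiB) mark; 12 drives yield 11 * 175823296 KiB = 1934056256 KiB, roughly 1.80 TiB. In 512-byte sectors those are 4219759104 and 3868112512 respectively, both still below 2^32.]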
* Re: raid5 2TB+ NO GO ?
  2003-02-12 10:39 ` raid5 2TB+ NO GO ? Stephan van Hienen
@ 2003-02-12 15:13 ` Mike Black
  2003-02-14 10:21   ` kernel
  0 siblings, 1 reply; 7+ messages in thread
From: Mike Black @ 2003-02-12 15:13 UTC (permalink / raw)
  To: Stephan van Hienen, Peter Chubb
  Cc: linux-kernel, linux-raid, bernard, ext2-devel

I did a 12x180G and as I recall was unable to do 13x180G as it overflowed during mke2fs.  This was a year ago though so I don't know
if that's been improved since then.

I've got 13 of these with one drive marked as a spare:

Disk /dev/sda: 255 heads, 63 sectors, 22072 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1             1     22072 177293308+  fd  Linux raid autodetect

    Number   Major   Minor   RaidDevice State
       0       8      177        0      active sync   /dev/sdl1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8        1        3      active sync   /dev/sda1
       4       8       49        4      active sync   /dev/sdd1
       5       8       65        5      active sync   /dev/sde1
       6       8       81        6      active sync   /dev/sdf1
       7       8       97        7      active sync   /dev/sdg1
       8       8      113        8      active sync   /dev/sdh1
       9       8      129        9      active sync   /dev/sdi1
      10       8      145       10      active sync   /dev/sdj1
      11       8      161       11      active sync   /dev/sdk1
      12      65       49       12                    /dev/sdt1

----- Original Message -----
From: "Stephan van Hienen" <raid@a2000.nu>
To: "Peter Chubb" <peter@chubb.wattle.id.au>
Cc: <linux-kernel@vger.kernel.org>; <linux-raid@vger.kernel.org>; <bernard@biesterbos.nl>; <ext2-devel@lists.sourceforge.net>
Sent: Wednesday, February 12, 2003 5:39 AM
Subject: Re: raid5 2TB+ NO GO ?

> ok clear, so no raid5 for 2TB+ then :(
>
> looks like i have to remove some hd's then
>
> what will be the limit ?
>
> 13*180GB in raid5 ?
> or 12*180GB in raid5 ?
>
> Device Size : 175823296 (167.68 GiB 180.09 GB)
>
> 13* will give me 1.97TiB but will there be an internal raid5 problem ?
> (since it will be 13*180GB to be addressed)
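[For scale, arithmetic only (the exact mechanism of the mke2fs overflow Mike recalls is not stated in the thread): a 12-drive raid5 built from these 177293308 KiB partitions is 11 * 177293308 KiB = 1950226388 KiB, roughly 1.82 TiB, while a 13-drive one would be 12 * 177293308 KiB = 2127519696 KiB, roughly 1.98 TiB, i.e. within about 1% of the 2 TiB boundary (2^31 KiB, or 2^32 512-byte sectors) where 32-bit counters start to wrap.]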
* Re: raid5 2TB+ NO GO ?
  2003-02-12 15:13 ` Mike Black
@ 2003-02-14 10:21 ` kernel
  2003-02-17 10:24   ` Stephan van Hienen
  0 siblings, 1 reply; 7+ messages in thread
From: kernel @ 2003-02-14 10:21 UTC (permalink / raw)
  To: Mike Black
  Cc: Stephan van Hienen, Peter Chubb, linux-kernel, linux-raid, bernard, ext2-devel

On Wed, 12 Feb 2003, Mike Black wrote:

> I did a 12x180G and as I recall was unable to do 13x180G as it overflowed during mke2fs.  This was a year ago though so I don't know
> if that's been improved since then.
>

does anyone know for sure what is the limit for md raid5 ?

can i use 13*180GB in raid5 ?
or should i go for 12*180GB in raid5 ?
* Re: raid5 2TB+ NO GO ?
  2003-02-14 10:21 ` kernel
@ 2003-02-17 10:24 ` Stephan van Hienen
  2003-02-20 16:17   ` what is the exact raid5 limit (2TB (can i use 12 or 13*180GB?)) Stephan van Hienen
  0 siblings, 1 reply; 7+ messages in thread
From: Stephan van Hienen @ 2003-02-17 10:24 UTC (permalink / raw)
  To: kernel
  Cc: Mike Black, Peter Chubb, linux-kernel, linux-raid, bernard, ext2-devel

On Fri, 14 Feb 2003 kernel@ddx.a2000.nu wrote:

> On Wed, 12 Feb 2003, Mike Black wrote:
>
> > I did a 12x180G and as I recall was unable to do 13x180G as it overflowed during mke2fs.  This was a year ago though so I don't know
> > if that's been improved since then.
> >
>
> does anyone know for sure what is the limit for md raid5 ?
>
> can i use 13*180GB in raid5 ?
> or should i go for 12*180GB in raid5 ?

I really want to create this raid this week, so is there anyone with info
on what the limit will be?
* what is the exact raid5 limit (2TB (can i use 12 or 13*180GB?))
  2003-02-17 10:24 ` Stephan van Hienen
@ 2003-02-20 16:17 ` Stephan van Hienen
  0 siblings, 0 replies; 7+ messages in thread
From: Stephan van Hienen @ 2003-02-20 16:17 UTC (permalink / raw)
  To: kernel
  Cc: Mike Black, Peter Chubb, linux-kernel, linux-raid, bernard, ext2-devel

On Mon, 17 Feb 2003, Stephan van Hienen wrote:

> On Fri, 14 Feb 2003 kernel@ddx.a2000.nu wrote:
>
> > On Wed, 12 Feb 2003, Mike Black wrote:
> >
> > > I did a 12x180G and as I recall was unable to do 13x180G as it overflowed during mke2fs.  This was a year ago though so I don't know
> > > if that's been improved since then.
> > >
> >
> > does anyone know for sure what is the limit for md raid5 ?
> >
> > can i use 13*180GB in raid5 ?
> > or should i go for 12*180GB in raid5 ?

I really want to create this raid this week, so is there anyone with info
on what the limit will be?
end of thread, other threads: [~2003-02-20 16:08 UTC | newest]

Thread overview: 7+ messages
     [not found] <Pine.LNX.4.53.0302060059210.6169@ddx.a2000.nu>
     [not found] ` <Pine.LNX.4.53.0302060123150.6169@ddx.a2000.nu>
2003-02-06  1:13   ` 15 * 180gb in raid5 gives 299.49 GiB ? Stephan van Hienen
     [not found]     ` <15937.50001.367258.485512@wombat.chubb.wattle.id.au>
2003-02-07 13:58       ` Stephan van Hienen
     [not found]         ` <15945.31516.492846.870265@wombat.chubb.wattle.id.au>
2003-02-12 10:39           ` raid5 2TB+ NO GO ? Stephan van Hienen
2003-02-12 15:13             ` Mike Black
2003-02-14 10:21               ` kernel
2003-02-17 10:24                 ` Stephan van Hienen
2003-02-20 16:17                   ` what is the exact raid5 limit (2TB (can i use 12 or 13*180GB?)) Stephan van Hienen