linux-kernel.vger.kernel.org archive mirror
* Re: 64-bit block sizes on 32-bit systems
@ 2001-03-27 19:57 Jesse Pollard
  2001-03-27 20:20 ` Jan Harkes
  0 siblings, 1 reply; 34+ messages in thread
From: Jesse Pollard @ 2001-03-27 19:57 UTC (permalink / raw)
  To: jaharkes, LA Walsh; +Cc: linux-kernel, linux-fsdevel

---------  Received message begins Here  ---------

> 
> On Tue, Mar 27, 2001 at 09:15:08AM -0800, LA Walsh wrote:
> > 	Now let's look at the sites that want to process terabytes of
> > data -- perhaps file systems up into the petabyte range.  Often I
> > can see these being large multi-node systems (think 16-1024 node
> > clusters, as are in use today for large super-clusters).  If I were to
> > characterize their performance, I'd likely see the CPU pegged at 100%,
> > with 99% usage in user space.  Let's assume that going to 64-bit
> > block numbers increases the cost of disk accesses by as much as 10%
> > (you'll have to admit -- using a 64bit quantity vs. a 32bit quantity
> > isn't going to come even close to increasing disk access times by
> > 1 millisecond, really, so it is going to be a much smaller fraction
> > when compared to the actual disk latency).
> [snip]
> > 	Is there some logical flaw in the above reasoning?
> 
> But those changes will affect even the fast path, i.e. data that is
> already in the page/buffer caches, in which case we don't have to wait
> for disk access latency. Why would anyone who is working with a
> petabyte of data even consider not relying on essentially always
> hitting data that is available in the read-ahead cache?

It depends entirely on the application. Where the cache can contain
20% of the data, most accesses should already be in memory. If the
data is significantly larger, there is a high chance that the data
will not be there.

> 
> Using similar numbers as presented: if we are working our way through
> every single block in a petabyte filesystem, and the blocksize is 512
> bytes, then the 1us in extra CPU cycles because of 64-bit operations
> would add, according to my back-of-the-envelope calculation, 2199023
> seconds of CPU time, a bit more than 25 days.

Ummm... I don't think it adds that much. You seem to be leaving out the
overlap of disk I/O and computation during read-ahead. This should
eliminate the majority of the delay effect.
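
For concreteness, a rough sketch of the arithmetic being disputed here, in
plain C; the 1us of extra overhead per block operation is the assumption from
the quoted message, not a measured number:

#include <stdio.h>

int main(void)
{
	/* Walk every 512-byte block of a 1 PiB filesystem, charging an
	 * assumed 1 microsecond of extra CPU per block operation. */
	unsigned long long fs_bytes = 1ULL << 50;	/* 1 PiB */
	unsigned long long blocks   = fs_bytes / 512;	/* ~2.2e12 blocks */
	double extra_us_per_block   = 1.0;		/* assumed penalty */
	double extra_seconds        = blocks * extra_us_per_block / 1e6;

	printf("blocks: %llu\n", blocks);
	printf("extra CPU time: %.0f seconds (~%.1f days)\n",
	       extra_seconds, extra_seconds / 86400.0);
	return 0;
}

Whether that time matters in practice is exactly the point in dispute: the
figure assumes none of the extra work is hidden behind disk I/O.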

> Seriously, there is a lot more that needs to be done than introducing a
> 64-bit blocknumber. Effectively 512 byte blocks are far too small for
> that kind of data, and going to pagesize blocks (and increasing pagesize
> to 64KB or 2MB at the same time) is a solution that is far more likely
> to give good results, since it reduces both the total number of
> 'blocks' on the device and the total number of calls
> throughout kernel space, instead of increasing the cost per call.

Talk about adding overhead... How long do you think it takes to read a
2MB block (not to mention the time to update that page)?  The additional
contention on the Fibre Channel I/O alone might kill it if the filesystem
is busy.

Granted, 512 bytes could be considered too small for some things, but
once you pass 32K you start adding a lot of rotational delay problems.
I've used file systems with 256K blocks - they are slow when compared
to the throughput using 32K. I wasn't the one running the benchmarks,
but on a MaxStrat 400GB RAID, 256K-sized data transfers were much
slower (around 3 times slower) than 32K. (The target application was
a GIS server using Oracle.)

-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@navo.hpc.mil

Any opinions expressed are solely my own.

^ permalink raw reply	[flat|nested] 34+ messages in thread
* Re: 64-bit block sizes on 32-bit systems
@ 2001-03-27 22:23 Jesse Pollard
  2001-03-27 23:56 ` Steve Lord
  0 siblings, 1 reply; 34+ messages in thread
From: Jesse Pollard @ 2001-03-27 22:23 UTC (permalink / raw)
  To: jaharkes, Jesse Pollard; +Cc: linux-kernel, linux-fsdevel

Jan Harkes <jaharkes@cs.cmu.edu>:
> 
> On Tue, Mar 27, 2001 at 01:57:42PM -0600, Jesse Pollard wrote:
> > > Using similar numbers as presented: if we are working our way through
> > > every single block in a petabyte filesystem, and the blocksize is 512
> > > bytes, then the 1us in extra CPU cycles because of 64-bit operations
> > > would add, according to my back-of-the-envelope calculation, 2199023
> > > seconds of CPU time, a bit more than 25 days.
> > 
> > Ummm... I don't think it adds that much. You seem to be leaving out the
> > overlap of disk I/O and computation during read-ahead. This should
> > eliminate the majority of the delay effect.
> 
> 1024 TB is around 2*10^12 512-byte blocks; at 1us of "assumed" overhead
> per block operation that works out to 2*10^6 seconds, so no, I
> believe I'm pretty close there. I am considering everything to be
> "available in the cache", i.e. no waiting for disk access.

That would be true for small files (< 5GB). I have to deal with files that
may be 20-100 GB. Except on the largest systems (200GB of main memory),
the data will NOT be in the cache more than ~50% of the time (assuming
only one user...).

> > > Seriously, there is a lot more that needs to be done than introducing a
> > > 64-bit blocknumber. Effectively 512 byte blocks are far too small for
> > > that kind of data, and going to pagesize blocks (and increasing pagesize
> > > to 64KB or 2MB at the same time) is a solution that is far more likely
> > > to give good results, since it reduces both the total number of
> > > 'blocks' on the device and the total number of calls
> > > throughout kernel space, instead of increasing the cost per call.
> > 
> > Talk about adding overhead... How long do you think it takes to read a
> > 2MB block (not to mention the time to update that page)?  The additional
> > contention on the Fibre Channel I/O alone might kill it if the filesystem
> > is busy.
> 
> The time to update the pagetables is identical to the time to update a
> 4KB page when the OS is using a 2MB pagesize. Of course it will take more
> time to load the data into the page; however, it should be a consecutive
> stretch of data on disk, which should give a more efficient transfer
> than small blocks scattered around the disk.

You assume the file is accessed sequentially. The weather models don't do
that. They do have some locality, but only in a 3D sense. When you include
time, it becomes closer to random disk block references once everything has
to be linearized.

> 
> > Granted, 512 bytes could be considered too small for some things, but
> > once you pass 32K you start adding a lot of rotational delay problems.
> > I've used file systems with 256K blocks - they are slow when compared
> > to the throughput using 32K. I wasn't the one running the benchmarks,
> > but on a MaxStrat 400GB RAID, 256K-sized data transfers were much
> > slower (around 3 times slower) than 32K. (The target application was
> > a GIS server using Oracle.)
> 
> But your subsystem (the disk) was probably still using 512 byte blocks,
> possibly scattered. And the OS was still using 4KB pages; it takes more
> time to reclaim and gather 64 pages per IO operation than one, which is
> why I'm saying that the pagesize needs to scale along with the blocksize.

It wasn't - the "disks" were composed of groups of 5 drives in a RAID striped
for speed and spread across 5 SCSI III controllers. Each attached RAID had
a 16MB internal cache. I think the controllers were reading an entire 32K
at a time.

> The application might have been assuming a small block size as well, and
> the OS was told to do several read/modify/write cycles, perhaps even 512
> times as much as necessary.

There was some of that, but not much. Oracle (as I recall) allows for the
specification of transfer size. 

This also brings up the problem of small files. Allocating 2MB per file
would waste quite a bit of disk space (assuming 5-10 million files
with only 15% being 25GB or more).

> I'm not saying that the current system will perform well when working
> with large blocks, but compared to increasing the size of block_t, a
> larger blocksize has more potential to give improvements in the long
> term without adding an unrecoverable performance hit.

Not when the filesystem is required for general use. It only makes it
simpler to actually have a large filesystem. It doesn't help when it
must be used.

Now you are saying that the throughput WILL go down, but only if you use
large block sizes.

I can go along with making block sizes up to 8K. Even 32K for special
circumstances (even 64K for dedicated use). But not larger. NFS overhead on
file I/O becomes far too high (...the worst example being having to read
a 2MB block to update 512 bytes, then write it back... :-)
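
As a rough illustration of that worst case (a plain read-modify-write of one
whole block with no sub-block I/O; the block sizes are just examples):

#include <stdio.h>

int main(void)
{
	/* Changing 512 bytes inside one filesystem block means reading the
	 * whole block and writing the whole block back. */
	unsigned long write_bytes = 512;
	unsigned long block_sizes[] = { 4096, 32768, 2UL * 1024 * 1024 };
	int i;

	for (i = 0; i < 3; i++) {
		unsigned long moved = 2 * block_sizes[i];	/* read + write back */
		printf("%7lu byte block: %8lu bytes moved for a %lu byte update (%lux)\n",
		       block_sizes[i], moved, write_bytes, moved / write_bytes);
	}
	return 0;
}

With a 2MB block the ratio comes out at 8192 times the data actually changed.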

-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@navo.hpc.mil

Any opinions expressed are solely my own.

^ permalink raw reply	[flat|nested] 34+ messages in thread
* Re: 64-bit block sizes on 32-bit systems
@ 2001-03-27 19:30 Jesse Pollard
  0 siblings, 0 replies; 34+ messages in thread
From: Jesse Pollard @ 2001-03-27 19:30 UTC (permalink / raw)
  To: law, linux-kernel

LA Walsh <law@sgi.com>:
> Ion Badulescu wrote:
> > Compile option or not, 64-bit arithmetic is unacceptable on IA32. The
> > introduction of LFS was bad enough, we don't need yet another proof that
> > IA32 sucks. Especially when there *are* better alternatives.
> ===
>         So if it is a compile option -- the majority of people
> wouldn't be affected; are we in agreement on that?  Since the default
> would be to use the same arithmetic as we use now.
> 
>         In fact, I posit that if anything, the majority of the people
> might be helped as the block_nr becomes a 'typed' value -- and
> perhaps the sector_nr as well.  They remain the same size, but as
> a typed value the kernel gains increased integrity from the increased
> type checking.  At worst, it finds no new bugs and there is no impact
> on speed.  Are we in agreement so far?
> 
>         Now let's look at the sites that want to process terabytes of
> data -- perhaps file systems up into the petabyte range.  Often I
> can see these being large multi-node systems (think 16-1024 node
> clusters, as are in use today for large super-clusters).  If I were to
> characterize their performance, I'd likely see the CPU pegged at 100%,
> with 99% usage in user space.  Let's assume that going to 64-bit
> block numbers increases the cost of disk accesses by as much as 10%
> (you'll have to admit -- using a 64bit quantity vs. a 32bit quantity
> isn't going to come even close to increasing disk access times by
> 1 millisecond, really, so it is going to be a much smaller fraction
> when compared to the actual disk latency).

Relatively small quibble - current large clusters (SP3, 330 nodes, 4 CPUs/node)
get around 85% to 90% (real user) user mode total CPU. The rest of the time
is attributed to overhead. Why:
    1. Inter-node communication/synchronization
    2. Memory bus saturation
    3. Users usually use only 3 CPUs/node and allow the last CPU to handle
       filesystem/network/administration/batch handling functions. Using the
       last CPU in the node for part of the job reduces the overall throughput.

>         Ok...but for the sake of
> argument using 10% -- that's still only 10% of the 1% spent in the
> system, or a slowdown of 0.1%.  Now that's using a really liberal
> figure of 10%.  If you look at the actual speed of 64 bit arithmetic
> vs. 32, we're likely talking -- upper bound -- 10x the clocks for
> disk block arithmetic.  Disk block arithmetic is a small fraction
> of time spent in the kernel.  We have to be looking at *maximum*
> slowdowns in the range of a few hundred, maybe a few thousand, extra
> clocks.  1000 extra clocks on a 1GHz machine is 1 microsecond, or
> approx 1/5000th of your average seek latency on a *fast* hard disk.
> So instead of a 10% slowdown we are talking slowdowns in the 1/1000
> range or less.  Now that's a slowdown in the 1% that was being spent
> in the kernel, so now we've slowed the total program speed by 0.001%,
> with the added benefit (to that site) of being able to process those
> mega-gigs (petabytes) of information.  For a hit that is not
> noticeable to human perception, they go from not being able to use
> super-clusters of IA32 machines (for which HW and SW are cheap) to
> being able to use them.  That's quite a cost savings for them.
> 
>         Is there some logical flaw in the above reasoning?

-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@navo.hpc.mil

Any opinions expressed are solely my own.

^ permalink raw reply	[flat|nested] 34+ messages in thread
[parent not found: <Pine.LNX.4.30.0103270022500.21075-100000@age.cs.columbia.edu>]
* Re: 64-bit block sizes on 32-bit systems
@ 2001-03-27 17:22 LA Walsh
  0 siblings, 0 replies; 34+ messages in thread
From: LA Walsh @ 2001-03-27 17:22 UTC (permalink / raw)
  To: linux-kernel

Ion Badulescu wrote:
> Are you being deliberately insulting, "L", or are you one of those users
> who bitch and scream for features they *need* at *any cost*, and who
> have never even opened up the book for Computer Architecture 101?
---
        Sorry, I was borderline insulting.  I'm getting pressure on
personal fronts other than just here.  But my degree is in computer
science and I've had almost 20 years of experience programming things
as small as 8080s w/ 4K of RAM on up.  I'm familiar with the 'cost' of
emulation.

> Let's try to keep the discussion civilized, shall we?
---
        Certainly.
> 
> Compile option or not, 64-bit arithmetic is unacceptable on IA32. The
> introduction of LFS was bad enough, we don't need yet another proof that
> IA32 sucks. Especially when there *are* better alternatives.
===
        So if it is a compile option -- the majority of people
wouldn't be affected; are we in agreement on that?  Since the default
would be to use the same arithmetic as we use now.

        In fact, I posit that if anything, the majority of the people
might be helped as the block_nr becomes a 'typed' value -- and
perhaps the sector_nr as well.  They remain the same size, but as
a typed value the kernel gains increased integrity from the increased
type checking.  At worst, it finds no new bugs and there is no impact
on speed.  Are we in agreement so far?

        Now let's look at the sites that want to process terabytes of
data -- perhaps file systems up into the petabyte range.  Often I
can see these being large multi-node systems (think 16-1024 node
clusters, as are in use today for large super-clusters).  If I were to
characterize their performance, I'd likely see the CPU pegged at 100%,
with 99% usage in user space.  Let's assume that going to 64-bit
block numbers increases the cost of disk accesses by as much as 10%
(you'll have to admit -- using a 64bit quantity vs. a 32bit quantity
isn't going to come even close to increasing disk access times by
1 millisecond, really, so it is going to be a much smaller fraction
when compared to the actual disk latency).

        Ok...but for the sake of
argument using 10% -- that's still only 10% of the 1% spent in the
system, or a slowdown of 0.1%.  Now that's using a really liberal
figure of 10%.  If you look at the actual speed of 64 bit arithmetic
vs. 32, we're likely talking -- upper bound -- 10x the clocks for
disk block arithmetic.  Disk block arithmetic is a small fraction
of time spent in the kernel.  We have to be looking at *maximum*
slowdowns in the range of a few hundred, maybe a few thousand, extra
clocks.  1000 extra clocks on a 1GHz machine is 1 microsecond, or
approx 1/5000th of your average seek latency on a *fast* hard disk.
So instead of a 10% slowdown we are talking slowdowns in the 1/1000
range or less.  Now that's a slowdown in the 1% that was being spent
in the kernel, so now we've slowed the total program speed by 0.001%,
with the added benefit (to that site) of being able to process those
mega-gigs (petabytes) of information.  For a hit that is not
noticeable to human perception, they go from not being able to use
super-clusters of IA32 machines (for which HW and SW are cheap) to
being able to use them.  That's quite a cost savings for them.
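
A compact restatement of that estimate, using only the figures from the
paragraph above (1000 extra clocks, a 1GHz CPU, a 5ms seek, 1% of time in
the kernel -- all of them the message's assumptions, not measurements):

#include <stdio.h>

int main(void)
{
	double extra_clocks    = 1000.0;	/* upper-bound 64-bit arithmetic cost per I/O */
	double clock_hz        = 1e9;		/* "1GHz machine" */
	double seek_seconds    = 5e-3;		/* a fast disk seek */
	double kernel_fraction = 0.01;		/* 1% of total time spent in the kernel */
	double kernel_slowdown = 1.0 / 1000.0;	/* the "1/1000 range" figure above */

	double extra_seconds = extra_clocks / clock_hz;
	printf("extra time per I/O: %g s (1/%.0f of a seek)\n",
	       extra_seconds, seek_seconds / extra_seconds);
	printf("whole-program slowdown: %.3f%%\n",
	       kernel_fraction * kernel_slowdown * 100.0);
	return 0;
}

Which reproduces the 1/5000-of-a-seek and 0.001% figures, given those
assumptions.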

        Is there some logical flaw in the above reasoning?

-linda
-- 
L A Walsh                        | Trust Technology, Core Linux, SGI
law@sgi.com                      | Voice: (650) 933-5338

^ permalink raw reply	[flat|nested] 34+ messages in thread
* Re: 64-bit block sizes on 32-bit systems
@ 2001-03-26 21:27 Jesse Pollard
  2001-03-26 22:07 ` Jonathan Morton
  0 siblings, 1 reply; 34+ messages in thread
From: Jesse Pollard @ 2001-03-26 21:27 UTC (permalink / raw)
  To: dalecki, Eric W. Biederman; +Cc: linux-kernel

Martin Dalecki <dalecki@evision-ventures.com>:
> "Eric W. Biederman" wrote:
> > 
> > Matthew Wilcox <matthew@wil.cx> writes:
> > 
> > > On Mon, Mar 26, 2001 at 10:47:13AM -0700, Andreas Dilger wrote:
> > > > What do you mean by problems 5 years down the road?  The real issue is that
> > > > this 32-bit block count limit affects composite devices like MD RAID and
> > > > LVM today, not just individual disks.  There have been several postings
> > > > I have seen with people having a problem _today_ with a 2TB limit on
> > > > devices.
> > >
> > > people who can afford 2TB of disc can afford to buy a 64-bit processor.
> > 
> > Currently that doesn't solve the problem as block_nr is held in an int.
> > And as gcc compiles an int to a 32bit number on a 64bit processor, the
> > problem still isn't solved.
> > 
> > That at least we need to address.
> 
> And then you must face the fact that there may be the need for
> some of the shelf software, which isn't well supported on 
> correspondig 64 bit architectures... as well. So the
> arguemnt doesn't hold up to the reality in any way.

You are missing the point - I may need to use a 32 bit system to monitor
a large file system. I don't need the compute power of most 64 bit systems
to monitor user file activity.

> BTW, for many reasons 32 bit architectures are, with
> respect to some application schemes, *faster* than 64.

Which is why I want to use them with a 64 bit file system. Some of the
weather models run here have been known to exceed a 100 GB data file. Yes,
one file. Most only need 20GB, but there are a couple of hundred of them...

> Ultra III in 64 bit mode just crawls in comparison to 32.

Depends on what you are doing. If you need to handle large arrays of
floating point it is reasonable (not great, just reasonable).

> Alpha - unfortulatly an orphaned and dyring archtecutre... which
> is not well supported by sw verndors...

These are NOT the only 64 bit systems - Intel, PPC, IBM (in various guises).
If you need raw compute power, the Alpha is pretty good (we have over
1000 of them in a Cray T3..).

-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@navo.hpc.mil

Any opinions expressed are solely my own.

^ permalink raw reply	[flat|nested] 34+ messages in thread
* Re: 64-bit block sizes on 32-bit systems
@ 2001-03-26 19:26 Jesse Pollard
  0 siblings, 0 replies; 34+ messages in thread
From: Jesse Pollard @ 2001-03-26 19:26 UTC (permalink / raw)
  To: matthew, LA Walsh; +Cc: linux-kernel, linux-fsdevel

---------  Received message begins Here  ---------

> 
> On Mon, Mar 26, 2001 at 08:39:21AM -0800, LA Walsh wrote:
> > I vaguely remember a discussion about this a few months back.
> > If I remember, the reasoning was it would unnecessarily slow
> > down smaller systems that would never have block devices in
> > the 4-28T range attached.  
> 
> 4k page size * 2GB = 8TB.
> 
> i consider it much more likely on such systems that the page size will
> be increased to maybe 16 or 64k which would give us 32TB or 128TB.
> you keep on trying to increase the size of types without looking at
> what gcc outputs in the way of code that manipulates 64-bit types.
> seriously, why don't you just try it?  see what the performance is.
> see what the code size is.  then come back with some numbers.  and i mean
> numbers, not `it doesn't feel any slower'.
> 
> personally, i'm going to see what the situation looks like in 5 years time
> and try to solve the problem then.  there're enough real problems with the
> VFS today that i don't feel inclined to fix tomorrow's potential problems.

I don't feel that it is that far away ... IBM has already released a 64-CPU
Intel-based system (NUMA). We already have systems in that class (though
64 bit based) that use 5 TB file systems. The need is coming, and appears
to be coming fast. It should be resolved during the improvements to the
VFS.

A second reason to include it in the VFS is that the low level filesystem
implementation would NOT be required to use it. If the administrator
CHOOSES to access a 16TB filesystem from a workstation, then it should
be possible (likely something like the GFS, where the administrator is
just monitoring things, would be reasonable for a 32 bit system to do).

As I see it, the VFS itself doesn't really care how wide the block number is;
it just carries relatively opaque values that the filesystem implementation
uses. Most of the overhead should just be copying an extra 4 bytes around.

-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@navo.hpc.mil

Any opinions expressed are solely my own.

^ permalink raw reply	[flat|nested] 34+ messages in thread
* Re: 64-bit block sizes on 32-bit systems
@ 2001-03-26 18:01 Manfred Spraul
  2001-03-26 18:07 ` Matthew Wilcox
  2001-03-26 19:40 ` LA Walsh
  0 siblings, 2 replies; 34+ messages in thread
From: Manfred Spraul @ 2001-03-26 18:01 UTC (permalink / raw)
  To: matthew; +Cc: law, linux-kernel

>> I vaguely remember a discussion about this a few months back.
>> If I remember, the reasoning was it would unnecessarily slow
>> down smaller systems that would never have block devices in
>> the 4-28T range attached.
>
>4k page size * 2GB = 8TB.

Try it.
If your drive (array) is larger than 512 bytes * 4G sectors (2TB), Linux
will eat your data.

drivers/block/ll_rw_blk.c, in submit_bh()
>    bh->b_rsector = bh->b_blocknr * (bh->b_size >> 9);

But it shouldn't cause data corruption:
it was discussed a few months ago, and IIRC LVM refuses to create
volumes that are too large.
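
A minimal userspace sketch of that overflow (the field names mirror the
kernel buffer head, but this is a standalone illustration, assuming 32-bit
sector and block numbers as on IA32):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t b_size    = 4096;		/* 4KB buffer */
	uint32_t b_blocknr = 0x40000000;	/* block 2^30: 4TB into the device */

	/* The 32-bit product wraps; the widened one does not. */
	uint32_t rsector_32 = b_blocknr * (b_size >> 9);
	uint64_t rsector_64 = (uint64_t)b_blocknr * (b_size >> 9);

	printf("32-bit sector number: %lu\n",  (unsigned long)rsector_32);
	printf("64-bit sector number: %llu\n", (unsigned long long)rsector_64);
	return 0;
}

The 32-bit result wraps to 0, so the request would be issued against the
start of the device -- which is the "eat your data" failure mode described
above.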

--
    Manfred



^ permalink raw reply	[flat|nested] 34+ messages in thread
* Re: 64-bit block sizes on 32-bit systems
@ 2001-03-26 17:35 LA Walsh
  0 siblings, 0 replies; 34+ messages in thread
From: LA Walsh @ 2001-03-26 17:35 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-fsdevel


Matthew Wilcox wrote:
> 
> On Mon, Mar 26, 2001 at 08:39:21AM -0800, LA Walsh wrote:
> > I vaguely remember a discussion about this a few months back.
> > If I remember, the reasoning was it would unnecessarily slow
> > down smaller systems that would never have block devices in
> > the 4-28T range attached.
> 
> 4k page size * 2GB = 8TB.
---
        Drat...was being more optimistic -- you're right
the block_nr can be negative.  Somehow thought page size could
be 8K....living in future land.  That just makes the limitations
even closer at hand...:-(
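
A quick sketch of where those ceilings come from (assuming block_nr is a
signed 32-bit int, so only 2^31 usable block numbers):

#include <stdio.h>

int main(void)
{
	unsigned long long max_blocks = 1ULL << 31;	/* positive range of a 32-bit int */
	unsigned long block_sizes[] = { 512, 4096, 16384, 65536 };
	int i;

	for (i = 0; i < 4; i++)
		printf("%6lu byte blocks: max device size %llu TB\n",
		       block_sizes[i],
		       max_blocks * block_sizes[i] >> 40);
	return 0;
}

Which gives 1TB at 512-byte blocks, 8TB at 4K, and 32TB/128TB at 16K/64K --
the figures quoted above.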

> you keep on trying to increase the size of types without looking at
> what gcc outputs in the way of code that manipulates 64-bit types.
---
        Maybe someone will backport some of the features of the
IA-64 code generator into 'gcc'.  I've been told that in some 
cases it's a 2.5x performance difference.  If 'gcc' is generating
bad code, then maybe the 'gcc' people will increase the quality
of their code -- I'm sure they are just as eagerly working on
gcc improvements as we are kernel improvements.  When I worked
on the PL/M compiler project at Intel, I know our code-optimization
guy would spend endless cycles trying to get better optimization
out of the code.  He got great joy out of doing so -- and
that was almost 20 years ago -- and code generation has come
a *long* way since then.

> seriously, why don't you just try it?  see what the performance is.
> see what the code size is.  then come back with some numbers.  and i mean
> numbers, not `it doesn't feel any slower'.
---
        As for 'trying' it -- would anyone care if we virtualized
the block_nr into a typedef?  That seems like it would provide
for cleaner (type-checked) code at no performance penalty and
would more easily allow such comparisons.

        Well, this is my point: if I have disks > 8T, wouldn't
it be at *all* beneficial to be able to *choose* some slight
performance impact and access those large disks, vs. having no
choice?  Having it as a configurable option would allow a given
installation to make that choice rather than having no choice
at all.  BTW, are block_nrs on RAID arrays subject to this
limitation?
> 
> personally, i'm going to see what the situation looks like in 5 years time
> and try to solve the problem then.
---
        It's not the same, but SGI has had customers for over
3 years using >2T *files*.  The point I'm looking at is that if
the P-X series gets developed enough, and someone is using a
4-16P system, a corporate user might be approaching that limit
today or tomorrow.  Joe User might not for 5 years, but that's
what the configurability is about.  Keep Linux usable for both
ends of the scale -- "I love scalability"....

-l

-- 
L A Walsh                        | Trust Technology, Core Linux, SGI
law@sgi.com                      | Voice: (650) 933-5338

^ permalink raw reply	[flat|nested] 34+ messages in thread
* 64-bit block sizes on 32-bit systems
@ 2001-03-26 16:39 LA Walsh
  2001-03-26 17:18 ` Matthew Wilcox
                   ` (2 more replies)
  0 siblings, 3 replies; 34+ messages in thread
From: LA Walsh @ 2001-03-26 16:39 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel

I vaguely remember a discussion about this a few months back.
If I remember, the reasoning was it would unnecessarily slow
down smaller systems that would never have block devices in
the 4-28T range attached.  

However, isn't it possible there will continue to be a series
of P-IV, V, VI, VII ...etc. add-ons that will be used for some time
to come?  I've even heard it suggested that we might see
2 or more CPUs on a single chip as a way to increase CPU
capacity w/o driving up clock speed.  Given the cheapness of
.25T drives now, seeing the possibility of 4T drives doesn't seem
that remote (maybe 5 years?).

Side question: does the 32-bit block number limit also apply to
RAID disks, or do they use a different block-nr type?

So... is it the plan, or has it been thought about -- 'abstracting'
block numbers as a typedef 'block_nr', then at compile time
having it be selectable as to whether this is to
be a 32-bit or 64-bit quantity -- so that older systems would
lose no efficiency?  Drivers that couldn't be or hadn't been
ported to use 'block_nr' could default to being disabled if
64-bit blocks were selected, etc.
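
A hypothetical sketch of what such a compile-time selectable type could look
like (the CONFIG_64BIT_BLOCK_NR symbol and the block_nr_t name are invented
for illustration, not existing kernel symbols):

#include <stdint.h>

#ifdef CONFIG_64BIT_BLOCK_NR
typedef uint64_t block_nr_t;	/* large devices: 64-bit block numbers */
#else
typedef uint32_t block_nr_t;	/* default: the same 32-bit arithmetic as today */
#endif

/* A typed helper keeps the widening in one place instead of at every
 * call site, and lets the compiler catch accidental narrowing. */
static inline uint64_t block_to_byte_offset(block_nr_t blk, unsigned int blocksize)
{
	return (uint64_t)blk * blocksize;
}

Drivers and filesystems would then take block_nr_t instead of a bare int, and
the 32-bit build would generate exactly the code it does today.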

So has this idea been tossed about and/or previously thrashed out?

-l

-- 
L A Walsh                        | Trust Technology, Core Linux, SGI
law@sgi.com                      | Voice: (650) 933-5338

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2001-03-28 14:57 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2001-03-27 19:57 64-bit block sizes on 32-bit systems Jesse Pollard
2001-03-27 20:20 ` Jan Harkes
2001-03-27 21:55   ` LA Walsh
  -- strict thread matches above, loose matches on Subject: below --
2001-03-27 22:23 Jesse Pollard
2001-03-27 23:56 ` Steve Lord
2001-03-28  8:09   ` Brad Boyer
2001-03-28 14:53     ` Dave Kleikamp
2001-03-27 19:30 Jesse Pollard
     [not found] <Pine.LNX.4.30.0103270022500.21075-100000@age.cs.columbia.edu>
     [not found] ` <3AC0CA9C.3D804361@sgi.com>
2001-03-27 19:00   ` Jan Harkes
2001-03-27 17:22 LA Walsh
2001-03-26 21:27 Jesse Pollard
2001-03-26 22:07 ` Jonathan Morton
2001-03-27  4:14   ` Jesse Pollard
2001-03-26 19:26 Jesse Pollard
2001-03-26 18:01 Manfred Spraul
2001-03-26 18:07 ` Matthew Wilcox
2001-03-26 19:40 ` LA Walsh
2001-03-26 21:53   ` Manfred Spraul
2001-03-26 22:07     ` LA Walsh
2001-03-26 17:35 LA Walsh
2001-03-26 16:39 LA Walsh
2001-03-26 17:18 ` Matthew Wilcox
2001-03-26 17:47   ` Andreas Dilger
2001-03-26 18:09     ` Matthew Wilcox
2001-03-26 18:37       ` Eric W. Biederman
2001-03-26 19:36         ` Martin Dalecki
2001-03-26 23:03         ` AJ Lewis
2001-03-26 19:05       ` Scott Laird
2001-03-26 19:09       ` Andreas Dilger
2001-03-26 20:31         ` Dan Hollis
2001-03-26 19:20       ` Rik van Riel
2001-03-26 20:14       ` Jes Sorensen
2001-03-26 17:58 ` Eric W. Biederman
2001-03-28  8:06 ` Matthew Wilcox
