linux-kernel.vger.kernel.org archive mirror
From: Christoph Lameter <clameter@sgi.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org, hch@infradead.org
Subject: Re: [patch 00/14] Page cache cleanup in anticipation of Large Blocksize support
Date: Thu, 14 Jun 2007 15:22:46 -0700 (PDT)
Message-ID: <Pine.LNX.4.64.0706141517120.2240@schroedinger.engr.sgi.com>
In-Reply-To: <20070614150417.c73fb6b9.akpm@linux-foundation.org>

On Thu, 14 Jun 2007, Andrew Morton wrote:

> With 64k pagesize the amount of memory required to hold a kernel tree (say)
> will go from 270MB to 1400MB.   This is not an optimisation.

I do not think that 100% of users will do kernel compiles all day like 
we do. We would likely prefer a 4k page size for our small text files.
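
To illustrate with made-up numbers (the file count and average size
below are rough assumptions in the ballpark of a kernel tree, chosen
only so the arithmetic is easy to follow): every cached file occupies
a whole number of pages, so a tree full of small files rounds up much
harder at 64k than at 4k.

#include <stdio.h>

/* Illustrative sketch only: page-cache footprint of many small files
 * at different page sizes. */
static unsigned long long cache_bytes(unsigned long long nfiles,
                                      unsigned long long avg_size,
                                      unsigned long long page_size)
{
        /* each cached file is rounded up to whole pages */
        unsigned long long pages = (avg_size + page_size - 1) / page_size;

        return nfiles * pages * page_size;
}

int main(void)
{
        /* hypothetical tree: ~23,000 files averaging ~12k each */
        unsigned long long n = 23000, avg = 12 * 1024;

        printf("4k pages:  %llu MB\n", cache_bytes(n, avg, 4096) >> 20);
        printf("64k pages: %llu MB\n", cache_bytes(n, avg, 65536) >> 20);
        return 0;
}

With those assumed numbers the totals come out near the 270MB and
1400MB figures quoted above.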

> Several 64k pagesize people have already spent time looking at various
> tail-packing schemes to get around this serious problem.  And that's on
> _server_ class machines.  Large ones.  I don't think
> laptop/desktop/small-server machines would want to go anywhere near this.

I never understood the point of that exercise. If you have a variable 
page size then 64k pages can be used specifically for the files that 
benefit from them. Typical usage scenarios are video/audio streaming 
I/O, large picture files, and large documents with embedded images. 
These are the major usage scenarios today and we handle them poorly. 
Our DVD/CD subsystems are currently not capable of reading directly 
from those devices into the page cache since they do not do I/O in 4k 
chunks.
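
To make the "large pages only where they help" idea concrete, here is
a purely hypothetical policy sketch (this is not the interface of this
patch set, just an illustration of the kind of per-file choice I mean):

#include <stddef.h>

enum access_pattern { SMALL_TEXT, STREAMING_MEDIA, LARGE_IMAGE };

/* Hypothetical heuristic: pick a page-cache block size per file. */
static size_t pick_blocksize(enum access_pattern pat, size_t file_size)
{
        /* large, sequentially accessed files benefit from 64k blocks */
        if ((pat == STREAMING_MEDIA || pat == LARGE_IMAGE) &&
            file_size >= 1024 * 1024)
                return 64 * 1024;

        /* small text files would mostly waste memory at 64k, keep 4k */
        return 4 * 1024;
}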

> > fsck times etc etc are becoming an issue for desktop 
> > systems
> 
> I don't see what fsck has to do with it.
> 
> fsck is single-threaded (hence no locking issues) and operates against the
> blockdev pagecache and does a _lot_ of small reads (indirect blocks,
> especially).  If the memory consumption for each 4k read jumps to 64k, fsck
> is likely to slow down due to performing a lot more additional IO and due
> to entering page reclaim much earlier.

Every 64k block contains more information and the number of pages to be
managed is reduced by a factor of 16. That means fewer seeks, less TLB 
pressure, fewer reads, better use of the CPU caches and more 
prefetch-friendly behavior.
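
For scale (illustrative numbers only), the bookkeeping drops by the
same factor of 16:

#include <stdio.h>

int main(void)
{
        /* pages needed to cache 1GB of file data */
        unsigned long long data = 1ULL << 30;

        printf("4k pages:  %llu pages to manage\n", data / 4096);
        printf("64k pages: %llu pages to manage\n", data / 65536);
        return 0;
}

That is 262144 page structs and radix tree entries versus 16384 for
the same amount of cached data.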

Thread overview: 44+ messages
2007-06-14 19:38 [patch 00/14] Page cache cleanup in anticipation of Large Blocksize support clameter
2007-06-14 19:38 ` [patch 01/14] Define functions for page cache handling clameter
2007-06-14 19:56   ` Sam Ravnborg
2007-06-14 19:58     ` Christoph Lameter
2007-06-14 20:07       ` Sam Ravnborg
2007-06-14 19:38 ` [patch 02/14] Pagecache zeroing: zero_user_segment, zero_user_segments and zero_user clameter
2007-06-14 19:38 ` [patch 03/14] Use page_cache_xx function in mm/filemap.c clameter
2007-06-14 19:38 ` [patch 04/14] Use page_cache_xxx in mm/page-writeback.c clameter
2007-06-14 19:38 ` [patch 05/14] Use page_cache_xxx in mm/truncate.c clameter
2007-06-14 19:38 ` [patch 06/14] Use page_cache_xxx in mm/rmap.c clameter
2007-06-14 19:38 ` [patch 07/14] Use page_cache_xx in mm/filemap_xip.c clameter
2007-06-14 19:38 ` [patch 08/14] Use page_cache_xx in mm/migrate.c clameter
2007-06-14 19:38 ` [patch 09/14] Use page_cache_xx in fs/libfs.c clameter
2007-06-14 19:38 ` [patch 10/14] Use page_cache_xx in fs/sync clameter
2007-06-14 19:38 ` [patch 11/14] Use page_cache_xx in fs/buffer.c clameter
2007-06-14 19:38 ` [patch 12/14] Use page_cache_xxx in mm/mpage.c clameter
2007-06-14 19:38 ` [patch 13/14] Use page_cache_xxx in mm/fadvise.c clameter
2007-06-14 19:38 ` [patch 14/14] Use page_cache_xx in fs/splice.c clameter
2007-06-14 20:06 ` [patch 00/14] Page cache cleanup in anticipation of Large Blocksize support Andrew Morton
2007-06-14 21:07   ` Christoph Hellwig
2007-06-14 21:25     ` Dave McCracken
2007-06-14 21:20   ` Christoph Lameter
2007-06-14 21:32     ` Andrew Morton
2007-06-14 21:37       ` Christoph Lameter
2007-06-14 22:04         ` Andrew Morton
2007-06-14 22:22           ` Christoph Lameter [this message]
2007-06-14 22:49             ` Andrew Morton
2007-06-15  0:45               ` Christoph Lameter
2007-06-15  1:40                 ` Andrew Morton
2007-06-15  2:04                   ` Christoph Lameter
2007-06-15  2:23                     ` Andrew Morton
2007-06-15  2:37                       ` Christoph Lameter
2007-06-15  9:03                       ` David Chinner
2007-06-14 23:30           ` David Chinner
2007-06-14 23:41             ` Andrew Morton
2007-06-15  0:29               ` David Chinner
2007-06-15 15:05           ` Dave Kleikamp
2007-06-17  1:25       ` Arjan van de Ven
2007-06-17  5:02         ` Matt Mackall
2007-06-18  2:08           ` Christoph Lameter
2007-06-18  3:00             ` Arjan van de Ven
2007-06-18  4:50             ` William Lee Irwin III
2007-06-14 23:54   ` David Chinner
2007-07-02 18:16   ` Badari Pulavarty
