From: Antonio Vargas <wind@cocodriloo.com>
To: "Martin J. Bligh" <mbligh@aracnet.com>
Cc: Antonio Vargas <wind@cocodriloo.com>,
linux-kernel@vger.kernel.org, nicoya@apia.dhs.org
Subject: Re: cow-ahead N pages for fault clustering
Date: Fri, 18 Apr 2003 19:35:11 +0200
Message-ID: <20030418173511.GC27055@wind.cocodriloo.com>
In-Reply-To: <20890000.1050385742@[10.10.2.4]>
On Mon, Apr 14, 2003 at 10:49:03PM -0700, Martin J. Bligh wrote:
> >> > >> Ah, you probably don't want to do that ... it's very expensive.
> >> > >> Moreover, if you exec 2ns later, all the effort will be wasted ...
> >> > >> and it's very hard to deterministically predict whether you'll exec
> >> > >> or not (stupid UNIX semantics). Doing it lazily is probably best,
> >> > >> and as to "nodes would not have to reference the memory from
> >> > >> others" - you're still doing that, you're just batching it on the
> >> > >> front end.
> >> > >
> >> > > True... What about a vma-level COW-ahead just like we have a
> >> > > file-level read-ahead, then? I mean batching the COW at
> >> > > unCOW-because-of-write time.
> >> >
> >> > That'd be interesting ... and you can test that on a UP box, is not
> >> > just NUMA. Depends on the workload quite heavily, I suspect.
> >> >
> >> > > btw, COW-ahead sound really silly :)
> >> >
> >> > Yeah. So be sure to call it that if it works out ... we need more
> >> > things like that ;-) Moooooo.
> >>
> >> What about the attached one? I'm compiling it right now to test in UML :)
> >>
> >> [ snip fake-NUMA-on-SMP discussion ]
> >>
> >
> > OK, too quick for me... this next one applies, compiles and boots on
> > 2.5.66 + uml. Now I wonder how can I test if this is useful... ideas?
>
> Well, benchmark it ;-) My favourite trick is to just
> "/usr/bin/time make bzImage" on some fixed kernel version & config,
> but aim7 / aim9 is pretty easy to set up too, and might be interesting.
>
> M.
I've benchmarked my patch, set to COW 2 pages per fault, with a kernel build:
make allnoconfig
date >>aaa
make bzImage
date >>aaa
and then checked the time difference manually.
It took the same time on both vanilla 2.5.66 and 2.5.66+cowahead.
Perhaps it's better for other workloads...
ps. my posted patch had a small bug: it ran the COW loop only once,
    so it only COWed 1 page... be sure to fix the end test if
    you want to benchmark it further.
Thread overview: 14+ messages
2003-04-14 13:31 Quick question about hyper-threading (also some NUMA stuff) Timothy Miller
2003-04-14 14:55 ` Martin J. Bligh
2003-04-14 15:29 ` Antonio Vargas
2003-04-14 15:39 ` Martin J. Bligh
2003-04-14 15:57 ` Antonio Vargas
2003-04-14 16:24 ` Martin J. Bligh
2003-04-14 16:43 ` Antonio Vargas
2003-04-14 16:37 ` Martin J. Bligh
2003-04-14 17:14 ` Antonio Vargas
2003-04-14 17:22 ` Martin J. Bligh
2003-04-14 18:32 ` cow-ahead N pages for fault clustering Antonio Vargas
2003-04-14 18:47 ` Antonio Vargas
2003-04-15 5:49 ` Martin J. Bligh
2003-04-18 17:35 ` Antonio Vargas [this message]