From: Matthew Dobson <colpatch@us.ibm.com>
To: Andrew Morton <akpm@digeo.com>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
	linux-mm@kvack.org, LSE <lse-tech@lists.sourceforge.net>,
	Martin Bligh <mjbligh@us.ibm.com>,
	Michael Hohnbaum <hohnbaum@us.ibm.com>
Subject: Re: [rfc][patch] Memory Binding API v0.3 2.5.41
Date: Thu, 10 Oct 2002 11:29:00 -0700	[thread overview]
Message-ID: <3DA5C6EC.7040904@us.ibm.com> (raw)
In-Reply-To: 3DA4EE6C.6B4184CC@digeo.com

Andrew Morton wrote:
> Matthew Dobson wrote:
>>Greetings & Salutations,
>>        Here's a wonderful patch that I know you're all dying for...  Memory
>>Binding!
> Seems reasonable to me.
Good news!

> Could you tell us a bit about the operator's view of this?
> 
> I assume that a typical usage scenario would be to bind a process
> to a bunch of CPUs and to then bind that process to a bunch of
> memblks as well? 
> 
> If so, then how does the operator know how to identify those
> memblks?  To perform the (cpu list) <-> (memblk list) mapping?
Well, that's what the super-duper In-Kernel topology API is for!  ;)  If 
the operator wanted to ensure that all the process's memory was *only* 
allocated from the memblks closest to her bound CPUs, she'd loop over 
her cpu binding, and for each set bit, she'd:
	bitmask |= 1 << __node_to_memblk(__cpu_to_node(cpu));
I suppose I could include a macro to do this in the patch, but I was a 
bit afraid (and still am) that it may already be a bit large for 
people's tastes.  I've got some suggestions on how to split it up/pare 
it down, so we'll see.
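
Spelled out a bit more, a rough sketch of that loop might look like the 
following.  It assumes the __cpu_to_node()/__node_to_memblk() topology 
macros and that both masks fit in an unsigned long; the helper name is 
just for illustration and isn't part of the patch:

	/*
	 * Sketch only: build the mask of memblks closest to the CPUs
	 * set in cpu_mask.  cpus_to_memblks() is a made-up name.
	 */
	unsigned long cpus_to_memblks(unsigned long cpu_mask)
	{
		unsigned long memblk_mask = 0;
		int cpu;

		for (cpu = 0; cpu < BITS_PER_LONG; cpu++)
			if (cpu_mask & (1UL << cpu))
				memblk_mask |= 1UL <<
					__node_to_memblk(__cpu_to_node(cpu));

		return memblk_mask;
	}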

> Also, what advantage does this provide over the current node-local
> allocation policy?  I'd have thought that once you'd bound a 
> process to a CPU (or to a node's CPUs) that as long as the zone
> fallback list was right, that process would be getting local memory
> pretty much all the time anyway?
Very true...  This is specifically to allow for processes that want to 
do something *different* from the default policy.  Again, akin to CPU 
affinity, this is not something the average process is ever going to 
use, or even care about.  The majority of processes don't specifically 
bind themselves to certain CPUs or groups of CPUs, because for them the 
default scheduler policies are fine.  Likewise, for the majority of 
processes the default memory allocation policy works just dandy.  This 
is for processes that want to do something different: testing/debugging 
apps on a large (likely NUMA) system, high-end databases, and who knows 
what else?  There is also a plan to add a function call to bind specific 
regions of a process's memory to certain memblks, which would allow for 
efficient shared memory for process groups spread across a large system.
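
Just to make that idea concrete: none of this exists in the patch yet, 
and the name, arguments, and semantics below are purely hypothetical, 
but a per-region binding call could look roughly like:

	/*
	 * Hypothetical sketch only -- not part of the current patch.
	 * Bind the region [start, start + len) to the memblks set in
	 * memblk_mask.
	 */
	int mem_bind_region(void *start, size_t len,
			    unsigned long memblk_mask);

	/* e.g. bind a shared segment to memblks 2 and 3: */
	/* mem_bind_region(shm_addr, shm_len, (1UL << 2) | (1UL << 3)); */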

> Last but not least: you got some benchmark numbers for this?
Nope...  It's not something that is going to (on average) improve 
benchmark numbers for something like a kernel compile...  As you 
mentioned above, the default policy is to allocate from the local memory 
block anyway.  This API is more useful when you want to pin your memory 
close to a particular process that your process is working with, or pin 
your memory to a different node than the one you're executing on to 
purposely test/debug something.  If you'd like, I can do some kernbench 
runs or something to come up with numbers showing that it doesn't 
negatively affect performance, but I don't know of any benchmarking 
suites offhand that would show positive numbers.

> Thanks.
My pleasure! ;)

Cheers!

-Matt

