From: Andrew Morton <akpm@osdl.org>
To: Ravikiran G Thirumalai <kiran@scalex86.org>
Cc: linux-kernel@vger.kernel.org, shai@scalex86.org
Subject: Re: [patch] Cache align futex hash buckets
Date: Mon, 20 Feb 2006 17:09:40 -0800
Message-ID: <20060220170940.1496e1d5.akpm@osdl.org>
In-Reply-To: <20060221010430.GE3594@localhost.localdomain>
Ravikiran G Thirumalai <kiran@scalex86.org> wrote:
>
> On Mon, Feb 20, 2006 at 04:23:17PM -0800, Andrew Morton wrote:
> > Ravikiran G Thirumalai <kiran@scalex86.org> wrote:
> > >
> > > On Mon, Feb 20, 2006 at 03:34:19PM -0800, Andrew Morton wrote:
> > > > Andrew Morton <akpm@osdl.org> wrote:
> > > > >
> > > > > > @@ -100,9 +100,10 @@ struct futex_q {
> > > > > > struct futex_hash_bucket {
> > > > > > spinlock_t lock;
> > > > > > struct list_head chain;
> > > > > > -};
> > > > > > +} ____cacheline_internodealigned_in_smp;
> > > > > >
> > > > > > -static struct futex_hash_bucket futex_queues[1<<FUTEX_HASHBITS];
> > > > > > +static struct futex_hash_bucket futex_queues[1<<FUTEX_HASHBITS]
> > > > > > + __cacheline_aligned_in_smp;
> > > > > >
> > > > >
> > > > > How much memory does that thing end up consuming?
> > > >
> > > > I think a megabyte?
> > >
> > > On most machines it would be 256 * 128 = 32k, or 16k on arches with 64B
> > > cachelines. This looked like a simpler solution for spinlocks falling on
> > > the same cacheline. So is 16/32k unreasonable?
> > >
> >
> > CONFIG_X86_VSMP enables 4096-byte padding for
> > ____cacheline_internodealigned_in_smp. It's a megabyte.
>
> Yes, only on vSMPowered systems. Well, we have a large
> internode cacheline, but these machines have lots of memory too. I
> thought a simpler padding solution might be acceptable as futex_queues
> would be large only on our boxes.
Well, it's your architecture... As long as you're finding this to be a
sufficiently large problem in testing to justify consuming a meg of memory,
then fine, let's do it.
But your initial changelog was rather benchmark-free? It's always nice to
see numbers accompanying a purported optimisation patch.
Thread overview: 22+ messages
2006-02-20 23:32 [patch] Cache align futex hash buckets Ravikiran G Thirumalai
2006-02-20 23:33 ` Andrew Morton
2006-02-20 23:34 ` Andrew Morton
2006-02-21 0:09 ` Ravikiran G Thirumalai
2006-02-21 0:23 ` Andrew Morton
2006-02-21 1:04 ` Ravikiran G Thirumalai
2006-02-21 1:09 ` Andrew Morton [this message]
2006-02-21 1:39 ` Ravikiran G Thirumalai
2006-02-21 14:44 ` Andi Kleen
2006-02-21 3:30 ` Nick Piggin
2006-02-21 18:33 ` Christoph Lameter
2006-02-21 23:14 ` Christoph Lameter
2006-02-22 0:40 ` Nick Piggin
2006-02-22 2:08 ` Andrew Morton
2006-02-22 2:35 ` Ravikiran G Thirumalai
2006-02-22 2:37 ` Nick Piggin
2006-02-22 20:17 ` Ravikiran G Thirumalai
2006-02-22 20:50 ` Andrew Morton
[not found] ` <20060223015144.GC3663@localhost.localdomain>
2006-02-23 2:08 ` Ravikiran G Thirumalai
2006-02-21 20:20 ` Ravikiran G Thirumalai
2006-02-22 0:45 ` Nick Piggin
2006-02-22 2:09 ` Andrew Morton