From: Christoph Hellwig <hch@sgi.com>
To: Erich Focht <efocht@ess.nec.de>
Cc: "Martin J. Bligh" <mbligh@aracnet.com>,
	Michael Hohnbaum <hohnbaum@us.ibm.com>,
	Ingo Molnar <mingo@elte.hu>,
	linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] pooling NUMA scheduler with initial load balancing
Date: Tue, 8 Oct 2002 21:15:13 -0400
Message-ID: <20021008211513.A28583@sgi.com>
In-Reply-To: <200210081933.06677.efocht@ess.nec.de>; from efocht@ess.nec.de on Tue, Oct 08, 2002 at 07:33:06PM +0200

On Tue, Oct 08, 2002 at 07:33:06PM +0200, Erich Focht wrote:
> Aaargh, you got the wrong second patch :-( Sorry for that...
> 
> Thanks for the hints, I cleaned up the first patch, too. No
> CONFIG_NUMA_SCHED any more, switched to MAX_NUMNODES, including
> asm/numa.h from asm/topology.h, so no need for you to see it.


diff -urNp a/arch/i386/kernel/smpboot.c b/arch/i386/kernel/smpboot.c
--- a/arch/i386/kernel/smpboot.c	Fri Sep 27 23:49:54 2002
+++ b/arch/i386/kernel/smpboot.c	Tue Oct  8 11:37:56 2002
@@ -1194,6 +1194,11 @@ int __devinit __cpu_up(unsigned int cpu)
 void __init smp_cpus_done(unsigned int max_cpus)
 {
 	zap_low_mappings();
+#ifdef CONFIG_NUMA
+	pooldata_lock();
+	bld_pools();
+	pooldata_unlock();
+#endif

All callers of bld_pools() need the pooldata lock - taking
it inside that function makes the code a little more readable..
Also I'd suggest renaming bld_pools() to build_pools() ;)
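
I.e. something like this (sketch only, the body of bld_pools() elided):

void build_pools(void)
{
	pooldata_lock();
	/* ... what bld_pools() does today ... */
	pooldata_unlock();
}

and the smp_cpus_done() hunk shrinks to:

#ifdef CONFIG_NUMA
	build_pools();
#endif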

-	cache_decay_ticks = 10;	/* XXX base this on PAL info and cache-bandwidth estimate */
+	cache_decay_ticks = 8;	/* XXX base this on PAL info and cache-bandwidth estimate */

Could you explain this change, and its effect on non-NUMA IA64
machines?

 /**
+ * atomic_inc_return - increment atomic variable and return new value
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1 and return it's new value.  Note that
+ * the guaranteed useful range of an atomic_t is only 24 bits.
+ */
+static inline int atomic_inc_return(atomic_t *v){
+	atomic_inc(v);
+	return v->counter;
+}

How do you guarantee this is atomic?  Please make it fit
Documentation/CodingStyle, btw..
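
To make it really atomic on i386 the increment and the read need to be
a single instruction, e.g. a locked xadd in the same style as the other
ops in asm-i386/atomic.h.  Untested sketch (and the 386, which lacks
xadd, would still need a fallback):

static inline int atomic_inc_return(atomic_t *v)
{
	int i = 1;

	/* LOCK is "lock; " on SMP; xadd leaves the old value of
	 * v->counter in %0, so the new value is that plus one.
	 */
	__asm__ __volatile__(
		LOCK "xaddl %0, %1"
		: "+r" (i), "+m" (v->counter)
		: : "memory");
	return i + 1;
}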

+int numpools, pool_ptr[MAX_NUMNODES+1], pool_cpus[NR_CPUS], pool_nr_cpus[MAX_NUMNODES];
+unsigned long pool_mask[MAX_NUMNODES];

Hmm, shouldn't those [MAX_NUMNODES] arrays be in some per-node
data structure to avoid cacheline bouncing?
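
I.e. something along these lines (type and field names made up), so that
each node's data sits in its own cacheline and is mostly touched by CPUs
of that node:

struct node_pool {
	int		nr_cpus;	/* number of CPUs in this node's pool */
	unsigned long	cpu_mask;	/* bitmask of those CPUs */
} ____cacheline_aligned;		/* from <linux/cache.h> */

static struct node_pool node_pools[MAX_NUMNODES];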

+void pooldata_lock(void)
+{
+	int i;
+ retry:
+	while (atomic_read(&pool_lock));
+	if (atomic_inc_return(&pool_lock) > 1) {
+		atomic_dec(&pool_lock);
+		goto retry;
+	}

Why not a simple spin_lock()?
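
I.e. just (sketch, with pool_lock turned into a real lock):

static spinlock_t pool_lock = SPIN_LOCK_UNLOCKED;

void pooldata_lock(void)
{
	spin_lock(&pool_lock);
}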

+	/* 
+	 * Wait a while, any loops using pool data should finish
+	 * in between. This is VERY ugly and should be replaced
+	 * by some real RCU stuff. [EF]
+	 */
+	for (i=0; i<100; i++)
+		udelay(1000);

Urgg.  I'd suggest you switch to RCU now and make your patch apply
on top of it - another reason to apply the RCU core patch..
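
With RCU the udelay() loop goes away completely.  The usual update
pattern would look roughly like this - sketch only, pool_desc and
build_new_pools() are made-up names, and it assumes the pool arrays hang
off a single pools pointer plus the synchronize_kernel() interface from
the RCU core patch:

	struct pool_desc *new, *old;

	new = build_new_pools();	/* set up a complete new copy */

	spin_lock(&pool_lock);
	old = pools;
	wmb();				/* new copy visible before the pointer flips */
	pools = new;
	spin_unlock(&pool_lock);

	/* wait until no CPU can still be walking the old copy */
	synchronize_kernel();
	kfree(old);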

+void pooldata_unlock(void)
+{
+	atomic_dec(&pool_lock);
+}

Ditto for spin_unlock.

+	/* avoid deadlock by timer interrupt on own cpu */
+	if (atomic_read(&pool_lock)) return;

spin_trylock..
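
I.e. (sketch):

	/* the updater might be interrupted on this cpu while holding
	 * the lock, so never spin here - just skip this balancing run
	 */
	if (!spin_trylock(&pool_lock))
		return;
	/* ... use the pool data ... */
	spin_unlock(&pool_lock);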

All in all your code doesn't seem to be very cacheline-friendly,
lots of global bouncing.  Do you have any numbers on what your
patch changes for normal SMP configurations?
