From: Erich Focht <efocht@ess.nec.de>
To: Ingo Molnar <mingo@elte.hu>
Cc: linux-kernel@vger.kernel.org
Subject: [PATCH] scheduler: migration tasks startup
Date: Wed, 6 Mar 2002 11:32:46 +0100 (MET)
Message-ID: <Pine.LNX.4.21.0203061123270.2743-100000@sx6.ess.nec.de>
Hi,
we encountered problems with the initial distribution of the
migration_tasks across the CPUs: machines with 16 or more CPUs
sometimes fail to boot. Here is a fix which works reliably:
- all migration_tasks are started on CPU#0,
- migration_task #0 is used to migrate the others to their destination,
- synchronization with migration_mask isn't needed any more.
Thanks,
best regards,
Erich
diff -urN 2.5.6-pre1-fix/kernel/sched.c 2.5.6-pre1-fix2/kernel/sched.c
--- 2.5.6-pre1-fix/kernel/sched.c Thu Feb 28 19:14:29 2002
+++ 2.5.6-pre1-fix2/kernel/sched.c Wed Mar 6 11:28:59 2002
@@ -1557,8 +1557,9 @@
static volatile unsigned long migration_mask;
-static int migration_thread(void * unused)
+static int migration_thread(void * bind_cpu)
{
+ int cpu = cpu_logical_map((int) (long) bind_cpu);
struct sched_param param = { sched_priority: 99 };
runqueue_t *rq;
int ret;
@@ -1566,33 +1567,19 @@
daemonize();
sigfillset(&current->blocked);
set_fs(KERNEL_DS);
- ret = setscheduler(0, SCHED_FIFO, &param);
/*
- * We have to migrate manually - there is no migration thread
- * to do this for us yet :-)
- *
- * We use the following property of the Linux scheduler. At
- * this point no other task is running, so by keeping all
- * migration threads running, the load-balancer will distribute
- * them between all CPUs equally. At that point every migration
- * task binds itself to the current CPU.
+ * The first migration task is started on CPU #0. This one can migrate
+ * the tasks to their destination CPUs.
*/
-
- /* wait for all migration threads to start up. */
- while (!migration_mask)
- yield();
-
- for (;;) {
- preempt_disable();
- if (test_and_clear_bit(smp_processor_id(), &migration_mask))
- current->cpus_allowed = 1 << smp_processor_id();
- if (test_thread_flag(TIF_NEED_RESCHED))
- schedule();
- if (!migration_mask)
- break;
- preempt_enable();
+ if (cpu != 0) {
+ while (!cpu_rq(cpu_logical_map(0))->migration_thread)
+ yield();
+ set_cpus_allowed(current, 1UL << cpu);
}
+ printk("migration_task %d on cpu=%d\n",cpu,smp_processor_id());
+ ret = setscheduler(0, SCHED_FIFO, &param);
+
rq = this_rq();
rq->migration_thread = current;
preempt_enable();
@@ -1651,17 +1638,15 @@
{
int cpu;
+ current->cpus_allowed = 1UL << cpu_logical_map(0);
for (cpu = 0; cpu < smp_num_cpus; cpu++)
- if (kernel_thread(migration_thread, NULL,
+ if (kernel_thread(migration_thread, (void *) (long) cpu,
CLONE_FS | CLONE_FILES | CLONE_SIGNAL) < 0)
BUG();
-
- migration_mask = (1 << smp_num_cpus) - 1;
+ current->cpus_allowed = -1L;
for (cpu = 0; cpu < smp_num_cpus; cpu++)
while (!cpu_rq(cpu)->migration_thread)
schedule_timeout(2);
- if (migration_mask)
- BUG();
}
#endif
Thread overview: 3+ messages
2002-03-06 10:32 Erich Focht [this message]
2002-03-07 9:24 ` [PATCH] scheduler: migration tasks startup Anton Blanchard
2002-03-07 12:42 ` Erich Focht