From: "tip-bot2 for Peter Zijlstra" <tip-bot2@linutronix.de>
To: linux-tip-commits@vger.kernel.org
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>,
Ingo Molnar <mingo@kernel.org>, x86 <x86@kernel.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: [tip: sched/core] smp: Optimize send_call_function_single_ipi()
Date: Mon, 01 Jun 2020 09:52:20 -0000 [thread overview]
Message-ID: <159100514006.17951.5633093083657745774.tip-bot2@tip-bot2> (raw)
In-Reply-To: <20200526161907.953304789@infradead.org>
The following commit has been merged into the sched/core branch of tip:
Commit-ID: b2a02fc43a1f40ef4eb2fb2b06357382608d4d84
Gitweb: https://git.kernel.org/tip/b2a02fc43a1f40ef4eb2fb2b06357382608d4d84
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Tue, 26 May 2020 18:11:01 +02:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Thu, 28 May 2020 10:54:15 +02:00
smp: Optimize send_call_function_single_ipi()
Just like the ttwu_queue_remote() IPI, make use of _TIF_POLLING_NRFLAG
to avoid sending IPIs to idle CPUs.
[ mingo: Fix UP build bug. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200526161907.953304789@infradead.org
---
 kernel/sched/core.c  | 10 ++++++++++
 kernel/sched/idle.c  |  5 +++++
 kernel/sched/sched.h |  7 ++++---
 kernel/smp.c         | 16 +++++++++++++++-
 4 files changed, 34 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2cacc1e..fa0d499 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2296,6 +2296,16 @@ static void wake_csd_func(void *info)
 	sched_ttwu_pending();
 }
 
+void send_call_function_single_ipi(int cpu)
+{
+	struct rq *rq = cpu_rq(cpu);
+
+	if (!set_nr_if_polling(rq->idle))
+		arch_send_call_function_single_ipi(cpu);
+	else
+		trace_sched_wake_idle_without_ipi(cpu);
+}
+
 /*
  * Queue a task on the target CPUs wake_list and wake the CPU via IPI if
  * necessary. The wakee CPU on receipt of the IPI will queue the task
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index b743bf3..387fd75 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -289,6 +289,11 @@ static void do_idle(void)
 	 */
 	smp_mb__after_atomic();
 
+	/*
+	 * RCU relies on this call to be done outside of an RCU read-side
+	 * critical section.
+	 */
+	flush_smp_call_function_from_idle();
 	sched_ttwu_pending();
 	schedule_idle();
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3c163cb..75b0629 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1506,11 +1506,12 @@ static inline void unregister_sched_domain_sysctl(void)
 }
 #endif
 
-#else
-
+extern void flush_smp_call_function_from_idle(void);
+
+#else /* !CONFIG_SMP: */
+static inline void flush_smp_call_function_from_idle(void) { }
 static inline void sched_ttwu_pending(void) { }
-
-#endif /* CONFIG_SMP */
+#endif
 
 #include "stats.h"
 #include "autogroup.h"
diff --git a/kernel/smp.c b/kernel/smp.c
index f720e38..9f11813 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -135,6 +135,8 @@ static __always_inline void csd_unlock(call_single_data_t *csd)
 
 static DEFINE_PER_CPU_SHARED_ALIGNED(call_single_data_t, csd_data);
 
+extern void send_call_function_single_ipi(int cpu);
+
 /*
  * Insert a previously allocated call_single_data_t element
  * for execution on the given CPU. data must already have
@@ -178,7 +180,7 @@ static int generic_exec_single(int cpu, call_single_data_t *csd,
 	 * equipped to do the right thing...
 	 */
 	if (llist_add(&csd->llist, &per_cpu(call_single_queue, cpu)))
-		arch_send_call_function_single_ipi(cpu);
+		send_call_function_single_ipi(cpu);
 
 	return 0;
 }
@@ -278,6 +280,18 @@ static void flush_smp_call_function_queue(bool warn_cpu_offline)
 	}
 }
 
+void flush_smp_call_function_from_idle(void)
+{
+	unsigned long flags;
+
+	if (llist_empty(this_cpu_ptr(&call_single_queue)))
+		return;
+
+	local_irq_save(flags);
+	flush_smp_call_function_queue(true);
+	local_irq_restore(flags);
+}
+
 /*
  * smp_call_function_single - Run a function on a specific CPU
  * @func: The function to run. This must be fast and non-blocking.
Thread overview: 62+ messages
2020-05-26 16:10 [RFC][PATCH 0/7] Fix the scheduler-IPI mess Peter Zijlstra
2020-05-26 16:10 ` [RFC][PATCH 1/7] sched: Fix smp_call_function_single_async() usage for ILB Peter Zijlstra
2020-05-26 23:56 ` Frederic Weisbecker
2020-05-27 10:23 ` Vincent Guittot
2020-05-27 11:28 ` Frederic Weisbecker
2020-05-27 12:07 ` Vincent Guittot
2020-05-29 15:26 ` Valentin Schneider
2020-06-01 9:52 ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2020-06-01 11:40 ` Frederic Weisbecker
2020-05-26 16:10 ` [RFC][PATCH 2/7] smp: Optimize flush_smp_call_function_queue() Peter Zijlstra
2020-05-28 12:28 ` Frederic Weisbecker
2020-06-01 9:52 ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2020-05-26 16:11 ` [RFC][PATCH 3/7] smp: Move irq_work_run() out of flush_smp_call_function_queue() Peter Zijlstra
2020-05-29 13:04 ` Frederic Weisbecker
2020-06-01 9:52 ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2020-05-26 16:11 ` [RFC][PATCH 4/7] smp: Optimize send_call_function_single_ipi() Peter Zijlstra
2020-05-27 9:56 ` Peter Zijlstra
2020-05-27 10:15 ` Peter Zijlstra
2020-05-27 15:56 ` Paul E. McKenney
2020-05-27 16:35 ` Peter Zijlstra
2020-05-27 17:12 ` Peter Zijlstra
2020-05-27 19:39 ` Paul E. McKenney
2020-05-28 1:35 ` Joel Fernandes
2020-05-28 8:59 ` [tip: core/rcu] rcu: Allow for smp_call_function() running callbacks from idle tip-bot2 for Peter Zijlstra
2021-01-21 16:56 ` [RFC][PATCH 4/7] smp: Optimize send_call_function_single_ipi() Peter Zijlstra
2021-01-22 0:20 ` Paul E. McKenney
2021-01-22 8:31 ` Peter Zijlstra
2021-01-22 15:35 ` Paul E. McKenney
2020-05-29 13:01 ` Frederic Weisbecker
2020-06-01 9:52 ` tip-bot2 for Peter Zijlstra [this message]
2020-05-26 16:11 ` [RFC][PATCH 5/7] irq_work, smp: Allow irq_work on call_single_queue Peter Zijlstra
2020-05-28 23:40 ` Frederic Weisbecker
2020-05-29 13:36 ` Peter Zijlstra
2020-06-05 9:37 ` Peter Zijlstra
2020-06-05 15:02 ` Frederic Weisbecker
2020-06-05 16:17 ` Peter Zijlstra
2020-06-05 15:24 ` Kees Cook
2020-06-10 13:24 ` Frederic Weisbecker
2020-06-01 9:52 ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2020-05-26 16:11 ` [RFC][PATCH 6/7] sched: Add rq::ttwu_pending Peter Zijlstra
2020-06-01 9:52 ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2020-05-26 16:11 ` [RFC][PATCH 7/7] sched: Replace rq::wake_list Peter Zijlstra
2020-05-29 15:10 ` Valdis Klētnieks
2020-06-01 9:52 ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2020-06-02 15:16 ` Frederic Weisbecker
2020-06-04 14:18 ` [RFC][PATCH 7/7] " Guenter Roeck
2020-06-05 0:24 ` Eric Biggers
2020-06-05 7:41 ` Peter Zijlstra
2020-06-05 16:15 ` Eric Biggers
2020-06-06 23:13 ` Guenter Roeck
2020-06-09 20:21 ` Eric Biggers
2020-06-09 21:25 ` Guenter Roeck
2020-06-09 21:38 ` Eric Biggers
2020-06-09 22:06 ` Peter Zijlstra
2020-06-09 23:03 ` Guenter Roeck
2020-06-10 9:09 ` Peter Zijlstra
2020-06-18 17:57 ` Steven Rostedt
2020-06-18 19:06 ` Guenter Roeck
2020-06-09 22:07 ` Peter Zijlstra
2020-06-05 8:10 ` Peter Zijlstra
2020-06-05 13:33 ` Guenter Roeck
2020-06-05 14:09 ` Peter Zijlstra