Date: Fri, 26 May 2017 10:16:19 +0900
From: Masami Hiramatsu <mhiramat@kernel.org>
To: paulmck@linux.vnet.ibm.com
Cc: Ingo Molnar, Masami Hiramatsu, Steven Rostedt, linux-kernel@vger.kernel.org,
	Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner,
	"H. Peter Anvin"
Subject: Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
Message-Id: <20170526101619.57d2c23de9ba10e31ea190e8@kernel.org>
In-Reply-To: <20170525151401.GP3956@linux.vnet.ibm.com>
References: <149562719270.15375.4565081030740506940.stgit@devbox>
	<20170525061555.m2ihe2dvjyedyzwn@gmail.com>
	<20170525151401.GP3956@linux.vnet.ibm.com>

On Thu, 25 May 2017 08:14:01 -0700
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:

> On Thu, May 25, 2017 at 08:15:55AM +0200, Ingo Molnar wrote:
> > 
> > * Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > 
> > > --- a/kernel/kprobes.c
> > > +++ b/kernel/kprobes.c
> > > @@ -377,6 +377,23 @@ static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
> > >  static bool kprobes_allow_optimization;
> > >  
> > >  /*
> > > + * Synchronizing wait on trampoline code for interrupted tasks/threads.
> > > + * Since the threads running on dynamically allocated trampoline code
> > > + * can be interrupted, kprobes has to wait until those tasks get back
> > > + * on track and are rescheduled. If the kernel is preemptible, a thread
> > > + * can also be preempted by other tasks while on the trampoline. In that
> > > + * case, this calls synchronize_rcu_tasks() to wait for those tasks to
> > > + * get back on track.
> > > + */
> > > +static void synchronize_on_trampoline(void)
> > > +{
> > > +#ifdef CONFIG_PREEMPT
> > > +	synchronize_rcu_tasks();
> > > +#else
> > > +	synchronize_sched();
> > > +#endif
> > > +}
> > 
> > So that's really unacceptably ugly.
> > 
> > Paul, I still question the need to have tasks-RCU as a Kconfig distinction, 
> > _especially_ if its API usage results in such ugly secondary #ifdefs...
> > 
> > Why isn't there a single synchronize_rcu_tasks() API function, which does what is 
> > expected, where the _RCU_ code figures out how to implement it?
> > 
> > I.e.:
> > 
> >  - There should be no user configurable TASKS_RCU Kconfig setting - at most a 
> >    helper Kconfig that is automatically selected by the RCU code itself.
> > 
> >  - Both ftrace and kprobes would use a simple synchronize_rcu_tasks() call.
> > 
> > Thanks,
> 
> How about the following (untested) patch?

This patch looks good to me, at least for kprobes.

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>

Steve, how about the ftrace usage?

Thank you,

> 
> This is against -rcu's rcu/dev branch, FWIW.
> 
> And I am also queuing patches with other cleanups, including do_exit(),
> enabled by this approach.
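
By the way, with your patch applied, the kprobes side can drop the #ifdef
completely, as Ingo suggested. Below is a minimal (untested) sketch, assuming
nothing beyond your patch plus my RFC above; the helper could equally well be
removed and the call inlined at its single call site:

	/*
	 * Sketch: synchronize_rcu_tasks() is now always declared and maps
	 * to synchronize_sched() when !PREEMPT, so the wrapper no longer
	 * needs any config checks.
	 */
	static void synchronize_on_trampoline(void)
	{
		/* Wait for tasks interrupted or preempted on the trampoline. */
		synchronize_rcu_tasks();
	}
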
> 
> 							Thanx, Paul
> 
> ------------------------------------------------------------------------
> 
> commit 9ab47fba45cea06e223e524d392621b64c174720
> Author: Paul E. McKenney
> Date:   Thu May 25 08:05:00 2017 -0700
> 
>     rcu: Drive TASKS_RCU directly off of PREEMPT
>     
>     The actual use of TASKS_RCU is only when PREEMPT, otherwise RCU-sched
>     is used instead.  This commit therefore makes synchronize_rcu_tasks()
>     and call_rcu_tasks() available always, but mapped to synchronize_sched()
>     and call_rcu_sched(), respectively, when !PREEMPT.  This approach also
>     allows some #ifdefs to be removed from rcutorture.
>     
>     Reported-by: Ingo Molnar
>     Signed-off-by: Paul E. McKenney
> 
> ---
> 
>  include/linux/rcupdate.h |  6 ++++--
>  kernel/rcu/Kconfig       |  3 +--
>  kernel/rcu/rcutorture.c  | 17 +----------------
>  3 files changed, 6 insertions(+), 20 deletions(-)
> 
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index f816fc72b51e..c3f380befdd7 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -58,8 +58,6 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func);
>  void call_rcu_bh(struct rcu_head *head, rcu_callback_t func);
>  void call_rcu_sched(struct rcu_head *head, rcu_callback_t func);
>  void synchronize_sched(void);
> -void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
> -void synchronize_rcu_tasks(void);
>  void rcu_barrier_tasks(void);
>  
>  #ifdef CONFIG_PREEMPT_RCU
> @@ -176,10 +174,14 @@ extern struct srcu_struct tasks_rcu_exit_srcu;
>  		rcu_all_qs(); \
>  		rcu_note_voluntary_context_switch_lite(t); \
>  	} while (0)
> +void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
> +void synchronize_rcu_tasks(void);
>  #else /* #ifdef CONFIG_TASKS_RCU */
>  #define TASKS_RCU(x) do { } while (0)
>  #define rcu_note_voluntary_context_switch_lite(t)	do { } while (0)
>  #define rcu_note_voluntary_context_switch(t)		rcu_all_qs()
> +#define call_rcu_tasks call_rcu_sched
> +#define synchronize_rcu_tasks synchronize_sched
>  #endif /* #else #ifdef CONFIG_TASKS_RCU */
>  
>  /**
> diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
> index be90c945063f..9210379c0353 100644
> --- a/kernel/rcu/Kconfig
> +++ b/kernel/rcu/Kconfig
> @@ -69,8 +69,7 @@ config TREE_SRCU
>  	  This option selects the full-fledged version of SRCU.
>  
>  config TASKS_RCU
> -	bool
> -	default n
> +	def_bool PREEMPT
>  	select SRCU
>  	help
>  	  This option enables a task-based RCU implementation that uses
> diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
> index aedc8f2ad955..273032dc8f2d 100644
> --- a/kernel/rcu/rcutorture.c
> +++ b/kernel/rcu/rcutorture.c
> @@ -659,8 +659,6 @@ static struct rcu_torture_ops sched_ops = {
>  	.name		= "sched"
>  };
>  
> -#ifdef CONFIG_TASKS_RCU
> -
>  /*
>   * Definitions for RCU-tasks torture testing.
>   */
> @@ -698,24 +696,11 @@ static struct rcu_torture_ops tasks_ops = {
>  	.name		= "tasks"
>  };
>  
> -#define RCUTORTURE_TASKS_OPS &tasks_ops,
> -
>  static bool __maybe_unused torturing_tasks(void)
>  {
>  	return cur_ops == &tasks_ops;
>  }
>  
> -#else /* #ifdef CONFIG_TASKS_RCU */
> -
> -#define RCUTORTURE_TASKS_OPS
> -
> -static bool __maybe_unused torturing_tasks(void)
> -{
> -	return false;
> -}
> -
> -#endif /* #else #ifdef CONFIG_TASKS_RCU */
> -
>  /*
>   * RCU torture priority-boost testing.  Runs one real-time thread per
>   * CPU for moderate bursts, repeatedly registering RCU callbacks and
> @@ -1712,7 +1697,7 @@ rcu_torture_init(void)
>  	int firsterr = 0;
>  	static struct rcu_torture_ops *torture_ops[] = {
>  		&rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &srcud_ops,
> -		&sched_ops, RCUTORTURE_TASKS_OPS
> +		&sched_ops, &tasks_ops,
>  	};
>  
>  	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
> 

-- 
Masami Hiramatsu <mhiramat@kernel.org>
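
For completeness, the asynchronous side follows the same pattern after this
patch: call_rcu_tasks() is always declared and becomes call_rcu_sched() when
!PREEMPT. The sketch below is purely illustrative; the structure and helper
functions are made up for the example and are not part of the patch:

	#include <linux/kernel.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	/* Hypothetical trampoline holder, for illustration only. */
	struct trampoline_example {
		struct rcu_head rcu;
		void *insn_buf;
	};

	static void trampoline_example_free(struct rcu_head *head)
	{
		struct trampoline_example *tr =
			container_of(head, struct trampoline_example, rcu);

		kfree(tr->insn_buf);
		kfree(tr);
	}

	static void trampoline_example_release(struct trampoline_example *tr)
	{
		/*
		 * Defer the free until no task can still be running on the
		 * trampoline; under !PREEMPT this call is call_rcu_sched().
		 */
		call_rcu_tasks(&tr->rcu, trampoline_example_free);
	}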