* [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
@ 2017-05-24 12:00 Masami Hiramatsu
  2017-05-24 14:47 ` Paul E. McKenney
  2017-05-25  6:15 ` Ingo Molnar
  0 siblings, 2 replies; 8+ messages in thread

From: Masami Hiramatsu @ 2017-05-24 12:00 UTC (permalink / raw)
To: Ingo Molnar, Paul E. McKenney, Steven Rostedt
Cc: Masami Hiramatsu, linux-kernel, Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner, H. Peter Anvin

To enable the jump-optimized probe with CONFIG_PREEMPT, use
synchronize_rcu_tasks() to wait until all tasks preempted on the
trampoline code are back on track.

Since a jump-optimized kprobe replaces multiple instructions, there
can be tasks that were preempted on the 2nd (or 3rd) instruction.
If the kprobe replaces those instructions with a jump instruction,
then when such a task returns to the preempted place, it lands in
the middle of the jump instruction, which causes a kernel panic.

To avoid such tragedies in advance, the kprobe optimizer first
prepares a detour route using a normal kprobe (e.g. an int3
breakpoint on x86), and waits for the tasks interrupted at such a
place by using synchronize_sched() when CONFIG_PREEMPT=n.

With CONFIG_PREEMPT=y, things are more complicated, because such an
interrupted thread can be preempted (another thread can be scheduled
in from the interrupt handler). So the kprobes optimizer has to wait
until those tasks are scheduled normally. In this case we can use
synchronize_rcu_tasks(), which ensures that all preempted tasks have
been scheduled back on track.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/Kconfig     |  2 +-
 kernel/kprobes.c | 23 ++++++++++++++++++++++-
 2 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 6c00e5b..2abb8de 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -90,7 +90,7 @@ config STATIC_KEYS_SELFTEST
 config OPTPROBES
 	def_bool y
 	depends on KPROBES && HAVE_OPTPROBES
-	depends on !PREEMPT
+	select TASKS_RCU if PREEMPT
 
 config KPROBES_ON_FTRACE
 	def_bool y
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 9f60567..6d69074 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -377,6 +377,23 @@ static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
 static bool kprobes_allow_optimization;
 
 /*
+ * Synchronizing wait on trampoline code for interrupted tasks/threads.
+ * Since threads running on dynamically allocated trampoline code can
+ * be interrupted, kprobes has to wait for those tasks to get back on
+ * track and be scheduled. If the kernel is preemptive, the thread can
+ * also be preempted by other tasks on the trampoline. For such a case,
+ * this calls synchronize_rcu_tasks() to wait for those tasks as well.
+ */
+static void synchronize_on_trampoline(void)
+{
+#ifdef CONFIG_PREEMPT
+	synchronize_rcu_tasks();
+#else
+	synchronize_sched();
+#endif
+}
+
+/*
  * Call all pre_handler on the list, but ignores its return value.
  * This must be called from arch-dep optimized caller.
  */
@@ -578,8 +595,12 @@ static void kprobe_optimizer(struct work_struct *work)
	 * there is a chance that Nth instruction is interrupted. In that
	 * case, running interrupt can return to 2nd-Nth byte of jump
	 * instruction. This wait is for avoiding it.
+	 * With CONFIG_PREEMPT, an interrupt can lead to preemption. To wait
+	 * for such threads, we use synchronize_rcu_tasks(), which ensures
+	 * all preempted tasks are scheduled normally. So we can ensure
+	 * there are no threads running there.
	 */
-	synchronize_sched();
+	synchronize_on_trampoline();
 
	/* Step 3: Optimize kprobes after quiescence period */
	do_optimize_kprobes();
* Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
@ 2017-05-24 14:47 ` Paul E. McKenney
  1 sibling, 0 replies; 8+ messages in thread

From: Paul E. McKenney @ 2017-05-24 14:47 UTC (permalink / raw)
To: Masami Hiramatsu
Cc: Ingo Molnar, Steven Rostedt, linux-kernel, Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner, H. Peter Anvin

On Wed, May 24, 2017 at 09:00:03PM +0900, Masami Hiramatsu wrote:
> To enable jump optimized probe with CONFIG_PREEMPT, use
> synchronize_rcu_tasks() to wait for all tasks preempted
> on trampoline code back on track.
[ full patch quoted ]
>
> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
@ 2017-05-25  6:15 ` Ingo Molnar
  2017-05-25  9:04 ` Masami Hiramatsu
  2017-05-25 15:14 ` Paul E. McKenney
  1 sibling, 2 replies; 8+ messages in thread

From: Ingo Molnar @ 2017-05-25 6:15 UTC (permalink / raw)
To: Masami Hiramatsu
Cc: Paul E. McKenney, Steven Rostedt, linux-kernel, Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner, H. Peter Anvin

* Masami Hiramatsu <mhiramat@kernel.org> wrote:

> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -377,6 +377,23 @@ static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
[...]
> +static void synchronize_on_trampoline(void)
> +{
> +#ifdef CONFIG_PREEMPT
> +	synchronize_rcu_tasks();
> +#else
> +	synchronize_sched();
> +#endif
> +}

So that's really unacceptably ugly.

Paul, I still question the need to have tasks-RCU as a Kconfig distinction,
_especially_ if its API usage results in such ugly secondary #ifdefs...

Why isn't there a single synchronize_rcu_tasks() API function, which does
what is expected, where the _RCU_ code figures out how to implement it?

I.e.:

 - There should be no user configurable TASKS_RCU Kconfig setting - at most
   a helper Kconfig that is automatically selected by the RCU code itself.

 - Both ftrace and kprobes would use a simple synchronize_rcu_tasks() call.

Thanks,

	Ingo
* Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
@ 2017-05-25  9:04 ` Masami Hiramatsu
  0 siblings, 0 replies; 8+ messages in thread

From: Masami Hiramatsu @ 2017-05-25 9:04 UTC (permalink / raw)
To: Ingo Molnar
Cc: Paul E. McKenney, Steven Rostedt, linux-kernel, Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner, H. Peter Anvin

On Thu, 25 May 2017 08:15:55 +0200
Ingo Molnar <mingo@kernel.org> wrote:

> So that's really unacceptably ugly.
>
> Paul, I still question the need to have tasks-RCU as a Kconfig distinction,
> _especially_ if its API usage results in such ugly secondary #ifdefs...
>
> Why isn't there a single synchronize_rcu_tasks() API function, which does
> what is expected, where the _RCU_ code figures out how to implement it?

Hmm, if there are only 3 users (kprobes/ftrace/kpatch), and those all use
it for the same purpose (waiting for tasks which were preempted or
interrupted), maybe we can switch the implementation of
synchronize_rcu_tasks() at the RCU level.

> I.e.:
>
>  - There should be no user configurable TASKS_RCU Kconfig setting - at most
>    a helper Kconfig that is automatically selected by the RCU code itself.

The TASKS_RCU kconfig is already a hidden setting. For kprobes, it is
selected if CONFIG_PREEMPT=y && CONFIG_KPROBES=y && HAVE_OPTPROBES=y.

Thank you,

>  - Both ftrace and kprobes would use a simple synchronize_rcu_tasks() call.

-- 
Masami Hiramatsu <mhiramat@kernel.org>
* Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
@ 2017-05-25 15:14 ` Paul E. McKenney
  2017-05-26  1:16 ` Masami Hiramatsu
  2017-05-26  6:10 ` Ingo Molnar
  1 sibling, 2 replies; 8+ messages in thread

From: Paul E. McKenney @ 2017-05-25 15:14 UTC (permalink / raw)
To: Ingo Molnar
Cc: Masami Hiramatsu, Steven Rostedt, linux-kernel, Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner, H. Peter Anvin

On Thu, May 25, 2017 at 08:15:55AM +0200, Ingo Molnar wrote:
> Why isn't there a single synchronize_rcu_tasks() API function, which does
> what is expected, where the _RCU_ code figures out how to implement it?
>
> I.e.:
>
>  - There should be no user configurable TASKS_RCU Kconfig setting - at most
>    a helper Kconfig that is automatically selected by the RCU code itself.
>
>  - Both ftrace and kprobes would use a simple synchronize_rcu_tasks() call.

How about the following (untested) patch?

This is against -rcu's rcu/dev branch, FWIW.

And I am also queuing patches with other cleanups, including do_exit(),
enabled by this approach.

							Thanx, Paul

------------------------------------------------------------------------

commit 9ab47fba45cea06e223e524d392621b64c174720
Author: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Date:   Thu May 25 08:05:00 2017 -0700

    rcu: Drive TASKS_RCU directly off of PREEMPT
    
    The actual use of TASKS_RCU is only when PREEMPT, otherwise RCU-sched
    is used instead.  This commit therefore makes synchronize_rcu_tasks()
    and call_rcu_tasks() available always, but mapped to synchronize_sched()
    and call_rcu_sched(), respectively, when !PREEMPT.  This approach also
    allows some #ifdefs to be removed from rcutorture.
    
    Reported-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

---
 include/linux/rcupdate.h |  6 ++++--
 kernel/rcu/Kconfig       |  3 +--
 kernel/rcu/rcutorture.c  | 17 +----------------
 3 files changed, 6 insertions(+), 20 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index f816fc72b51e..c3f380befdd7 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -58,8 +58,6 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func);
 void call_rcu_bh(struct rcu_head *head, rcu_callback_t func);
 void call_rcu_sched(struct rcu_head *head, rcu_callback_t func);
 void synchronize_sched(void);
-void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
-void synchronize_rcu_tasks(void);
 void rcu_barrier_tasks(void);
 
 #ifdef CONFIG_PREEMPT_RCU
@@ -176,10 +174,14 @@ extern struct srcu_struct tasks_rcu_exit_srcu;
 		rcu_all_qs(); \
 		rcu_note_voluntary_context_switch_lite(t); \
 	} while (0)
+void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
+void synchronize_rcu_tasks(void);
 #else /* #ifdef CONFIG_TASKS_RCU */
 #define TASKS_RCU(x) do { } while (0)
 #define rcu_note_voluntary_context_switch_lite(t)	do { } while (0)
 #define rcu_note_voluntary_context_switch(t)		rcu_all_qs()
+#define call_rcu_tasks call_rcu_sched
+#define synchronize_rcu_tasks synchronize_sched
 #endif /* #else #ifdef CONFIG_TASKS_RCU */
 
 /**
diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
index be90c945063f..9210379c0353 100644
--- a/kernel/rcu/Kconfig
+++ b/kernel/rcu/Kconfig
@@ -69,8 +69,7 @@ config TREE_SRCU
 	  This option selects the full-fledged version of SRCU.
 
 config TASKS_RCU
-	bool
-	default n
+	def_bool PREEMPT
 	select SRCU
 	help
 	  This option enables a task-based RCU implementation that uses
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index aedc8f2ad955..273032dc8f2d 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -659,8 +659,6 @@ static struct rcu_torture_ops sched_ops = {
 	.name		= "sched"
 };
 
-#ifdef CONFIG_TASKS_RCU
-
 /*
  * Definitions for RCU-tasks torture testing.
  */
@@ -698,24 +696,11 @@ static struct rcu_torture_ops tasks_ops = {
 	.name		= "tasks"
 };
 
-#define RCUTORTURE_TASKS_OPS &tasks_ops,
-
 static bool __maybe_unused torturing_tasks(void)
 {
 	return cur_ops == &tasks_ops;
 }
 
-#else /* #ifdef CONFIG_TASKS_RCU */
-
-#define RCUTORTURE_TASKS_OPS
-
-static bool __maybe_unused torturing_tasks(void)
-{
-	return false;
-}
-
-#endif /* #else #ifdef CONFIG_TASKS_RCU */
-
 /*
  * RCU torture priority-boost testing.  Runs one real-time thread per
  * CPU for moderate bursts, repeatedly registering RCU callbacks and
@@ -1712,7 +1697,7 @@ rcu_torture_init(void)
 	int firsterr = 0;
 	static struct rcu_torture_ops *torture_ops[] = {
 		&rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &srcud_ops,
-		&sched_ops, RCUTORTURE_TASKS_OPS
+		&sched_ops, &tasks_ops,
 	};
 
 	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
* Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
@ 2017-05-26  1:16 ` Masami Hiramatsu
  0 siblings, 0 replies; 8+ messages in thread

From: Masami Hiramatsu @ 2017-05-26 1:16 UTC (permalink / raw)
To: paulmck
Cc: Ingo Molnar, Masami Hiramatsu, Steven Rostedt, linux-kernel, Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner, H. Peter Anvin

On Thu, 25 May 2017 08:14:01 -0700
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:

> How about the following (untested) patch?
[ full "rcu: Drive TASKS_RCU directly off of PREEMPT" patch quoted ]

This patch looks good to me, for kprobes at least.

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>

Steve, how about the ftrace usage?

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>
* Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
@ 2017-05-26  6:10 ` Ingo Molnar
  2017-05-26 15:43 ` Paul E. McKenney
  1 sibling, 1 reply; 8+ messages in thread

From: Ingo Molnar @ 2017-05-26 6:10 UTC (permalink / raw)
To: Paul E. McKenney
Cc: Masami Hiramatsu, Steven Rostedt, linux-kernel, Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner, H. Peter Anvin

* Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:

> How about the following (untested) patch?

Looks good to me!

Acked-by: Ingo Molnar <mingo@kernel.org>

Thanks,

	Ingo
* Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT
@ 2017-05-26 15:43 ` Paul E. McKenney
  0 siblings, 0 replies; 8+ messages in thread

From: Paul E. McKenney @ 2017-05-26 15:43 UTC (permalink / raw)
To: Ingo Molnar
Cc: Masami Hiramatsu, Steven Rostedt, linux-kernel, Peter Zijlstra, Ananth N Mavinakayanahalli, Thomas Gleixner, H. Peter Anvin

On Fri, May 26, 2017 at 08:10:25AM +0200, Ingo Molnar wrote:
> > How about the following (untested) patch?
>
> Looks good to me!
>
> Acked-by: Ingo Molnar <mingo@kernel.org>

Very good, I have applied Masami's Reviewed-by and your Acked-by,
thank you both!

							Thanx, Paul