Date: Thu, 10 Mar 2022 12:07:05 -0800
From: "Paul E. McKenney"
To: Frederic Weisbecker
Cc: LKML, Peter Zijlstra, Phil Auld, Alex Belits, Nicolas Saenz Julienne,
	Xiongfeng Wang, Neeraj Upadhyay, Thomas Gleixner, Yu Liao, Boqun Feng,
	Marcelo Tosatti, Paul Gortmaker, Uladzislau Rezki, Joel Fernandes
Subject: Re: [PATCH 14/19] rcu/context-tracking: Move RCU-dynticks internal functions to context_tracking
Message-ID: <20220310200705.GA4285@paulmck-ThinkPad-P17-Gen-1>
Reply-To: paulmck@kernel.org
References: <20220302154810.42308-1-frederic@kernel.org> <20220302154810.42308-15-frederic@kernel.org>
In-Reply-To: <20220302154810.42308-15-frederic@kernel.org>

On Wed, Mar 02, 2022 at 04:48:05PM +0100, Frederic Weisbecker wrote:
> Move the core RCU eqs/dynticks functions to context tracking so that
> we can later merge all that code within context tracking.
>
> Signed-off-by: Frederic Weisbecker

I am not sure that you want rcu_dynticks_task_enter() and friends in
context tracking, but I have no objection to them living there.  ;-)

Acked-by: Paul E. McKenney
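For anyone following along at home, the invariant that rcu_dynticks_eqs_enter()
and friends maintain is simple: the per-CPU ->dynticks counter is odd while RCU
is watching and even while the CPU sits in an extended quiescent state.  A
user-space toy model of that protocol (illustration only, made-up names, not
the code being moved):

#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_ulong dynticks = 1;		/* odd: "RCU is watching" at boot */

static unsigned long dynticks_inc(int incby)	/* arch_atomic_add_return() stand-in */
{
	return atomic_fetch_add(&dynticks, incby) + incby;
}

static bool curr_cpu_in_eqs(void)		/* no ordering: CPU-local sample */
{
	return !(atomic_load_explicit(&dynticks, memory_order_relaxed) & 0x1);
}

static void eqs_enter(void)			/* idle or userspace entry */
{
	unsigned long seq = dynticks_inc(1);

	assert(!(seq & 0x1));			/* now even: RCU no longer watching */
}

static void eqs_exit(void)			/* idle or userspace exit */
{
	unsigned long seq = dynticks_inc(1);

	assert(seq & 0x1);			/* now odd: RCU watching again */
}

int main(void)
{
	assert(!curr_cpu_in_eqs());
	eqs_enter();
	assert(curr_cpu_in_eqs());
	eqs_exit();
	assert(!curr_cpu_in_eqs());
	return 0;
}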
> Cc: Paul E. McKenney
> Cc: Peter Zijlstra
> Cc: Thomas Gleixner
> Cc: Neeraj Upadhyay
> Cc: Uladzislau Rezki
> Cc: Joel Fernandes
> Cc: Boqun Feng
> Cc: Nicolas Saenz Julienne
> Cc: Marcelo Tosatti
> Cc: Xiongfeng Wang
> Cc: Yu Liao
> Cc: Phil Auld
> Cc: Paul Gortmaker
> Cc: Alex Belits
> ---
>  include/linux/context_tracking.h | 12 ++
>  include/linux/rcutree.h          |   3 +
>  kernel/context_tracking.c        | 347 +++++++++++++++++++++++++++++++
>  kernel/rcu/tree.c                | 326 +----------------------------
>  kernel/rcu/tree.h                |   5 -
>  kernel/rcu/tree_plugin.h         |  36 +---
>  6 files changed, 366 insertions(+), 363 deletions(-)
>
> diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
> index 52a2e23d5107..086546569d14 100644
> --- a/include/linux/context_tracking.h
> +++ b/include/linux/context_tracking.h
> @@ -122,6 +122,18 @@ static inline void context_tracking_init(void) { }
>  #ifdef CONFIG_CONTEXT_TRACKING
>  extern void ct_idle_enter(void);
>  extern void ct_idle_exit(void);
> +extern unsigned long rcu_dynticks_inc(int incby);
> +
> +/*
> + * Is the current CPU in an extended quiescent state?
> + *
> + * No ordering, as we are sampling CPU-local information.
> + */
> +static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void)
> +{
> +	return !(arch_atomic_read(this_cpu_ptr(&context_tracking.dynticks)) & 0x1);
> +}
> +
>  #else
>  static inline void ct_idle_enter(void) { }
>  static inline void ct_idle_exit(void) { }
> diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
> index 6d111a3c0cc0..408435ff7a06 100644
> --- a/include/linux/rcutree.h
> +++ b/include/linux/rcutree.h
> @@ -59,6 +59,9 @@ void rcu_irq_exit_check_preempt(void);
>  static inline void rcu_irq_exit_check_preempt(void) { }
>  #endif
>
> +struct task_struct;
> +void rcu_preempt_deferred_qs(struct task_struct *t);
> +
>  void exit_rcu(void);
>
>  void rcu_scheduler_starting(void);
> diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
> index 7be7a2044d3a..dc24a9782bbd 100644
> --- a/kernel/context_tracking.c
> +++ b/kernel/context_tracking.c
> @@ -21,6 +21,353 @@
>  #include
>  #include
>  #include
> +#include
> +
> +#define TPS(x) tracepoint_string(x)
> +
> +/* Record the current task on dyntick-idle entry. */
> +static __always_inline void rcu_dynticks_task_enter(void)
> +{
> +#if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL)
> +	WRITE_ONCE(current->rcu_tasks_idle_cpu, smp_processor_id());
> +#endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */
> +}
> +
> +/* Record no current task on dyntick-idle exit. */
> +static __always_inline void rcu_dynticks_task_exit(void)
> +{
> +#if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL)
> +	WRITE_ONCE(current->rcu_tasks_idle_cpu, -1);
> +#endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */
> +}
> +
> +/* Turn on heavyweight RCU tasks trace readers on idle/user entry. */
> +static __always_inline void rcu_dynticks_task_trace_enter(void)
> +{
> +#ifdef CONFIG_TASKS_TRACE_RCU
> +	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
> +		current->trc_reader_special.b.need_mb = true;
> +#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
> +}
> +
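Side note on the need_mb bit: with CONFIG_TASKS_TRACE_RCU_READ_MB, flipping
->trc_reader_special.b.need_mb on idle/user entry is what pushes this task's
RCU Tasks Trace readers onto the barrier-based slow path while the CPU is out
of the kernel, as the "heavyweight readers" comments say.  A rough user-space
sketch of that "a flag flips readers into a heavyweight mode" idea (names
invented, not the actual reader implementation):

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic bool need_mb;		/* per-task flag in the real code */
static _Atomic int reader_nesting;	/* stand-in for the read-side nesting count */

static void trace_reader_enter(void)
{
	atomic_fetch_add_explicit(&reader_nesting, 1, memory_order_relaxed);
	if (atomic_load_explicit(&need_mb, memory_order_relaxed))
		atomic_thread_fence(memory_order_seq_cst);	/* heavyweight path */
}

static void trace_reader_exit(void)
{
	if (atomic_load_explicit(&need_mb, memory_order_relaxed))
		atomic_thread_fence(memory_order_seq_cst);
	atomic_fetch_sub_explicit(&reader_nesting, 1, memory_order_relaxed);
}

int main(void)
{
	/* Arm heavyweight mode on the way toward idle/user (mirrors
	 * rcu_dynticks_task_trace_enter()); any reader that still runs
	 * afterward takes the fence path.  Disarm on the way back. */
	atomic_store_explicit(&need_mb, true, memory_order_relaxed);
	trace_reader_enter();
	trace_reader_exit();
	atomic_store_explicit(&need_mb, false, memory_order_relaxed);
	return 0;
}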
> +/* Turn off heavyweight RCU tasks trace readers on idle/user exit. */
> +static __always_inline void rcu_dynticks_task_trace_exit(void)
> +{
> +#ifdef CONFIG_TASKS_TRACE_RCU
> +	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
> +		current->trc_reader_special.b.need_mb = false;
> +#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
> +}
> +
> +/*
> + * Increment the current CPU's context_tracking structure's ->dynticks field
> + * with ordering. Return the new value.
> + */
> +noinstr unsigned long rcu_dynticks_inc(int incby)
> +{
> +	return arch_atomic_add_return(incby, this_cpu_ptr(&context_tracking.dynticks));
> +}
> +
> +/*
> + * Record entry into an extended quiescent state. This is only to be
> + * called when not already in an extended quiescent state, that is,
> + * RCU is watching prior to the call to this function and is no longer
> + * watching upon return.
> + */
> +static noinstr void rcu_dynticks_eqs_enter(void)
> +{
> +	int seq;
> +
> +	/*
> +	 * CPUs seeing atomic_add_return() must see prior RCU read-side
> +	 * critical sections, and we also must force ordering with the
> +	 * next idle sojourn.
> +	 */
> +	rcu_dynticks_task_trace_enter();  // Before ->dynticks update!
> +	seq = rcu_dynticks_inc(1);
> +	// RCU is no longer watching. Better be in extended quiescent state!
> +	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && (seq & 0x1));
> +}
> +
> +/*
> + * Record exit from an extended quiescent state. This is only to be
> + * called from an extended quiescent state, that is, RCU is not watching
> + * prior to the call to this function and is watching upon return.
> + */
> +static noinstr void rcu_dynticks_eqs_exit(void)
> +{
> +	int seq;
> +
> +	/*
> +	 * CPUs seeing atomic_add_return() must see prior idle sojourns,
> +	 * and we also must force ordering with the next RCU read-side
> +	 * critical section.
> +	 */
> +	seq = rcu_dynticks_inc(1);
> +	// RCU is now watching. Better not be in an extended quiescent state!
> +	rcu_dynticks_task_trace_exit();  // After ->dynticks update!
> +	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !(seq & 0x1));
> +}
> +
> +/*
> + * Enter an RCU extended quiescent state, which can be either the
> + * idle loop or adaptive-tickless usermode execution.
> + *
> + * We crowbar the ->dynticks_nmi_nesting field to zero to allow for
> + * the possibility of usermode upcalls having messed up our count
> + * of interrupt nesting level during the prior busy period.
> + */
> +static noinstr void rcu_eqs_enter(bool user)
> +{
> +	struct context_tracking *ct = this_cpu_ptr(&context_tracking);
> +
> +	WARN_ON_ONCE(ct->dynticks_nmi_nesting != DYNTICK_IRQ_NONIDLE);
> +	WRITE_ONCE(ct->dynticks_nmi_nesting, 0);
> +	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
> +		     ct->dynticks_nesting == 0);
> +	if (ct->dynticks_nesting != 1) {
> +		// RCU will still be watching, so just do accounting and leave.
> +		ct->dynticks_nesting--;
> +		return;
> +	}
> +
> +	lockdep_assert_irqs_disabled();
> +	instrumentation_begin();
> +	trace_rcu_dyntick(TPS("Start"), ct->dynticks_nesting, 0, atomic_read(&ct->dynticks));
> +	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current));
> +	rcu_preempt_deferred_qs(current);
> +
> +	// instrumentation for the noinstr rcu_dynticks_eqs_enter()
> +	instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks));
> +
> +	instrumentation_end();
> +	WRITE_ONCE(ct->dynticks_nesting, 0); /* Avoid irq-access tearing. */
> +	// RCU is watching here ...
> +	rcu_dynticks_eqs_enter();
> +	// ... but is no longer watching here.
> +	rcu_dynticks_task_enter();
> +}
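While we are here, the two nesting counters deserve a one-paragraph summary:
->dynticks_nesting counts process-level EQS nesting (idle vs. adaptive-tickless
user), ->dynticks_nmi_nesting counts irq/NMI nesting, and rcu_eqs_enter()
deliberately crowbars the latter to zero because usermode upcalls may have left
it inconsistent.  Stripped of ordering, tracing, and instrumentation, the
bookkeeping reduces to something like this user-space toy (the sentinel value
is invented; the kernel uses DYNTICK_IRQ_NONIDLE):

#include <assert.h>
#include <stdbool.h>

#define IRQ_NONIDLE_SENTINEL	(1L << 30)	/* invented stand-in for DYNTICK_IRQ_NONIDLE */

static long dynticks_nesting = 1;		/* process-level nesting; 1 == not in EQS */
static long dynticks_nmi_nesting = IRQ_NONIDLE_SENTINEL;
static bool rcu_watching = true;

static void eqs_enter(void)			/* mirrors rcu_eqs_enter() */
{
	dynticks_nmi_nesting = 0;		/* crowbar: forget any stale irq count */
	if (dynticks_nesting != 1) {		/* nested entry: accounting only */
		dynticks_nesting--;
		return;
	}
	dynticks_nesting = 0;
	rcu_watching = false;			/* the real code increments ->dynticks here */
}

static void eqs_exit(void)			/* mirrors rcu_eqs_exit() */
{
	if (dynticks_nesting) {			/* already watching: accounting only */
		dynticks_nesting++;
		return;
	}
	rcu_watching = true;			/* the real code increments ->dynticks here */
	dynticks_nesting = 1;
	dynticks_nmi_nesting = IRQ_NONIDLE_SENTINEL;
}

int main(void)
{
	eqs_enter();
	assert(!rcu_watching && dynticks_nesting == 0 && dynticks_nmi_nesting == 0);
	eqs_exit();
	assert(rcu_watching && dynticks_nesting == 1 &&
	       dynticks_nmi_nesting == IRQ_NONIDLE_SENTINEL);
	return 0;
}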
> +
> +/**
> + * rcu_idle_enter - inform RCU that current CPU is entering idle
> + *
> + * Enter idle mode, in other words, -leave- the mode in which RCU
> + * read-side critical sections can occur.  (Though RCU read-side
> + * critical sections can occur in irq handlers in idle, a possibility
> + * handled by irq_enter() and irq_exit().)
> + *
> + * If you add or remove a call to rcu_idle_enter(), be sure to test with
> + * CONFIG_RCU_EQS_DEBUG=y.
> + */
> +void rcu_idle_enter(void)
> +{
> +	lockdep_assert_irqs_disabled();
> +	rcu_eqs_enter(false);
> +}
> +
> +#ifdef CONFIG_NO_HZ_FULL
> +/**
> + * rcu_user_enter - inform RCU that we are resuming userspace.
> + *
> + * Enter RCU idle mode right before resuming userspace. No use of RCU
> + * is permitted between this call and rcu_user_exit(). This way the
> + * CPU doesn't need to maintain the tick for RCU maintenance purposes
> + * when the CPU runs in userspace.
> + *
> + * If you add or remove a call to rcu_user_enter(), be sure to test with
> + * CONFIG_RCU_EQS_DEBUG=y.
> + */
> +noinstr void rcu_user_enter(void)
> +{
> +	rcu_eqs_enter(true);
> +}
> +#endif /* CONFIG_NO_HZ_FULL */
> +
> +/**
> + * rcu_nmi_exit - inform RCU of exit from NMI context
> + *
> + * If we are returning from the outermost NMI handler that interrupted an
> + * RCU-idle period, update ct->dynticks and ct->dynticks_nmi_nesting
> + * to let the RCU grace-period handling know that the CPU is back to
> + * being RCU-idle.
> + *
> + * If you add or remove a call to rcu_nmi_exit(), be sure to test
> + * with CONFIG_RCU_EQS_DEBUG=y.
> + */
> +noinstr void rcu_nmi_exit(void)
> +{
> +	struct context_tracking *ct = this_cpu_ptr(&context_tracking);
> +
> +	instrumentation_begin();
> +	/*
> +	 * Check for ->dynticks_nmi_nesting underflow and bad ->dynticks.
> +	 * (We are exiting an NMI handler, so RCU better be paying attention
> +	 * to us!)
> +	 */
> +	WARN_ON_ONCE(ct->dynticks_nmi_nesting <= 0);
> +	WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs());
> +
> +	/*
> +	 * If the nesting level is not 1, the CPU wasn't RCU-idle, so
> +	 * leave it in non-RCU-idle state.
> +	 */
> +	if (ct->dynticks_nmi_nesting != 1) {
> +		trace_rcu_dyntick(TPS("--="), ct->dynticks_nmi_nesting, ct->dynticks_nmi_nesting - 2,
> +				  atomic_read(&ct->dynticks));
> +		WRITE_ONCE(ct->dynticks_nmi_nesting, /* No store tearing. */
> +			   ct->dynticks_nmi_nesting - 2);
> +		instrumentation_end();
> +		return;
> +	}
> +
> +	/* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */
> +	trace_rcu_dyntick(TPS("Startirq"), ct->dynticks_nmi_nesting, 0, atomic_read(&ct->dynticks));
> +	WRITE_ONCE(ct->dynticks_nmi_nesting, 0); /* Avoid store tearing. */
> +
> +	// instrumentation for the noinstr rcu_dynticks_eqs_enter()
> +	instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks));
> +	instrumentation_end();
> +
> +	// RCU is watching here ...
> +	rcu_dynticks_eqs_enter();
> +	// ... but is no longer watching here.
> +
> +	if (!in_nmi())
> +		rcu_dynticks_task_enter();
> +}
> +
> +/*
> + * Exit an RCU extended quiescent state, which can be either the
> + * idle loop or adaptive-tickless usermode execution.
> + *
> + * We crowbar the ->dynticks_nmi_nesting field to DYNTICK_IRQ_NONIDLE to
> + * allow for the possibility of usermode upcalls messing up our count of
> + * interrupt nesting level during the busy period that is just now starting.
> + */
> +static void noinstr rcu_eqs_exit(bool user)
> +{
> +	struct context_tracking *ct = this_cpu_ptr(&context_tracking);
> +	long oldval;
> +
> +	lockdep_assert_irqs_disabled();
> +	oldval = ct->dynticks_nesting;
> +	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && oldval < 0);
> +	if (oldval) {
> +		// RCU was already watching, so just do accounting and leave.
> +		ct->dynticks_nesting++;
> +		return;
> +	}
> +	rcu_dynticks_task_exit();
> +	// RCU is not watching here ...
> +	rcu_dynticks_eqs_exit();
> +	// ... but is watching here.
> +	instrumentation_begin();
> +
> +	// instrumentation for the noinstr rcu_dynticks_eqs_exit()
> +	instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks));
> +
> +	trace_rcu_dyntick(TPS("End"), ct->dynticks_nesting, 1, atomic_read(&ct->dynticks));
> +	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current));
> +	WRITE_ONCE(ct->dynticks_nesting, 1);
> +	WARN_ON_ONCE(ct->dynticks_nmi_nesting);
> +	WRITE_ONCE(ct->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE);
> +	instrumentation_end();
> +}
> +
> +/**
> + * rcu_idle_exit - inform RCU that current CPU is leaving idle
> + *
> + * Exit idle mode, in other words, -enter- the mode in which RCU
> + * read-side critical sections can occur.
> + *
> + * If you add or remove a call to rcu_idle_exit(), be sure to test with
> + * CONFIG_RCU_EQS_DEBUG=y.
> + */
> +void rcu_idle_exit(void)
> +{
> +	unsigned long flags;
> +
> +	local_irq_save(flags);
> +	rcu_eqs_exit(false);
> +	local_irq_restore(flags);
> +}
> +EXPORT_SYMBOL_GPL(rcu_idle_exit);
> +
> +#ifdef CONFIG_NO_HZ_FULL
> +/**
> + * rcu_user_exit - inform RCU that we are exiting userspace.
> + *
> + * Exit RCU idle mode while entering the kernel because it can
> + * run a RCU read side critical section anytime.
> + *
> + * If you add or remove a call to rcu_user_exit(), be sure to test with
> + * CONFIG_RCU_EQS_DEBUG=y.
> + */
> +void noinstr rcu_user_exit(void)
> +{
> +	rcu_eqs_exit(true);
> +}
> +#endif /* ifdef CONFIG_NO_HZ_FULL */
> +
> +/**
> + * rcu_nmi_enter - inform RCU of entry to NMI context
> + *
> + * If the CPU was idle from RCU's viewpoint, update ct->dynticks and
> + * ct->dynticks_nmi_nesting to let the RCU grace-period handling know
> + * that the CPU is active.  This implementation permits nested NMIs, as
> + * long as the nesting level does not overflow an int.  (You will probably
> + * run out of stack space first.)
> + *
> + * If you add or remove a call to rcu_nmi_enter(), be sure to test
> + * with CONFIG_RCU_EQS_DEBUG=y.
> + */
> +noinstr void rcu_nmi_enter(void)
> +{
> +	long incby = 2;
> +	struct context_tracking *ct = this_cpu_ptr(&context_tracking);
> +
> +	/* Complain about underflow. */
> +	WARN_ON_ONCE(ct->dynticks_nmi_nesting < 0);
> +
> +	/*
> +	 * If idle from RCU viewpoint, atomically increment ->dynticks
> +	 * to mark non-idle and increment ->dynticks_nmi_nesting by one.
> +	 * Otherwise, increment ->dynticks_nmi_nesting by two.  This means
> +	 * if ->dynticks_nmi_nesting is equal to one, we are guaranteed
> +	 * to be in the outermost NMI handler that interrupted an RCU-idle
> +	 * period (observation due to Andy Lutomirski).
> +	 */
> +	if (rcu_dynticks_curr_cpu_in_eqs()) {
> +
> +		if (!in_nmi())
> +			rcu_dynticks_task_exit();
> +
> +		// RCU is not watching here ...
> +		rcu_dynticks_eqs_exit();
> +		// ... but is watching here.
> +
> +		instrumentation_begin();
> +		// instrumentation for the noinstr rcu_dynticks_curr_cpu_in_eqs()
> +		instrument_atomic_read(&ct->dynticks, sizeof(ct->dynticks));
> +		// instrumentation for the noinstr rcu_dynticks_eqs_exit()
> +		instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks));
> +
> +		incby = 1;
> +	} else if (!in_nmi()) {
> +		instrumentation_begin();
> +		rcu_irq_enter_check_tick();
> +	} else {
> +		instrumentation_begin();
> +	}
> +
> +	trace_rcu_dyntick(incby == 1 ? TPS("Endirq") : TPS("++="),
> +			  ct->dynticks_nmi_nesting,
> +			  ct->dynticks_nmi_nesting + incby, atomic_read(&ct->dynticks));
> +	instrumentation_end();
> +	WRITE_ONCE(ct->dynticks_nmi_nesting, /* Prevent store tearing. */
> +		   ct->dynticks_nmi_nesting + incby);
> +	barrier();
> +}
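The incby dance above is the subtle part: an irq or NMI that interrupts an
RCU-idle CPU bumps ->dynticks_nmi_nesting by one, every nested handler bumps it
by two, so a value of exactly one identifies the outermost handler that
interrupted an extended quiescent state (Andy Lutomirski's observation, as the
comment says).  Reduced to its arithmetic, and again as a user-space toy with
invented names:

#include <assert.h>
#include <stdbool.h>

static long nmi_nesting;		/* 0 means the CPU is RCU-idle */
static bool rcu_watching;

static void nmi_enter(void)		/* mirrors rcu_nmi_enter() */
{
	long incby = 2;

	if (!rcu_watching) {		/* interrupted an extended quiescent state */
		rcu_watching = true;	/* the real code does rcu_dynticks_eqs_exit() */
		incby = 1;
	}
	nmi_nesting += incby;
}

static void nmi_exit(void)		/* mirrors rcu_nmi_exit() */
{
	assert(nmi_nesting > 0);
	if (nmi_nesting != 1) {		/* not the outermost handler from idle */
		nmi_nesting -= 2;
		return;
	}
	nmi_nesting = 0;		/* outermost: restore RCU-idleness */
	rcu_watching = false;
}

int main(void)
{
	/* Idle CPU takes an NMI, which is itself interrupted by another NMI. */
	nmi_enter();			/* nesting == 1: outermost from idle */
	assert(nmi_nesting == 1 && rcu_watching);
	nmi_enter();			/* nesting == 3 */
	nmi_exit();			/* back to 1 */
	assert(nmi_nesting == 1);
	nmi_exit();			/* back to idle */
	assert(nmi_nesting == 0 && !rcu_watching);
	return 0;
}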
>
>  #ifdef CONFIG_CONTEXT_TRACKING_USER
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 938537958c27..e55a44ed19b6 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -62,6 +62,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include "../time/tick-internal.h"
>
>  #include "tree.h"
> @@ -259,56 +260,6 @@ void rcu_softirq_qs(void)
>  	rcu_tasks_qs(current, false);
>  }
>
> -/*
> - * Increment the current CPU's rcu_data structure's ->dynticks field
> - * with ordering. Return the new value.
> - */
> -static noinline noinstr unsigned long rcu_dynticks_inc(int incby)
> -{
> -	return arch_atomic_add_return(incby, this_cpu_ptr(&context_tracking.dynticks));
> -}
> -
> -/*
> - * Record entry into an extended quiescent state. This is only to be
> - * called when not already in an extended quiescent state, that is,
> - * RCU is watching prior to the call to this function and is no longer
> - * watching upon return.
> - */
> -static noinstr void rcu_dynticks_eqs_enter(void)
> -{
> -	int seq;
> -
> -	/*
> -	 * CPUs seeing atomic_add_return() must see prior RCU read-side
> -	 * critical sections, and we also must force ordering with the
> -	 * next idle sojourn.
> -	 */
> -	rcu_dynticks_task_trace_enter();  // Before ->dynticks update!
> -	seq = rcu_dynticks_inc(1);
> -	// RCU is no longer watching. Better be in extended quiescent state!
> -	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && (seq & 0x1));
> -}
> -
> -/*
> - * Record exit from an extended quiescent state. This is only to be
> - * called from an extended quiescent state, that is, RCU is not watching
> - * prior to the call to this function and is watching upon return.
> - */
> -static noinstr void rcu_dynticks_eqs_exit(void)
> -{
> -	int seq;
> -
> -	/*
> -	 * CPUs seeing atomic_add_return() must see prior idle sojourns,
> -	 * and we also must force ordering with the next RCU read-side
> -	 * critical section.
> -	 */
> -	seq = rcu_dynticks_inc(1);
> -	// RCU is now watching. Better not be in an extended quiescent state!
> -	rcu_dynticks_task_trace_exit();  // After ->dynticks update!
> -	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !(seq & 0x1));
> -}
> -
>  /*
>   * Reset the current CPU's ->dynticks counter to indicate that the
>   * newly onlined CPU is no longer in an extended quiescent state.
> @@ -328,16 +279,6 @@ static void rcu_dynticks_eqs_online(void)
>  	rcu_dynticks_inc(1);
>  }
>
> -/*
> - * Is the current CPU in an extended quiescent state?
> - *
> - * No ordering, as we are sampling CPU-local information.
> - */
> -static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void)
> -{
> -	return !(arch_atomic_read(this_cpu_ptr(&context_tracking.dynticks)) & 0x1);
> -}
> -
>  /*
>   * Snapshot the ->dynticks counter with full ordering so as to allow
>   * stable comparison of this counter with past and future snapshots.
> @@ -606,65 +547,7 @@ void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags,
>  }
>  EXPORT_SYMBOL_GPL(rcutorture_get_gp_data);
>
> -/*
> - * Enter an RCU extended quiescent state, which can be either the
> - * idle loop or adaptive-tickless usermode execution.
> - *
> - * We crowbar the ->dynticks_nmi_nesting field to zero to allow for
> - * the possibility of usermode upcalls having messed up our count
> - * of interrupt nesting level during the prior busy period.
> - */
> -static noinstr void rcu_eqs_enter(bool user)
> -{
> -	struct context_tracking *ct = this_cpu_ptr(&context_tracking);
> -
> -	WARN_ON_ONCE(ct->dynticks_nmi_nesting != DYNTICK_IRQ_NONIDLE);
> -	WRITE_ONCE(ct->dynticks_nmi_nesting, 0);
> -	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
> -		     ct->dynticks_nesting == 0);
> -	if (ct->dynticks_nesting != 1) {
> -		// RCU will still be watching, so just do accounting and leave.
> -		ct->dynticks_nesting--;
> -		return;
> -	}
> -
> -	lockdep_assert_irqs_disabled();
> -	instrumentation_begin();
> -	trace_rcu_dyntick(TPS("Start"), ct->dynticks_nesting, 0, atomic_read(&ct->dynticks));
> -	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current));
> -	rcu_preempt_deferred_qs(current);
> -
> -	// instrumentation for the noinstr rcu_dynticks_eqs_enter()
> -	instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks));
> -
> -	instrumentation_end();
> -	WRITE_ONCE(ct->dynticks_nesting, 0); /* Avoid irq-access tearing. */
> -	// RCU is watching here ...
> -	rcu_dynticks_eqs_enter();
> -	// ... but is no longer watching here.
> -	rcu_dynticks_task_enter();
> -}
> -
> -/**
> - * rcu_idle_enter - inform RCU that current CPU is entering idle
> - *
> - * Enter idle mode, in other words, -leave- the mode in which RCU
> - * read-side critical sections can occur. (Though RCU read-side
> - * critical sections can occur in irq handlers in idle, a possibility
> - * handled by irq_enter() and irq_exit().)
> - *
> - * If you add or remove a call to rcu_idle_enter(), be sure to test with
> - * CONFIG_RCU_EQS_DEBUG=y.
> - */
> -void rcu_idle_enter(void)
> -{
> -	lockdep_assert_irqs_disabled();
> -	rcu_eqs_enter(false);
> -}
> -
> -#ifdef CONFIG_NO_HZ_FULL
> -
> -#if !defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK)
> +#if defined(CONFIG_NO_HZ_FULL) && (!defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK))
>  /*
>   * An empty function that will trigger a reschedule on
>   * IRQ tail once IRQs get re-enabled on userspace/guest resume.
> @@ -702,78 +585,7 @@ noinstr void rcu_irq_work_resched(void)
>  	}
>  	instrumentation_end();
>  }
> -#endif /* #if !defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK) */
> -
> -/**
> - * rcu_user_enter - inform RCU that we are resuming userspace.
> - *
> - * Enter RCU idle mode right before resuming userspace. No use of RCU
> - * is permitted between this call and rcu_user_exit(). This way the
> - * CPU doesn't need to maintain the tick for RCU maintenance purposes
> - * when the CPU runs in userspace.
> - *
> - * If you add or remove a call to rcu_user_enter(), be sure to test with
> - * CONFIG_RCU_EQS_DEBUG=y.
> - */
> -noinstr void rcu_user_enter(void)
> -{
> -	rcu_eqs_enter(true);
> -}
> -
> -#endif /* CONFIG_NO_HZ_FULL */
> -
> -/**
> - * rcu_nmi_exit - inform RCU of exit from NMI context
> - *
> - * If we are returning from the outermost NMI handler that interrupted an
> - * RCU-idle period, update ct->dynticks and ct->dynticks_nmi_nesting
> - * to let the RCU grace-period handling know that the CPU is back to
> - * being RCU-idle.
> - *
> - * If you add or remove a call to rcu_nmi_exit(), be sure to test
> - * with CONFIG_RCU_EQS_DEBUG=y.
> - */
> -noinstr void rcu_nmi_exit(void)
> -{
> -	struct context_tracking *ct = this_cpu_ptr(&context_tracking);
> -
> -	instrumentation_begin();
> -	/*
> -	 * Check for ->dynticks_nmi_nesting underflow and bad ->dynticks.
> -	 * (We are exiting an NMI handler, so RCU better be paying attention
> -	 * to us!)
> -	 */
> -	WARN_ON_ONCE(ct->dynticks_nmi_nesting <= 0);
> -	WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs());
> -
> -	/*
> -	 * If the nesting level is not 1, the CPU wasn't RCU-idle, so
> -	 * leave it in non-RCU-idle state.
> -	 */
> -	if (ct->dynticks_nmi_nesting != 1) {
> -		trace_rcu_dyntick(TPS("--="), ct->dynticks_nmi_nesting, ct->dynticks_nmi_nesting - 2,
> -				  atomic_read(&ct->dynticks));
> -		WRITE_ONCE(ct->dynticks_nmi_nesting, /* No store tearing. */
> -			   ct->dynticks_nmi_nesting - 2);
> -		instrumentation_end();
> -		return;
> -	}
> -
> -	/* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */
> -	trace_rcu_dyntick(TPS("Startirq"), ct->dynticks_nmi_nesting, 0, atomic_read(&ct->dynticks));
> -	WRITE_ONCE(ct->dynticks_nmi_nesting, 0); /* Avoid store tearing. */
> -
> -	// instrumentation for the noinstr rcu_dynticks_eqs_enter()
> -	instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks));
> -	instrumentation_end();
> -
> -	// RCU is watching here ...
> -	rcu_dynticks_eqs_enter();
> -	// ... but is no longer watching here.
> -
> -	if (!in_nmi())
> -		rcu_dynticks_task_enter();
> -}
> +#endif /* #if defined(CONFIG_NO_HZ_FULL) && (!defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK)) */
>
>  #ifdef CONFIG_PROVE_RCU
>  /**
> @@ -793,77 +605,6 @@ void rcu_irq_exit_check_preempt(void)
>  }
>  #endif /* #ifdef CONFIG_PROVE_RCU */
>
> -/*
> - * Exit an RCU extended quiescent state, which can be either the
> - * idle loop or adaptive-tickless usermode execution.
> - *
> - * We crowbar the ->dynticks_nmi_nesting field to DYNTICK_IRQ_NONIDLE to
> - * allow for the possibility of usermode upcalls messing up our count of
> - * interrupt nesting level during the busy period that is just now starting.
> - */
> -static void noinstr rcu_eqs_exit(bool user)
> -{
> -	struct context_tracking *ct = this_cpu_ptr(&context_tracking);
> -	long oldval;
> -
> -	lockdep_assert_irqs_disabled();
> -	oldval = ct->dynticks_nesting;
> -	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && oldval < 0);
> -	if (oldval) {
> -		// RCU was already watching, so just do accounting and leave.
> -		ct->dynticks_nesting++;
> -		return;
> -	}
> -	rcu_dynticks_task_exit();
> -	// RCU is not watching here ...
> -	rcu_dynticks_eqs_exit();
> -	// ... but is watching here.
> -	instrumentation_begin();
> -
> -	// instrumentation for the noinstr rcu_dynticks_eqs_exit()
> -	instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks));
> -
> -	trace_rcu_dyntick(TPS("End"), ct->dynticks_nesting, 1, atomic_read(&ct->dynticks));
> -	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current));
> -	WRITE_ONCE(ct->dynticks_nesting, 1);
> -	WARN_ON_ONCE(ct->dynticks_nmi_nesting);
> -	WRITE_ONCE(ct->dynticks_nmi_nesting, DYNTICK_IRQ_NONIDLE);
> -	instrumentation_end();
> -}
> -
> -/**
> - * rcu_idle_exit - inform RCU that current CPU is leaving idle
> - *
> - * Exit idle mode, in other words, -enter- the mode in which RCU
> - * read-side critical sections can occur.
> - *
> - * If you add or remove a call to rcu_idle_exit(), be sure to test with
> - * CONFIG_RCU_EQS_DEBUG=y.
> - */
> -void rcu_idle_exit(void)
> -{
> -	unsigned long flags;
> -
> -	local_irq_save(flags);
> -	rcu_eqs_exit(false);
> -	local_irq_restore(flags);
> -}
> -
> -#ifdef CONFIG_NO_HZ_FULL
> -/**
> - * rcu_user_exit - inform RCU that we are exiting userspace.
> - *
> - * Exit RCU idle mode while entering the kernel because it can
> - * run a RCU read side critical section anytime.
> - *
> - * If you add or remove a call to rcu_user_exit(), be sure to test with
> - * CONFIG_RCU_EQS_DEBUG=y.
> - */
> -void noinstr rcu_user_exit(void)
> -{
> -	rcu_eqs_exit(true);
> -}
> -
>  /**
>   * __rcu_irq_enter_check_tick - Enable scheduler tick on CPU if RCU needs it.
>   *
> @@ -924,67 +665,6 @@ void __rcu_irq_enter_check_tick(void)
>  	}
>  	raw_spin_unlock_rcu_node(rdp->mynode);
>  }
> -#endif /* CONFIG_NO_HZ_FULL */
> -
> -/**
> - * rcu_nmi_enter - inform RCU of entry to NMI context
> - *
> - * If the CPU was idle from RCU's viewpoint, update ct->dynticks and
> - * ct->dynticks_nmi_nesting to let the RCU grace-period handling know
> - * that the CPU is active. This implementation permits nested NMIs, as
> - * long as the nesting level does not overflow an int. (You will probably
> - * run out of stack space first.)
> - *
> - * If you add or remove a call to rcu_nmi_enter(), be sure to test
> - * with CONFIG_RCU_EQS_DEBUG=y.
> - */
> -noinstr void rcu_nmi_enter(void)
> -{
> -	long incby = 2;
> -	struct context_tracking *ct = this_cpu_ptr(&context_tracking);
> -
> -	/* Complain about underflow. */
> -	WARN_ON_ONCE(ct->dynticks_nmi_nesting < 0);
> -
> -	/*
> -	 * If idle from RCU viewpoint, atomically increment ->dynticks
> -	 * to mark non-idle and increment ->dynticks_nmi_nesting by one.
> -	 * Otherwise, increment ->dynticks_nmi_nesting by two. This means
> -	 * if ->dynticks_nmi_nesting is equal to one, we are guaranteed
> -	 * to be in the outermost NMI handler that interrupted an RCU-idle
> -	 * period (observation due to Andy Lutomirski).
> -	 */
> -	if (rcu_dynticks_curr_cpu_in_eqs()) {
> -
> -		if (!in_nmi())
> -			rcu_dynticks_task_exit();
> -
> -		// RCU is not watching here ...
> -		rcu_dynticks_eqs_exit();
> -		// ... but is watching here.
> -
> -		instrumentation_begin();
> -		// instrumentation for the noinstr rcu_dynticks_curr_cpu_in_eqs()
> -		instrument_atomic_read(&ct->dynticks, sizeof(ct->dynticks));
> -		// instrumentation for the noinstr rcu_dynticks_eqs_exit()
> -		instrument_atomic_write(&ct->dynticks, sizeof(ct->dynticks));
> -
> -		incby = 1;
> -	} else if (!in_nmi()) {
> -		instrumentation_begin();
> -		rcu_irq_enter_check_tick();
> -	} else {
> -		instrumentation_begin();
> -	}
> -
> -	trace_rcu_dyntick(incby == 1 ? TPS("Endirq") : TPS("++="),
> -			  ct->dynticks_nmi_nesting,
> -			  ct->dynticks_nmi_nesting + incby, atomic_read(&ct->dynticks));
> -	instrumentation_end();
> -	WRITE_ONCE(ct->dynticks_nmi_nesting, /* Prevent store tearing. */
> -		   ct->dynticks_nmi_nesting + incby);
> -	barrier();
> -}
>
>  /*
>   * Check to see if any future non-offloaded RCU-related work will need
> diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> index 56d38568292b..a42c2a737e24 100644
> --- a/kernel/rcu/tree.h
> +++ b/kernel/rcu/tree.h
> @@ -426,7 +426,6 @@ static void rcu_cpu_kthread_setup(unsigned int cpu);
>  static void rcu_spawn_one_boost_kthread(struct rcu_node *rnp);
>  static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
>  static bool rcu_preempt_need_deferred_qs(struct task_struct *t);
> -static void rcu_preempt_deferred_qs(struct task_struct *t);
>  static void zero_cpu_stall_ticks(struct rcu_data *rdp);
>  static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp);
>  static void rcu_nocb_gp_cleanup(struct swait_queue_head *sq);
> @@ -466,10 +465,6 @@ do { \
>
>  static void rcu_bind_gp_kthread(void);
>  static bool rcu_nohz_full_cpu(void);
> -static void rcu_dynticks_task_enter(void);
> -static void rcu_dynticks_task_exit(void);
> -static void rcu_dynticks_task_trace_enter(void);
> -static void rcu_dynticks_task_trace_exit(void);
>
>  /* Forward declarations for tree_stall.h */
>  static void record_gp_stall_check_time(void);
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 6b9bcd45c7b2..be4b74b46109 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -595,7 +595,7 @@ static bool rcu_preempt_need_deferred_qs(struct task_struct *t)
>   * evaluate safety in terms of interrupt, softirq, and preemption
>   * disabling.
>   */
> -static void rcu_preempt_deferred_qs(struct task_struct *t)
> +void rcu_preempt_deferred_qs(struct task_struct *t)
>  {
>  	unsigned long flags;
>
> @@ -1283,37 +1283,3 @@ static void rcu_bind_gp_kthread(void)
>  		return;
>  	housekeeping_affine(current, HK_FLAG_RCU);
>  }
> -
> -/* Record the current task on dyntick-idle entry. */
> -static __always_inline void rcu_dynticks_task_enter(void)
> -{
> -#if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL)
> -	WRITE_ONCE(current->rcu_tasks_idle_cpu, smp_processor_id());
> -#endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */
> -}
> -
> -/* Record no current task on dyntick-idle exit. */
> -static __always_inline void rcu_dynticks_task_exit(void)
> -{
> -#if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL)
> -	WRITE_ONCE(current->rcu_tasks_idle_cpu, -1);
> -#endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */
> -}
> -
> -/* Turn on heavyweight RCU tasks trace readers on idle/user entry. */
> -static __always_inline void rcu_dynticks_task_trace_enter(void)
> -{
> -#ifdef CONFIG_TASKS_TRACE_RCU
> -	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
> -		current->trc_reader_special.b.need_mb = true;
> -#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
> -}
> -
> -/* Turn off heavyweight RCU tasks trace readers on idle/user exit. */
> -static __always_inline void rcu_dynticks_task_trace_exit(void)
> -{
> -#ifdef CONFIG_TASKS_TRACE_RCU
> -	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
> -		current->trc_reader_special.b.need_mb = false;
> -#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
> -}
> --
> 2.25.1
>
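One closing observation: the grace-period code that stays behind in tree.c
(the snapshot helpers under the "Snapshot the ->dynticks counter with full
ordering" comment kept above) now samples a counter owned by context tracking,
which is why the ->dynticks accessors have to remain visible to RCU after this
move.  The remote-sampling logic those helpers implement is roughly "an even
snapshot means the CPU is in an EQS right now, and a changed snapshot means an
EQS happened since" (toy sketch, invented names, not the tree.c code):

#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_ulong dynticks = 1;	/* the remote CPU's counter; odd == watching */

static unsigned long dynticks_snap(void)
{
	/* Full ordering in the real code, so snapshots compare stably. */
	return atomic_load_explicit(&dynticks, memory_order_seq_cst);
}

static bool snap_in_eqs(unsigned long snap)
{
	return !(snap & 0x1);		/* even: the CPU was in an EQS at snapshot time */
}

static bool in_eqs_since(unsigned long old_snap)
{
	/* Either the CPU was already in an EQS, or its counter has moved on. */
	return snap_in_eqs(old_snap) || dynticks_snap() != old_snap;
}

int main(void)
{
	unsigned long snap = dynticks_snap();

	assert(!in_eqs_since(snap));		/* nothing has happened yet */
	atomic_fetch_add(&dynticks, 1);		/* remote CPU enters idle */
	assert(snap_in_eqs(dynticks_snap()));
	assert(in_eqs_since(snap));		/* quiescent state observed */
	return 0;
}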