* Re: [PATCH] clock: Fix smp_processor_id() in preemptible bug
2017-03-13 12:46 ` Peter Zijlstra
@ 2017-03-13 15:55 ` Paul E. McKenney
2017-03-14 16:24 ` Paul E. McKenney
2017-03-15 22:58 ` Paul E. McKenney
2017-03-23 9:10 ` [tip:sched/urgent] sched/clock: Fix clear_sched_clock_stable() preempt wobbly tip-bot for Peter Zijlstra
2 siblings, 1 reply; 15+ messages in thread
From: Paul E. McKenney @ 2017-03-13 15:55 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: linux-kernel, mingo, fweisbec
On Mon, Mar 13, 2017 at 01:46:21PM +0100, Peter Zijlstra wrote:
> On Thu, Mar 09, 2017 at 10:37:32AM -0800, Paul E. McKenney wrote:
> > And it does pass light testing. I will hammer it harder this evening.
> >
> > So please send a formal patch!
>
> Changed it a bit...
>
> ---
> Subject: sched/clock: Some clear_sched_clock_stable() vs hotplug wobbles
>
> Paul reported two independent problems with clear_sched_clock_stable().
>
> - if we tickle it during hotplug (even though the sched_clock was
> already marked unstable) we'll attempt to schedule_work() and
> this explodes because RCU isn't watching the new CPU quite yet.
>
> - since we run all of __clear_sched_clock_stable() from workqueue
> context, there's a preempt problem.
>
> Cure both by only doing the static_branch_disable() from a workqueue,
> and only when it's still stable.
>
> This leaves the problem of what to do about hotplug actually wrecking TSC
> though, because if it was stable and now isn't, then we will want to run
> that work, which will then prod RCU the wrong way. Bloody hotplug.
>
> Reported-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
This passes initial testing. I will hammer it harder overnight, but
in the meantime:
Tested-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> ---
> kernel/sched/clock.c | 17 ++++++++++++-----
> 1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
> index a08795e21628..fec0f58c8dee 100644
> --- a/kernel/sched/clock.c
> +++ b/kernel/sched/clock.c
> @@ -141,7 +141,14 @@ static void __set_sched_clock_stable(void)
>  	tick_dep_clear(TICK_DEP_BIT_CLOCK_UNSTABLE);
>  }
> 
> -static void __clear_sched_clock_stable(struct work_struct *work)
> +static void __sched_clock_work(struct work_struct *work)
> +{
> +	static_branch_disable(&__sched_clock_stable);
> +}
> +
> +static DECLARE_WORK(sched_clock_work, __sched_clock_work);
> +
> +static void __clear_sched_clock_stable(void)
>  {
>  	struct sched_clock_data *scd = this_scd();
> 
> @@ -160,11 +167,11 @@ static void __clear_sched_clock_stable(struct work_struct *work)
>  		scd->tick_gtod, gtod_offset,
>  		scd->tick_raw, raw_offset);
> 
> -	static_branch_disable(&__sched_clock_stable);
>  	tick_dep_set(TICK_DEP_BIT_CLOCK_UNSTABLE);
> -}
> 
> -static DECLARE_WORK(sched_clock_work, __clear_sched_clock_stable);
> +	if (sched_clock_stable())
> +		schedule_work(&sched_clock_work);
> +}
> 
>  void clear_sched_clock_stable(void)
>  {
> @@ -173,7 +180,7 @@ void clear_sched_clock_stable(void)
>  	smp_mb(); /* matches sched_clock_init_late() */
> 
>  	if (sched_clock_running == 2)
> -		schedule_work(&sched_clock_work);
> +		__clear_sched_clock_stable();
>  }
> 
>  void sched_clock_init_late(void)
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH] clock: Fix smp_processor_id() in preemptible bug
2017-03-13 15:55 ` Paul E. McKenney
@ 2017-03-14 16:24 ` Paul E. McKenney
0 siblings, 0 replies; 15+ messages in thread
From: Paul E. McKenney @ 2017-03-14 16:24 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: linux-kernel, mingo, fweisbec
On Mon, Mar 13, 2017 at 08:55:21AM -0700, Paul E. McKenney wrote:
> On Mon, Mar 13, 2017 at 01:46:21PM +0100, Peter Zijlstra wrote:
> > On Thu, Mar 09, 2017 at 10:37:32AM -0800, Paul E. McKenney wrote:
> > > And it does pass light testing. I will hammer it harder this evening.
> > >
> > > So please send a formal patch!
> >
> > Changed it a bit...
> >
> > ---
> > Subject: sched/clock: Some clear_sched_clock_stable() vs hotplug wobbles
> >
> > Paul reported two independent problems with clear_sched_clock_stable().
> >
> > - if we tickle it during hotplug (even though the sched_clock was
> > already marked unstable) we'll attempt to schedule_work() and
> > this explodes because RCU isn't watching the new CPU quite yet.
> >
> > - since we run all of __clear_sched_clock_stable() from workqueue
> > context, there's a preempt problem.
> >
> > Cure both by only doing the static_branch_disable() from a workqueue,
> > and only when it's still stable.
> >
> > This leaves the problem what to do about hotplug actually wrecking TSC
> > though, because if it was stable and now isn't, then we will want to run
> > that work, which then will prod RCU the wrong way. Bloody hotplug.
> >
> > Reported-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>
> This passes initial testing. I will hammer it harder overnight, but
> in the meantime:
And it did just fine overnight, thank you!
> Tested-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
I am guessing that you will be pushing this one for -rc3, but please
let me know.
Thanx, Paul
> > ---
> > kernel/sched/clock.c | 17 ++++++++++++-----
> > 1 file changed, 12 insertions(+), 5 deletions(-)
> >
> > diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
> > index a08795e21628..fec0f58c8dee 100644
> > --- a/kernel/sched/clock.c
> > +++ b/kernel/sched/clock.c
> > @@ -141,7 +141,14 @@ static void __set_sched_clock_stable(void)
> > tick_dep_clear(TICK_DEP_BIT_CLOCK_UNSTABLE);
> > }
> >
> > -static void __clear_sched_clock_stable(struct work_struct *work)
> > +static void __sched_clock_work(struct work_struct *work)
> > +{
> > + static_branch_disable(&__sched_clock_stable);
> > +}
> > +
> > +static DECLARE_WORK(sched_clock_work, __sched_clock_work);
> > +
> > +static void __clear_sched_clock_stable(void)
> > {
> > struct sched_clock_data *scd = this_scd();
> >
> > @@ -160,11 +167,11 @@ static void __clear_sched_clock_stable(struct work_struct *work)
> > scd->tick_gtod, gtod_offset,
> > scd->tick_raw, raw_offset);
> >
> > - static_branch_disable(&__sched_clock_stable);
> > tick_dep_set(TICK_DEP_BIT_CLOCK_UNSTABLE);
> > -}
> >
> > -static DECLARE_WORK(sched_clock_work, __clear_sched_clock_stable);
> > + if (sched_clock_stable())
> > + schedule_work(&sched_clock_work);
> > +}
> >
> > void clear_sched_clock_stable(void)
> > {
> > @@ -173,7 +180,7 @@ void clear_sched_clock_stable(void)
> > smp_mb(); /* matches sched_clock_init_late() */
> >
> > if (sched_clock_running == 2)
> > - schedule_work(&sched_clock_work);
> > + __clear_sched_clock_stable();
> > }
> >
> > void sched_clock_init_late(void)
> >
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH] clock: Fix smp_processor_id() in preemptible bug
2017-03-13 12:46 ` Peter Zijlstra
2017-03-13 15:55 ` Paul E. McKenney
@ 2017-03-15 22:58 ` Paul E. McKenney
2017-03-16 15:53 ` Peter Zijlstra
2017-03-23 9:10 ` [tip:sched/urgent] sched/clock: Fix clear_sched_clock_stable() preempt wobbly tip-bot for Peter Zijlstra
2 siblings, 1 reply; 15+ messages in thread
From: Paul E. McKenney @ 2017-03-15 22:58 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: linux-kernel, mingo, fweisbec
On Mon, Mar 13, 2017 at 01:46:21PM +0100, Peter Zijlstra wrote:
> On Thu, Mar 09, 2017 at 10:37:32AM -0800, Paul E. McKenney wrote:
> > And it does pass light testing. I will hammer it harder this evening.
> >
> > So please send a formal patch!
>
> Changed it a bit...
>
> ---
> Subject: sched/clock: Some clear_sched_clock_stable() vs hotplug wobbles
>
> Paul reported two independent problems with clear_sched_clock_stable().
>
> - if we tickle it during hotplug (even though the sched_clock was
> already marked unstable) we'll attempt to schedule_work() and
> this explodes because RCU isn't watching the new CPU quite yet.
>
> - since we run all of __clear_sched_clock_stable() from workqueue
> context, there's a preempt problem.
>
> Cure both by only doing the static_branch_disable() from a workqueue,
> and only when it's still stable.
>
> This leaves the problem what to do about hotplug actually wrecking TSC
> though, because if it was stable and now isn't, then we will want to run
> that work, which then will prod RCU the wrong way. Bloody hotplug.
Would it help to do the same trick tglx applied to the hot-unplug path,
that is, IPIing some other CPU to schedule the workqueue?
Thanx, Paul
> Reported-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> kernel/sched/clock.c | 17 ++++++++++++-----
> 1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
> index a08795e21628..fec0f58c8dee 100644
> --- a/kernel/sched/clock.c
> +++ b/kernel/sched/clock.c
> @@ -141,7 +141,14 @@ static void __set_sched_clock_stable(void)
> tick_dep_clear(TICK_DEP_BIT_CLOCK_UNSTABLE);
> }
>
> -static void __clear_sched_clock_stable(struct work_struct *work)
> +static void __sched_clock_work(struct work_struct *work)
> +{
> + static_branch_disable(&__sched_clock_stable);
> +}
> +
> +static DECLARE_WORK(sched_clock_work, __sched_clock_work);
> +
> +static void __clear_sched_clock_stable(void)
> {
> struct sched_clock_data *scd = this_scd();
>
> @@ -160,11 +167,11 @@ static void __clear_sched_clock_stable(struct work_struct *work)
> scd->tick_gtod, gtod_offset,
> scd->tick_raw, raw_offset);
>
> - static_branch_disable(&__sched_clock_stable);
> tick_dep_set(TICK_DEP_BIT_CLOCK_UNSTABLE);
> -}
>
> -static DECLARE_WORK(sched_clock_work, __clear_sched_clock_stable);
> + if (sched_clock_stable())
> + schedule_work(&sched_clock_work);
> +}
>
> void clear_sched_clock_stable(void)
> {
> @@ -173,7 +180,7 @@ void clear_sched_clock_stable(void)
> smp_mb(); /* matches sched_clock_init_late() */
>
> if (sched_clock_running == 2)
> - schedule_work(&sched_clock_work);
> + __clear_sched_clock_stable();
> }
>
> void sched_clock_init_late(void)
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH] clock: Fix smp_processor_id() in preemptible bug
2017-03-15 22:58 ` Paul E. McKenney
@ 2017-03-16 15:53 ` Peter Zijlstra
2017-03-16 16:57 ` Paul E. McKenney
0 siblings, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2017-03-16 15:53 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: linux-kernel, mingo, fweisbec
On Wed, Mar 15, 2017 at 03:58:17PM -0700, Paul E. McKenney wrote:
> On Mon, Mar 13, 2017 at 01:46:21PM +0100, Peter Zijlstra wrote:
> > On Thu, Mar 09, 2017 at 10:37:32AM -0800, Paul E. McKenney wrote:
> > > And it does pass light testing. I will hammer it harder this evening.
> > >
> > > So please send a formal patch!
> >
> > Changed it a bit...
> >
> > ---
> > Subject: sched/clock: Some clear_sched_clock_stable() vs hotplug wobbles
> >
> > Paul reported two independent problems with clear_sched_clock_stable().
> >
> > - if we tickle it during hotplug (even though the sched_clock was
> > already marked unstable) we'll attempt to schedule_work() and
> > this explodes because RCU isn't watching the new CPU quite yet.
> >
> > - since we run all of __clear_sched_clock_stable() from workqueue
> > context, there's a preempt problem.
> >
> > Cure both by only doing the static_branch_disable() from a workqueue,
> > and only when it's still stable.
> >
> > This leaves the problem what to do about hotplug actually wrecking TSC
> > though, because if it was stable and now isn't, then we will want to run
> > that work, which then will prod RCU the wrong way. Bloody hotplug.
>
> Would it help to do the same trick tglx applied to the hot-unplug path,
> that is IPIing some other CPU to schedule the workqueue?
>
So I've been looking again, and I don't think it's a problem anymore.
The problem you reported here:
https://lkml.kernel.org/r/20170308221656.GA11949@linux.vnet.ibm.com
Should not happen after commit:
f94c8d116997 ("sched/clock, x86/tsc: Rework the x86 'unstable' sched_clock() interface")
which landed around 4.11-rc2; so _after_ your kernel (which reported
itself as -rc1).
Because since that commit we'll never call clear_sched_clock_stable() if
tsc_unstable is set.
So I'll have to amend the Changelog somewhat.
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH] clock: Fix smp_processor_id() in preemptible bug
2017-03-16 15:53 ` Peter Zijlstra
@ 2017-03-16 16:57 ` Paul E. McKenney
2017-03-16 21:00 ` Paul E. McKenney
0 siblings, 1 reply; 15+ messages in thread
From: Paul E. McKenney @ 2017-03-16 16:57 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: linux-kernel, mingo, fweisbec
On Thu, Mar 16, 2017 at 04:53:10PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 15, 2017 at 03:58:17PM -0700, Paul E. McKenney wrote:
> > On Mon, Mar 13, 2017 at 01:46:21PM +0100, Peter Zijlstra wrote:
> > > On Thu, Mar 09, 2017 at 10:37:32AM -0800, Paul E. McKenney wrote:
> > > > And it does pass light testing. I will hammer it harder this evening.
> > > >
> > > > So please send a formal patch!
> > >
> > > Changed it a bit...
> > >
> > > ---
> > > Subject: sched/clock: Some clear_sched_clock_stable() vs hotplug wobbles
> > >
> > > Paul reported two independent problems with clear_sched_clock_stable().
> > >
> > > - if we tickle it during hotplug (even though the sched_clock was
> > > already marked unstable) we'll attempt to schedule_work() and
> > > this explodes because RCU isn't watching the new CPU quite yet.
> > >
> > > - since we run all of __clear_sched_clock_stable() from workqueue
> > > context, there's a preempt problem.
> > >
> > > Cure both by only doing the static_branch_disable() from a workqueue,
> > > and only when it's still stable.
> > >
> > > This leaves the problem what to do about hotplug actually wrecking TSC
> > > though, because if it was stable and now isn't, then we will want to run
> > > that work, which then will prod RCU the wrong way. Bloody hotplug.
> >
> > Would it help to do the same trick tglx applied to the hot-unplug path,
> > that is IPIing some other CPU to schedule the workqueue?
>
> So I've been looking again; and I don't think its a problem anymore.
>
> The problem you reported here:
>
> https://lkml.kernel.org/r/20170308221656.GA11949@linux.vnet.ibm.com
>
> Should not happen after commit:
>
> f94c8d116997 ("sched/clock, x86/tsc: Rework the x86 'unstable' sched_clock() interface")
>
> which landed around 4.11-rc2; so _after_ your kernel (which reported
> itself as -rc1).
Very good! I am re-running rcutorture on -rc2 (without your patch)
and will let you know how it goes.
> Because since that commit we'll never call clear_sched_clock_stable() if
> tsc_unstable is set.
>
> So I'll have to amend the Changelog somewhat.
Sounds good!
Thanx, Paul
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH] clock: Fix smp_processor_id() in preemptible bug
2017-03-16 16:57 ` Paul E. McKenney
@ 2017-03-16 21:00 ` Paul E. McKenney
2017-03-16 21:06 ` Paul E. McKenney
0 siblings, 1 reply; 15+ messages in thread
From: Paul E. McKenney @ 2017-03-16 21:00 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: linux-kernel, mingo, fweisbec
On Thu, Mar 16, 2017 at 09:57:48AM -0700, Paul E. McKenney wrote:
> On Thu, Mar 16, 2017 at 04:53:10PM +0100, Peter Zijlstra wrote:
> > On Wed, Mar 15, 2017 at 03:58:17PM -0700, Paul E. McKenney wrote:
> > > On Mon, Mar 13, 2017 at 01:46:21PM +0100, Peter Zijlstra wrote:
> > > > On Thu, Mar 09, 2017 at 10:37:32AM -0800, Paul E. McKenney wrote:
> > > > > And it does pass light testing. I will hammer it harder this evening.
> > > > >
> > > > > So please send a formal patch!
> > > >
> > > > Changed it a bit...
> > > >
> > > > ---
> > > > Subject: sched/clock: Some clear_sched_clock_stable() vs hotplug wobbles
> > > >
> > > > Paul reported two independent problems with clear_sched_clock_stable().
> > > >
> > > > - if we tickle it during hotplug (even though the sched_clock was
> > > > already marked unstable) we'll attempt to schedule_work() and
> > > > this explodes because RCU isn't watching the new CPU quite yet.
> > > >
> > > > - since we run all of __clear_sched_clock_stable() from workqueue
> > > > context, there's a preempt problem.
> > > >
> > > > Cure both by only doing the static_branch_disable() from a workqueue,
> > > > and only when it's still stable.
> > > >
> > > > This leaves the problem what to do about hotplug actually wrecking TSC
> > > > though, because if it was stable and now isn't, then we will want to run
> > > > that work, which then will prod RCU the wrong way. Bloody hotplug.
> > >
> > > Would it help to do the same trick tglx applied to the hot-unplug path,
> > > that is IPIing some other CPU to schedule the workqueue?
> >
> > So I've been looking again; and I don't think its a problem anymore.
> >
> > The problem you reported here:
> >
> > https://lkml.kernel.org/r/20170308221656.GA11949@linux.vnet.ibm.com
> >
> > Should not happen after commit:
> >
> > f94c8d116997 ("sched/clock, x86/tsc: Rework the x86 'unstable' sched_clock() interface")
> >
> > which landed around 4.11-rc2; so _after_ your kernel (which reported
> > itself as -rc1).
>
> Very good! I am re-runnning rcutorture on -rc2 (without your patch)
> and will let you know how it goes.
And -rc2 works just fine on rcutorture, thank you again!
Thanx, Paul
> > Because since that commit we'll never call clear_sched_clock_stable() if
> > tsc_unstable is set.
> >
> > So I'll have to amend the Changelog somewhat.
>
> Sounds good!
>
> Thanx, Paul
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH] clock: Fix smp_processor_id() in preemptible bug
2017-03-16 21:00 ` Paul E. McKenney
@ 2017-03-16 21:06 ` Paul E. McKenney
0 siblings, 0 replies; 15+ messages in thread
From: Paul E. McKenney @ 2017-03-16 21:06 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: linux-kernel, mingo, fweisbec
On Thu, Mar 16, 2017 at 02:00:02PM -0700, Paul E. McKenney wrote:
> On Thu, Mar 16, 2017 at 09:57:48AM -0700, Paul E. McKenney wrote:
> > On Thu, Mar 16, 2017 at 04:53:10PM +0100, Peter Zijlstra wrote:
> > > On Wed, Mar 15, 2017 at 03:58:17PM -0700, Paul E. McKenney wrote:
> > > > On Mon, Mar 13, 2017 at 01:46:21PM +0100, Peter Zijlstra wrote:
> > > > > On Thu, Mar 09, 2017 at 10:37:32AM -0800, Paul E. McKenney wrote:
> > > > > > And it does pass light testing. I will hammer it harder this evening.
> > > > > >
> > > > > > So please send a formal patch!
> > > > >
> > > > > Changed it a bit...
> > > > >
> > > > > ---
> > > > > Subject: sched/clock: Some clear_sched_clock_stable() vs hotplug wobbles
> > > > >
> > > > > Paul reported two independent problems with clear_sched_clock_stable().
> > > > >
> > > > > - if we tickle it during hotplug (even though the sched_clock was
> > > > > already marked unstable) we'll attempt to schedule_work() and
> > > > > this explodes because RCU isn't watching the new CPU quite yet.
> > > > >
> > > > > - since we run all of __clear_sched_clock_stable() from workqueue
> > > > > context, there's a preempt problem.
> > > > >
> > > > > Cure both by only doing the static_branch_disable() from a workqueue,
> > > > > and only when it's still stable.
> > > > >
> > > > > This leaves the problem what to do about hotplug actually wrecking TSC
> > > > > though, because if it was stable and now isn't, then we will want to run
> > > > > that work, which then will prod RCU the wrong way. Bloody hotplug.
> > > >
> > > > Would it help to do the same trick tglx applied to the hot-unplug path,
> > > > that is IPIing some other CPU to schedule the workqueue?
> > >
> > > So I've been looking again; and I don't think its a problem anymore.
> > >
> > > The problem you reported here:
> > >
> > > https://lkml.kernel.org/r/20170308221656.GA11949@linux.vnet.ibm.com
> > >
> > > Should not happen after commit:
> > >
> > > f94c8d116997 ("sched/clock, x86/tsc: Rework the x86 'unstable' sched_clock() interface")
> > >
> > > which landed around 4.11-rc2; so _after_ your kernel (which reported
> > > itself as -rc1).
> >
> > Very good! I am re-runnning rcutorture on -rc2 (without your patch)
> > and will let you know how it goes.
>
> And -rc2 works just fine on rcutorture, thank you again!
>
> Thanx, Paul
>
> > > Because since that commit we'll never call clear_sched_clock_stable() if
> > > tsc_unstable is set.
> > >
> > > So I'll have to amend the Changelog somewhat.
> >
> > Sounds good!
Ah, and I dropped this patch when rebasing to v4.11-rc2 in anticipation
of your Changelog amendments:
sched/clock: Some clear_sched_clock_stable() vs hotplug wobbles
Please let me know if you would instead prefer me to keep carrying the
current version.
Thanx, Paul
^ permalink raw reply [flat|nested] 15+ messages in thread
* [tip:sched/urgent] sched/clock: Fix clear_sched_clock_stable() preempt wobbly
2017-03-13 12:46 ` Peter Zijlstra
2017-03-13 15:55 ` Paul E. McKenney
2017-03-15 22:58 ` Paul E. McKenney
@ 2017-03-23 9:10 ` tip-bot for Peter Zijlstra
2017-03-23 16:52 ` Paul E. McKenney
2 siblings, 1 reply; 15+ messages in thread
From: tip-bot for Peter Zijlstra @ 2017-03-23 9:10 UTC (permalink / raw)
To: linux-tip-commits
Cc: efault, linux-kernel, tglx, torvalds, paulmck, mingo, hpa, peterz
Commit-ID: 71fdb70eb48784c1f28cdf2e67c4c587dd7f2594
Gitweb: http://git.kernel.org/tip/71fdb70eb48784c1f28cdf2e67c4c587dd7f2594
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Mon, 13 Mar 2017 13:46:21 +0100
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 23 Mar 2017 07:31:48 +0100
sched/clock: Fix clear_sched_clock_stable() preempt wobbly
Paul reported a problem with clear_sched_clock_stable(). Since we run
all of __clear_sched_clock_stable() from workqueue context, there's a
preempt problem.
Solve it by only running the static_key_disable() from a workqueue.
Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: fweisbec@gmail.com
Link: http://lkml.kernel.org/r/20170313124621.GA3328@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/clock.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index a08795e..fec0f58 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -141,7 +141,14 @@ static void __set_sched_clock_stable(void)
 	tick_dep_clear(TICK_DEP_BIT_CLOCK_UNSTABLE);
 }
 
-static void __clear_sched_clock_stable(struct work_struct *work)
+static void __sched_clock_work(struct work_struct *work)
+{
+	static_branch_disable(&__sched_clock_stable);
+}
+
+static DECLARE_WORK(sched_clock_work, __sched_clock_work);
+
+static void __clear_sched_clock_stable(void)
 {
 	struct sched_clock_data *scd = this_scd();
 
@@ -160,11 +167,11 @@ static void __clear_sched_clock_stable(struct work_struct *work)
 		scd->tick_gtod, gtod_offset,
 		scd->tick_raw, raw_offset);
 
-	static_branch_disable(&__sched_clock_stable);
 	tick_dep_set(TICK_DEP_BIT_CLOCK_UNSTABLE);
-}
 
-static DECLARE_WORK(sched_clock_work, __clear_sched_clock_stable);
+	if (sched_clock_stable())
+		schedule_work(&sched_clock_work);
+}
 
 void clear_sched_clock_stable(void)
 {
@@ -173,7 +180,7 @@ void clear_sched_clock_stable(void)
 	smp_mb(); /* matches sched_clock_init_late() */
 
 	if (sched_clock_running == 2)
-		schedule_work(&sched_clock_work);
+		__clear_sched_clock_stable();
 }
 
 void sched_clock_init_late(void)
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [tip:sched/urgent] sched/clock: Fix clear_sched_clock_stable() preempt wobbly
2017-03-23 9:10 ` [tip:sched/urgent] sched/clock: Fix clear_sched_clock_stable() preempt wobbly tip-bot for Peter Zijlstra
@ 2017-03-23 16:52 ` Paul E. McKenney
0 siblings, 0 replies; 15+ messages in thread
From: Paul E. McKenney @ 2017-03-23 16:52 UTC (permalink / raw)
To: efault, tglx, linux-kernel, torvalds, mingo, hpa, peterz
Cc: linux-tip-commits
On Thu, Mar 23, 2017 at 02:10:47AM -0700, tip-bot for Peter Zijlstra wrote:
> Commit-ID: 71fdb70eb48784c1f28cdf2e67c4c587dd7f2594
> Gitweb: http://git.kernel.org/tip/71fdb70eb48784c1f28cdf2e67c4c587dd7f2594
> Author: Peter Zijlstra <peterz@infradead.org>
> AuthorDate: Mon, 13 Mar 2017 13:46:21 +0100
> Committer: Ingo Molnar <mingo@kernel.org>
> CommitDate: Thu, 23 Mar 2017 07:31:48 +0100
>
> sched/clock: Fix clear_sched_clock_stable() preempt wobbly
>
> Paul reported a problem with clear_sched_clock_stable(). Since we run
> all of __clear_sched_clock_stable() from workqueue context, there's a
> preempt problem.
>
> Solve it by only running the static_key_disable() from workqueue.
>
> Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Mike Galbraith <efault@gmx.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: fweisbec@gmail.com
> Link: http://lkml.kernel.org/r/20170313124621.GA3328@twins.programming.kicks-ass.net
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
For whatever it is worth given that it is already in -tip:
Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
> kernel/sched/clock.c | 17 ++++++++++++-----
> 1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
> index a08795e..fec0f58 100644
> --- a/kernel/sched/clock.c
> +++ b/kernel/sched/clock.c
> @@ -141,7 +141,14 @@ static void __set_sched_clock_stable(void)
> tick_dep_clear(TICK_DEP_BIT_CLOCK_UNSTABLE);
> }
>
> -static void __clear_sched_clock_stable(struct work_struct *work)
> +static void __sched_clock_work(struct work_struct *work)
> +{
> + static_branch_disable(&__sched_clock_stable);
> +}
> +
> +static DECLARE_WORK(sched_clock_work, __sched_clock_work);
> +
> +static void __clear_sched_clock_stable(void)
> {
> struct sched_clock_data *scd = this_scd();
>
> @@ -160,11 +167,11 @@ static void __clear_sched_clock_stable(struct work_struct *work)
> scd->tick_gtod, gtod_offset,
> scd->tick_raw, raw_offset);
>
> - static_branch_disable(&__sched_clock_stable);
> tick_dep_set(TICK_DEP_BIT_CLOCK_UNSTABLE);
> -}
>
> -static DECLARE_WORK(sched_clock_work, __clear_sched_clock_stable);
> + if (sched_clock_stable())
> + schedule_work(&sched_clock_work);
> +}
>
> void clear_sched_clock_stable(void)
> {
> @@ -173,7 +180,7 @@ void clear_sched_clock_stable(void)
> smp_mb(); /* matches sched_clock_init_late() */
>
> if (sched_clock_running == 2)
> - schedule_work(&sched_clock_work);
> + __clear_sched_clock_stable();
> }
>
> void sched_clock_init_late(void)
>
^ permalink raw reply [flat|nested] 15+ messages in thread