* [PATCH] Sched/fair: Block nohz tick_stop when cfs bandwidth in use
@ 2023-06-22 13:27 Phil Auld
  2023-06-22 13:44 ` Phil Auld
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Phil Auld @ 2023-06-22 13:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Juri Lelli, Ingo Molnar, Daniel Bristot de Oliveira,
	Peter Zijlstra, Vincent Guittot, Dietmar Eggemann,
	Valentin Schneider, Ben Segall, Steven Rostedt, Mel Gorman,
	Phil Auld

CFS bandwidth limits and NOHZ full don't play well together.  Tasks
can easily run well past their quotas before a remote tick does
accounting.  This leads to long, multi-period stalls before such
tasks can run again. Currently, when presented with these conflicting
requirements the scheduler is favoring nohz_full and letting the tick
be stopped. However, nohz tick stopping is already best-effort, there
are a number of conditions that can prevent it, whereas cfs runtime
bandwidth is expected to be enforced.

Make the scheduler favor bandwidth over stopping the tick by setting
TICK_DEP_BIT_SCHED when the only running task is a cfs task with
runtime limit enabled.

Add sched_feat HZ_BW (off by default) to control this behavior.

Signed-off-by: Phil Auld <pauld@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Ben Segall <bsegall@google.com>
---
 kernel/sched/fair.c     | 33 ++++++++++++++++++++++++++++++++-
 kernel/sched/features.h |  2 ++
 2 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 373ff5f55884..880eadfac330 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6139,6 +6139,33 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 	rcu_read_unlock();
 }
 
+#ifdef CONFIG_NO_HZ_FULL
+/* called from pick_next_task_fair() */
+static void sched_fair_update_stop_tick(struct rq *rq, struct task_struct *p)
+{
+	struct cfs_rq *cfs_rq = task_cfs_rq(p);
+	int cpu = cpu_of(rq);
+
+	if (!sched_feat(HZ_BW) || !cfs_bandwidth_used())
+		return;
+
+	if (!tick_nohz_full_cpu(cpu))
+		return;
+
+	if (rq->nr_running != 1 || !sched_can_stop_tick(rq))
+		return;
+
+	/*
+	 *  We know there is only one task runnable and we've just picked it. The
+	 *  normal enqueue path will have cleared TICK_DEP_BIT_SCHED if we will
+	 *  be otherwise able to stop the tick. Just need to check if we are using
+	 *  bandwidth control.
+	 */
+	if (cfs_rq->runtime_enabled)
+		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
+}
+#endif
+
 #else /* CONFIG_CFS_BANDWIDTH */
 
 static inline bool cfs_bandwidth_used(void)
@@ -6181,9 +6208,12 @@ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
 static inline void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b) {}
 static inline void update_runtime_enabled(struct rq *rq) {}
 static inline void unthrottle_offline_cfs_rqs(struct rq *rq) {}
-
 #endif /* CONFIG_CFS_BANDWIDTH */
 
+#if !defined(CONFIG_CFS_BANDWIDTH) || !defined(CONFIG_NO_HZ_FULL)
+static inline void sched_fair_update_stop_tick(struct rq *rq, struct task_struct *p) {}
+#endif
+
 /**************************************************
  * CFS operations on tasks:
  */
@@ -8097,6 +8127,7 @@ done: __maybe_unused;
 		hrtick_start_fair(rq, p);
 
 	update_misfit_status(p, rq);
+	sched_fair_update_stop_tick(rq, p);
 
 	return p;
 
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index ee7f23c76bd3..6fdf1fdf6b17 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -101,3 +101,5 @@ SCHED_FEAT(LATENCY_WARN, false)
 
 SCHED_FEAT(ALT_PERIOD, true)
 SCHED_FEAT(BASE_SLICE, true)
+
+SCHED_FEAT(HZ_BW, false)
-- 
2.31.1


* Re: [PATCH] Sched/fair: Block nohz tick_stop when cfs bandwidth in use
  2023-06-22 13:27 [PATCH] Sched/fair: Block nohz tick_stop when cfs bandwidth in use Phil Auld
@ 2023-06-22 13:44 ` Phil Auld
  2023-06-22 14:22 ` Steven Rostedt
  2023-06-22 20:49 ` Benjamin Segall
  2 siblings, 0 replies; 9+ messages in thread
From: Phil Auld @ 2023-06-22 13:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Juri Lelli, Ingo Molnar, Daniel Bristot de Oliveira,
	Peter Zijlstra, Vincent Guittot, Dietmar Eggemann,
	Valentin Schneider, Ben Segall, Steven Rostedt, Mel Gorman

On Thu, Jun 22, 2023 at 09:27:51AM -0400 Phil Auld wrote:
> CFS bandwidth limits and NOHZ full don't play well together.  Tasks
> can easily run well past their quotas before a remote tick does
> accounting.  This leads to long, multi-period stalls before such
> tasks can run again. Currently, when presented with these conflicting
> requirements the scheduler is favoring nohz_full and letting the tick
> be stopped. However, nohz tick stopping is already best-effort, there
> are a number of conditions that can prevent it, whereas cfs runtime
> bandwidth is expected to be enforced.
> 
> Make the scheduler favor bandwidth over stopping the tick by setting
> TICK_DEP_BIT_SCHED when the only running task is a cfs task with
> runtime limit enabled.
> 
> Add sched_feat HZ_BW (off by default) to control this behavior.

This is instead of the previous HRTICK version. The problem addressed
is causing significant issues for containerized telco systems, so I'm
trying a different approach. Maybe it will get more traction.

This leaves the sched tick running, but won't require a full
pass through schedule().  As Ben pointed out, the HRTICK version
would basically fire every 5ms, so depending on your HZ value it
might not have bought much uninterrupted runtime anyway.
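
For anyone trying to reproduce this, the bandwidth limits involved are
the ordinary cfs quota knobs; something along these lines (group name
and numbers are just an example) caps a group at 20%:

```shell
# Example only: allow 20ms of runtime per 100ms period for group "telco".
# cgroup v1 interface:
echo 100000 > /sys/fs/cgroup/cpu/telco/cpu.cfs_period_us
echo 20000 > /sys/fs/cgroup/cpu/telco/cpu.cfs_quota_us

# cgroup v2 equivalent:
echo "20000 100000" > /sys/fs/cgroup/telco/cpu.max
```

A pinned spinner in such a group on a nohz_full CPU should show the
overrun: with the tick stopped it can run well past 20ms before the
remote tick does the accounting.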


Thanks for taking a look. 


Cheers,
Phil

> 
> Signed-off-by: Phil Auld <pauld@redhat.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Juri Lelli <juri.lelli@redhat.com>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Valentin Schneider <vschneid@redhat.com>
> Cc: Ben Segall <bsegall@google.com>
> ---
>  kernel/sched/fair.c     | 33 ++++++++++++++++++++++++++++++++-
>  kernel/sched/features.h |  2 ++
>  2 files changed, 34 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 373ff5f55884..880eadfac330 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6139,6 +6139,33 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
>  	rcu_read_unlock();
>  }
>  
> +#ifdef CONFIG_NO_HZ_FULL
> +/* called from pick_next_task_fair() */
> +static void sched_fair_update_stop_tick(struct rq *rq, struct task_struct *p)
> +{
> +	struct cfs_rq *cfs_rq = task_cfs_rq(p);
> +	int cpu = cpu_of(rq);
> +
> +	if (!sched_feat(HZ_BW) || !cfs_bandwidth_used())
> +		return;
> +
> +	if (!tick_nohz_full_cpu(cpu))
> +		return;
> +
> +	if (rq->nr_running != 1 || !sched_can_stop_tick(rq))
> +		return;
> +
> +	/*
> +	 *  We know there is only one task runnable and we've just picked it. The
> +	 *  normal enqueue path will have cleared TICK_DEP_BIT_SCHED if we will
> +	 *  be otherwise able to stop the tick. Just need to check if we are using
> +	 *  bandwidth control.
> +	 */
> +	if (cfs_rq->runtime_enabled)
> +		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
> +}
> +#endif
> +
>  #else /* CONFIG_CFS_BANDWIDTH */
>  
>  static inline bool cfs_bandwidth_used(void)
> @@ -6181,9 +6208,12 @@ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
>  static inline void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b) {}
>  static inline void update_runtime_enabled(struct rq *rq) {}
>  static inline void unthrottle_offline_cfs_rqs(struct rq *rq) {}
> -
>  #endif /* CONFIG_CFS_BANDWIDTH */
>  
> +#if !defined(CONFIG_CFS_BANDWIDTH) || !defined(CONFIG_NO_HZ_FULL)
> +static inline void sched_fair_update_stop_tick(struct rq *rq, struct task_struct *p) {}
> +#endif
> +
>  /**************************************************
>   * CFS operations on tasks:
>   */
> @@ -8097,6 +8127,7 @@ done: __maybe_unused;
>  		hrtick_start_fair(rq, p);
>  
>  	update_misfit_status(p, rq);
> +	sched_fair_update_stop_tick(rq, p);
>  
>  	return p;
>  
> diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> index ee7f23c76bd3..6fdf1fdf6b17 100644
> --- a/kernel/sched/features.h
> +++ b/kernel/sched/features.h
> @@ -101,3 +101,5 @@ SCHED_FEAT(LATENCY_WARN, false)
>  
>  SCHED_FEAT(ALT_PERIOD, true)
>  SCHED_FEAT(BASE_SLICE, true)
> +
> +SCHED_FEAT(HZ_BW, false)
> -- 
> 2.31.1
> 

-- 


* Re: [PATCH] Sched/fair: Block nohz tick_stop when cfs bandwidth in use
  2023-06-22 13:27 [PATCH] Sched/fair: Block nohz tick_stop when cfs bandwidth in use Phil Auld
  2023-06-22 13:44 ` Phil Auld
@ 2023-06-22 14:22 ` Steven Rostedt
  2023-06-22 15:44   ` Phil Auld
  2023-06-22 20:49 ` Benjamin Segall
  2 siblings, 1 reply; 9+ messages in thread
From: Steven Rostedt @ 2023-06-22 14:22 UTC (permalink / raw)
  To: Phil Auld
  Cc: linux-kernel, Juri Lelli, Ingo Molnar,
	Daniel Bristot de Oliveira, Peter Zijlstra, Vincent Guittot,
	Dietmar Eggemann, Valentin Schneider, Ben Segall, Mel Gorman

On Thu, 22 Jun 2023 09:27:51 -0400
Phil Auld <pauld@redhat.com> wrote:

> CFS bandwidth limits and NOHZ full don't play well together.  Tasks
> can easily run well past their quotas before a remote tick does
> accounting.  This leads to long, multi-period stalls before such
> tasks can run again. Currently, when presented with these conflicting
> requirements the scheduler is favoring nohz_full and letting the tick
> be stopped. However, nohz tick stopping is already best-effort, there
> are a number of conditions that can prevent it, whereas cfs runtime
> bandwidth is expected to be enforced.
> 
> Make the scheduler favor bandwidth over stopping the tick by setting
> TICK_DEP_BIT_SCHED when the only running task is a cfs task with
> runtime limit enabled.
> 
> Add sched_feat HZ_BW (off by default) to control this behavior.

So the tl;dr is: "If the current task has a bandwidth limit, do not disable
the tick"?

-- Steve

* Re: [PATCH] Sched/fair: Block nohz tick_stop when cfs bandwidth in use
  2023-06-22 14:22 ` Steven Rostedt
@ 2023-06-22 15:44   ` Phil Auld
  0 siblings, 0 replies; 9+ messages in thread
From: Phil Auld @ 2023-06-22 15:44 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, Juri Lelli, Ingo Molnar,
	Daniel Bristot de Oliveira, Peter Zijlstra, Vincent Guittot,
	Dietmar Eggemann, Valentin Schneider, Ben Segall, Mel Gorman

On Thu, Jun 22, 2023 at 10:22:16AM -0400 Steven Rostedt wrote:
> On Thu, 22 Jun 2023 09:27:51 -0400
> Phil Auld <pauld@redhat.com> wrote:
> 
> > CFS bandwidth limits and NOHZ full don't play well together.  Tasks
> > can easily run well past their quotas before a remote tick does
> > accounting.  This leads to long, multi-period stalls before such
> > tasks can run again. Currently, when presented with these conflicting
> > requirements the scheduler is favoring nohz_full and letting the tick
> > be stopped. However, nohz tick stopping is already best-effort, there
> > are a number of conditions that can prevent it, whereas cfs runtime
> > bandwidth is expected to be enforced.
> > 
> > Make the scheduler favor bandwidth over stopping the tick by setting
> > TICK_DEP_BIT_SCHED when the only running task is a cfs task with
> > runtime limit enabled.
> > 
> > Add sched_feat HZ_BW (off by default) to control this behavior.
> 
> So the tl;dr; is: "If the current task has a bandwidth limit, do not disable
> the tick" ?
>

Yes.   W/o the tick we can't reliably support/enforce the bandwidth limit.


Cheers,
Phil

> -- Steve
> 

-- 


* Re: [PATCH] Sched/fair: Block nohz tick_stop when cfs bandwidth in use
  2023-06-22 13:27 [PATCH] Sched/fair: Block nohz tick_stop when cfs bandwidth in use Phil Auld
  2023-06-22 13:44 ` Phil Auld
  2023-06-22 14:22 ` Steven Rostedt
@ 2023-06-22 20:49 ` Benjamin Segall
  2023-06-22 21:37   ` Phil Auld
  2 siblings, 1 reply; 9+ messages in thread
From: Benjamin Segall @ 2023-06-22 20:49 UTC (permalink / raw)
  To: Phil Auld
  Cc: linux-kernel, Juri Lelli, Ingo Molnar,
	Daniel Bristot de Oliveira, Peter Zijlstra, Vincent Guittot,
	Dietmar Eggemann, Valentin Schneider, Steven Rostedt, Mel Gorman

Phil Auld <pauld@redhat.com> writes:

> CFS bandwidth limits and NOHZ full don't play well together.  Tasks
> can easily run well past their quotas before a remote tick does
> accounting.  This leads to long, multi-period stalls before such
> tasks can run again. Currently, when presented with these conflicting
> requirements the scheduler is favoring nohz_full and letting the tick
> be stopped. However, nohz tick stopping is already best-effort, there
> are a number of conditions that can prevent it, whereas cfs runtime
> bandwidth is expected to be enforced.
>
> Make the scheduler favor bandwidth over stopping the tick by setting
> TICK_DEP_BIT_SCHED when the only running task is a cfs task with
> runtime limit enabled.
>
> Add sched_feat HZ_BW (off by default) to control this behavior.
>
> Signed-off-by: Phil Auld <pauld@redhat.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Juri Lelli <juri.lelli@redhat.com>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Valentin Schneider <vschneid@redhat.com>
> Cc: Ben Segall <bsegall@google.com>
> ---
>  kernel/sched/fair.c     | 33 ++++++++++++++++++++++++++++++++-
>  kernel/sched/features.h |  2 ++
>  2 files changed, 34 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 373ff5f55884..880eadfac330 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6139,6 +6139,33 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
>  	rcu_read_unlock();
>  }
>  
> +#ifdef CONFIG_NO_HZ_FULL
> +/* called from pick_next_task_fair() */
> +static void sched_fair_update_stop_tick(struct rq *rq, struct task_struct *p)
> +{
> +	struct cfs_rq *cfs_rq = task_cfs_rq(p);
> +	int cpu = cpu_of(rq);
> +
> +	if (!sched_feat(HZ_BW) || !cfs_bandwidth_used())
> +		return;
> +
> +	if (!tick_nohz_full_cpu(cpu))
> +		return;
> +
> +	if (rq->nr_running != 1 || !sched_can_stop_tick(rq))
> +		return;
> +
> +	/*
> +	 *  We know there is only one task runnable and we've just picked it. The
> +	 *  normal enqueue path will have cleared TICK_DEP_BIT_SCHED if we will
> +	 *  be otherwise able to stop the tick. Just need to check if we are using
> +	 *  bandwidth control.
> +	 */
> +	if (cfs_rq->runtime_enabled)
> +		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
> +}
> +#endif

So from a CFS_BANDWIDTH pov, runtime_enabled && nr_running == 1 seems
fine. But working around sched_can_stop_tick() instead of with it seems
sketchy in general, and in an edge case like "migrate a task onto the
cpu and then off again" you'd get sched_update_tick_dependency() clearing
the TICK_DEP_BIT with no subsequent call to PNT (i.e. a task wakes up
onto this cpu without preempting, then another cpu goes idle and pulls
it, causing this cpu to go into nohz_full).
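
To make that edge case concrete, here is a toy model of the
tick-dependency bookkeeping (plain Python; every name is a simplified
stand-in for the kernel logic, not a real API):

```python
# Toy model of the race on one nohz_full CPU. TICK_DEP_BIT_SCHED,
# sched_update_tick_dependency() and the PNT hook are all simplified.

class Cpu:
    def __init__(self):
        self.nr_running = 0
        self.tick_dep_sched = False   # stands in for TICK_DEP_BIT_SCHED
        self.curr_bw_enabled = False  # current task under cfs bandwidth

    def sched_can_stop_tick(self):
        return self.nr_running <= 1

    def update_tick_dependency(self):
        # Mirrors the enqueue/dequeue path: set or clear the bit
        # purely from nr_running.
        self.tick_dep_sched = not self.sched_can_stop_tick()

    def pick_next_task(self):
        # Mirrors the patch's sched_fair_update_stop_tick() hook in PNT.
        if self.nr_running == 1 and self.curr_bw_enabled:
            self.tick_dep_sched = True

cpu = Cpu()
# Bandwidth-limited task A is picked; the PNT hook sets the dep bit.
cpu.nr_running = 1
cpu.curr_bw_enabled = True
cpu.pick_next_task()
assert cpu.tick_dep_sched

# Task B wakes onto this CPU without preempting A...
cpu.nr_running = 2
cpu.update_tick_dependency()
# ...then another CPU goes idle and pulls B. The dequeue path clears
# the bit, and nothing re-runs pick_next_task() for A.
cpu.nr_running = 1
cpu.update_tick_dependency()
print(cpu.tick_dep_sched)  # -> False: tick can stop despite A's limit
```

The model ends with the bit cleared while the lone runnable task is
still bandwidth-limited, which is exactly the hole described above.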

* Re: [PATCH] Sched/fair: Block nohz tick_stop when cfs bandwidth in use
  2023-06-22 20:49 ` Benjamin Segall
@ 2023-06-22 21:37   ` Phil Auld
  2023-06-23 13:08     ` Phil Auld
  0 siblings, 1 reply; 9+ messages in thread
From: Phil Auld @ 2023-06-22 21:37 UTC (permalink / raw)
  To: Benjamin Segall
  Cc: linux-kernel, Juri Lelli, Ingo Molnar,
	Daniel Bristot de Oliveira, Peter Zijlstra, Vincent Guittot,
	Dietmar Eggemann, Valentin Schneider, Steven Rostedt, Mel Gorman

On Thu, Jun 22, 2023 at 01:49:52PM -0700 Benjamin Segall wrote:
> Phil Auld <pauld@redhat.com> writes:
> 
> > CFS bandwidth limits and NOHZ full don't play well together.  Tasks
> > can easily run well past their quotas before a remote tick does
> > accounting.  This leads to long, multi-period stalls before such
> > tasks can run again. Currently, when presented with these conflicting
> > requirements the scheduler is favoring nohz_full and letting the tick
> > be stopped. However, nohz tick stopping is already best-effort, there
> > are a number of conditions that can prevent it, whereas cfs runtime
> > bandwidth is expected to be enforced.
> >
> > Make the scheduler favor bandwidth over stopping the tick by setting
> > TICK_DEP_BIT_SCHED when the only running task is a cfs task with
> > runtime limit enabled.
> >
> > Add sched_feat HZ_BW (off by default) to control this behavior.
> >
> > Signed-off-by: Phil Auld <pauld@redhat.com>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Vincent Guittot <vincent.guittot@linaro.org>
> > Cc: Juri Lelli <juri.lelli@redhat.com>
> > Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> > Cc: Valentin Schneider <vschneid@redhat.com>
> > Cc: Ben Segall <bsegall@google.com>
> > ---
> >  kernel/sched/fair.c     | 33 ++++++++++++++++++++++++++++++++-
> >  kernel/sched/features.h |  2 ++
> >  2 files changed, 34 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 373ff5f55884..880eadfac330 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6139,6 +6139,33 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
> >  	rcu_read_unlock();
> >  }
> >  
> > +#ifdef CONFIG_NO_HZ_FULL
> > +/* called from pick_next_task_fair() */
> > +static void sched_fair_update_stop_tick(struct rq *rq, struct task_struct *p)
> > +{
> > +	struct cfs_rq *cfs_rq = task_cfs_rq(p);
> > +	int cpu = cpu_of(rq);
> > +
> > +	if (!sched_feat(HZ_BW) || !cfs_bandwidth_used())
> > +		return;
> > +
> > +	if (!tick_nohz_full_cpu(cpu))
> > +		return;
> > +
> > +	if (rq->nr_running != 1 || !sched_can_stop_tick(rq))
> > +		return;
> > +
> > +	/*
> > +	 *  We know there is only one task runnable and we've just picked it. The
> > +	 *  normal enqueue path will have cleared TICK_DEP_BIT_SCHED if we will
> > +	 *  be otherwise able to stop the tick. Just need to check if we are using
> > +	 *  bandwidth control.
> > +	 */
> > +	if (cfs_rq->runtime_enabled)
> > +		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
> > +}
> > +#endif
> 
> So from a CFS_BANDWIDTH pov runtime_enabled && nr_running == 1 seems
> fine. But working around sched_can_stop_tick instead of with it seems
> sketchy in general, and in an edge case like "migrate a task onto the
> cpu and then off again" you'd get sched_update_tick_dependency resetting
> the TICK_DEP_BIT and then not call PNT (ie a task wakes up onto this cpu
> without preempting, and then another cpu goes idle and pulls it, causing
> this cpu to go into nohz_full).
> 

The information to make these tests is not available in sched_can_stop_tick.
I did start there. When that is called, and we are likely to go nohz_full,
curr is null so it's hard to find the right cfs_rq to make that
runtime_enabled test against.  We could, maybe, plumb the task being enqueued
in but it would not be valid for the dequeue path and would be a bit messy.

But yes, I suppose you could end up in a state that is just as bad as today.

Maybe I could add a redundant check in sched_can_stop_tick for when
nr_running == 1 and curr is not null and make sure the bit does not get
cleared. I'll look into that.


Thanks,
Phil

-- 


* Re: [PATCH] Sched/fair: Block nohz tick_stop when cfs bandwidth in use
  2023-06-22 21:37   ` Phil Auld
@ 2023-06-23 13:08     ` Phil Auld
  2023-06-23 18:59       ` Benjamin Segall
  0 siblings, 1 reply; 9+ messages in thread
From: Phil Auld @ 2023-06-23 13:08 UTC (permalink / raw)
  To: Benjamin Segall
  Cc: linux-kernel, Juri Lelli, Ingo Molnar,
	Daniel Bristot de Oliveira, Peter Zijlstra, Vincent Guittot,
	Dietmar Eggemann, Valentin Schneider, Steven Rostedt, Mel Gorman

On Thu, Jun 22, 2023 at 05:37:30PM -0400 Phil Auld wrote:
> On Thu, Jun 22, 2023 at 01:49:52PM -0700 Benjamin Segall wrote:
> > Phil Auld <pauld@redhat.com> writes:
> > 
> > > CFS bandwidth limits and NOHZ full don't play well together.  Tasks
> > > can easily run well past their quotas before a remote tick does
> > > accounting.  This leads to long, multi-period stalls before such
> > > tasks can run again. Currently, when presented with these conflicting
> > > requirements the scheduler is favoring nohz_full and letting the tick
> > > be stopped. However, nohz tick stopping is already best-effort, there
> > > are a number of conditions that can prevent it, whereas cfs runtime
> > > bandwidth is expected to be enforced.
> > >
> > > Make the scheduler favor bandwidth over stopping the tick by setting
> > > TICK_DEP_BIT_SCHED when the only running task is a cfs task with
> > > runtime limit enabled.
> > >
> > > Add sched_feat HZ_BW (off by default) to control this behavior.
> > >
> > > Signed-off-by: Phil Auld <pauld@redhat.com>
> > > Cc: Ingo Molnar <mingo@redhat.com>
> > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > Cc: Vincent Guittot <vincent.guittot@linaro.org>
> > > Cc: Juri Lelli <juri.lelli@redhat.com>
> > > Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> > > Cc: Valentin Schneider <vschneid@redhat.com>
> > > Cc: Ben Segall <bsegall@google.com>
> > > ---
> > >  kernel/sched/fair.c     | 33 ++++++++++++++++++++++++++++++++-
> > >  kernel/sched/features.h |  2 ++
> > >  2 files changed, 34 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index 373ff5f55884..880eadfac330 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -6139,6 +6139,33 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
> > >  	rcu_read_unlock();
> > >  }
> > >  
> > > +#ifdef CONFIG_NO_HZ_FULL
> > > +/* called from pick_next_task_fair() */
> > > +static void sched_fair_update_stop_tick(struct rq *rq, struct task_struct *p)
> > > +{
> > > +	struct cfs_rq *cfs_rq = task_cfs_rq(p);
> > > +	int cpu = cpu_of(rq);
> > > +
> > > +	if (!sched_feat(HZ_BW) || !cfs_bandwidth_used())
> > > +		return;
> > > +
> > > +	if (!tick_nohz_full_cpu(cpu))
> > > +		return;
> > > +
> > > +	if (rq->nr_running != 1 || !sched_can_stop_tick(rq))
> > > +		return;
> > > +
> > > +	/*
> > > +	 *  We know there is only one task runnable and we've just picked it. The
> > > +	 *  normal enqueue path will have cleared TICK_DEP_BIT_SCHED if we will
> > > +	 *  be otherwise able to stop the tick. Just need to check if we are using
> > > +	 *  bandwidth control.
> > > +	 */
> > > +	if (cfs_rq->runtime_enabled)
> > > +		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
> > > +}
> > > +#endif
> > 
> > So from a CFS_BANDWIDTH pov runtime_enabled && nr_running == 1 seems
> > fine. But working around sched_can_stop_tick instead of with it seems
> > sketchy in general, and in an edge case like "migrate a task onto the
> > cpu and then off again" you'd get sched_update_tick_dependency resetting
> > the TICK_DEP_BIT and then not call PNT (ie a task wakes up onto this cpu
> > without preempting, and then another cpu goes idle and pulls it, causing
> > this cpu to go into nohz_full).
> > 
> 
> The information to make these tests is not available in sched_can_stop_tick.
> I did start there. When that is called, and we are likely to go nohz_full,
> curr is null so it's hard to find the right cfs_rq to make that
> runtime_enabled test against.  We could, maybe, plumb the task being enqueued
> in but it would not be valid for the dequeue path and would be a bit messy.
>

Sorry, misspoke... rq->curr == rq->idle, not NULL. But still we don't have
access to the task and its cfs_rq, which will have runtime_enabled set.

> But yes, I suppose you could end up in a state that is just as bad as today.
> 
> Maybe I could add a redundant check in sched_can_stop_tick for when
> nr_running == 1 and curr is not null and make sure the bit does not get
> cleared. I'll look into that.
> 
> 
> Thanks,
> Phil
> 
> -- 
> 

-- 


* Re: [PATCH] Sched/fair: Block nohz tick_stop when cfs bandwidth in use
  2023-06-23 13:08     ` Phil Auld
@ 2023-06-23 18:59       ` Benjamin Segall
  2023-06-23 19:59         ` Phil Auld
  0 siblings, 1 reply; 9+ messages in thread
From: Benjamin Segall @ 2023-06-23 18:59 UTC (permalink / raw)
  To: Phil Auld
  Cc: linux-kernel, Juri Lelli, Ingo Molnar,
	Daniel Bristot de Oliveira, Peter Zijlstra, Vincent Guittot,
	Dietmar Eggemann, Valentin Schneider, Steven Rostedt, Mel Gorman

Phil Auld <pauld@redhat.com> writes:

> On Thu, Jun 22, 2023 at 05:37:30PM -0400 Phil Auld wrote:
>> On Thu, Jun 22, 2023 at 01:49:52PM -0700 Benjamin Segall wrote:
>> > Phil Auld <pauld@redhat.com> writes:
>> > 
>> > > CFS bandwidth limits and NOHZ full don't play well together.  Tasks
>> > > can easily run well past their quotas before a remote tick does
>> > > accounting.  This leads to long, multi-period stalls before such
>> > > tasks can run again. Currently, when presented with these conflicting
>> > > requirements the scheduler is favoring nohz_full and letting the tick
>> > > be stopped. However, nohz tick stopping is already best-effort, there
>> > > are a number of conditions that can prevent it, whereas cfs runtime
>> > > bandwidth is expected to be enforced.
>> > >
>> > > Make the scheduler favor bandwidth over stopping the tick by setting
>> > > TICK_DEP_BIT_SCHED when the only running task is a cfs task with
>> > > runtime limit enabled.
>> > >
>> > > Add sched_feat HZ_BW (off by default) to control this behavior.
>> > >
>> > > Signed-off-by: Phil Auld <pauld@redhat.com>
>> > > Cc: Ingo Molnar <mingo@redhat.com>
>> > > Cc: Peter Zijlstra <peterz@infradead.org>
>> > > Cc: Vincent Guittot <vincent.guittot@linaro.org>
>> > > Cc: Juri Lelli <juri.lelli@redhat.com>
>> > > Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
>> > > Cc: Valentin Schneider <vschneid@redhat.com>
>> > > Cc: Ben Segall <bsegall@google.com>
>> > > ---
>> > >  kernel/sched/fair.c     | 33 ++++++++++++++++++++++++++++++++-
>> > >  kernel/sched/features.h |  2 ++
>> > >  2 files changed, 34 insertions(+), 1 deletion(-)
>> > >
>> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> > > index 373ff5f55884..880eadfac330 100644
>> > > --- a/kernel/sched/fair.c
>> > > +++ b/kernel/sched/fair.c
>> > > @@ -6139,6 +6139,33 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
>> > >  	rcu_read_unlock();
>> > >  }
>> > >  
>> > > +#ifdef CONFIG_NO_HZ_FULL
>> > > +/* called from pick_next_task_fair() */
>> > > +static void sched_fair_update_stop_tick(struct rq *rq, struct task_struct *p)
>> > > +{
>> > > +	struct cfs_rq *cfs_rq = task_cfs_rq(p);
>> > > +	int cpu = cpu_of(rq);
>> > > +
>> > > +	if (!sched_feat(HZ_BW) || !cfs_bandwidth_used())
>> > > +		return;
>> > > +
>> > > +	if (!tick_nohz_full_cpu(cpu))
>> > > +		return;
>> > > +
>> > > +	if (rq->nr_running != 1 || !sched_can_stop_tick(rq))
>> > > +		return;
>> > > +
>> > > +	/*
>> > > +	 *  We know there is only one task runnable and we've just picked it. The
>> > > +	 *  normal enqueue path will have cleared TICK_DEP_BIT_SCHED if we will
>> > > +	 *  be otherwise able to stop the tick. Just need to check if we are using
>> > > +	 *  bandwidth control.
>> > > +	 */
>> > > +	if (cfs_rq->runtime_enabled)
>> > > +		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
>> > > +}
>> > > +#endif
>> > 
>> > So from a CFS_BANDWIDTH pov runtime_enabled && nr_running == 1 seems
>> > fine. But working around sched_can_stop_tick instead of with it seems
>> > sketchy in general, and in an edge case like "migrate a task onto the
>> > cpu and then off again" you'd get sched_update_tick_dependency resetting
>> > the TICK_DEP_BIT and then not call PNT (ie a task wakes up onto this cpu
>> > without preempting, and then another cpu goes idle and pulls it, causing
>> > this cpu to go into nohz_full).
>> > 
>> 
>> The information to make these tests is not available in sched_can_stop_tick.
>> I did start there. When that is called, and we are likely to go nohz_full,
>> curr is null so it's hard to find the right cfs_rq to make that
>> runtime_enabled test against.  We could, maybe, plumb the task being enqueued
>> in but it would not be valid for the dequeue path and would be a bit messy.
>>
>
> Sorry, misspoke... rq->curr == rq->idle, not NULL. But still we don't have
> access to the task and its cfs_rq which will have runtime_enabled set.
>

That is unfortunate. I suppose then you'd wind up needing both this
extra bit in PNT to handle the switch into nr_running == 1 territory,
and a "HZ_BW && nr_running == 1 && curr is fair && curr->on_rq &&
curr->cfs_rq->runtime_enabled" check in sched_can_stop_tick to catch
edge cases. (I think that would be sufficient, if an annoyingly long set
of conditionals)
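
Spelled out, that long conditional might look roughly like this inside
sched_can_stop_tick() (a sketch only, untested; helper names as they
appear in kernel/sched, not a finished patch):

```c
/*
 * Sketch: bail out of stopping the tick when the lone runnable
 * task is a fair task under cfs bandwidth control. Untested.
 */
#ifdef CONFIG_CFS_BANDWIDTH
	if (sched_feat(HZ_BW) && rq->nr_running == 1 &&
	    rq->curr != rq->idle && fair_policy(rq->curr->policy) &&
	    rq->curr->on_rq && task_cfs_rq(rq->curr)->runtime_enabled)
		return false;
#endif
```

This would catch the dequeue-path edge case, while the PNT hook still
handles the switch into nr_running == 1 territory.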

* Re: [PATCH] Sched/fair: Block nohz tick_stop when cfs bandwidth in use
  2023-06-23 18:59       ` Benjamin Segall
@ 2023-06-23 19:59         ` Phil Auld
  0 siblings, 0 replies; 9+ messages in thread
From: Phil Auld @ 2023-06-23 19:59 UTC (permalink / raw)
  To: Benjamin Segall
  Cc: linux-kernel, Juri Lelli, Ingo Molnar,
	Daniel Bristot de Oliveira, Peter Zijlstra, Vincent Guittot,
	Dietmar Eggemann, Valentin Schneider, Steven Rostedt, Mel Gorman

On Fri, Jun 23, 2023 at 11:59:09AM -0700 Benjamin Segall wrote:
> Phil Auld <pauld@redhat.com> writes:
> 
> > On Thu, Jun 22, 2023 at 05:37:30PM -0400 Phil Auld wrote:
> >> On Thu, Jun 22, 2023 at 01:49:52PM -0700 Benjamin Segall wrote:
> >> > Phil Auld <pauld@redhat.com> writes:
> >> > 
> >> > > CFS bandwidth limits and NOHZ full don't play well together.  Tasks
> >> > > can easily run well past their quotas before a remote tick does
> >> > > accounting.  This leads to long, multi-period stalls before such
> >> > > tasks can run again. Currently, when presented with these conflicting
> >> > > requirements the scheduler is favoring nohz_full and letting the tick
> >> > > be stopped. However, nohz tick stopping is already best-effort, there
> >> > > are a number of conditions that can prevent it, whereas cfs runtime
> >> > > bandwidth is expected to be enforced.
> >> > >
> >> > > Make the scheduler favor bandwidth over stopping the tick by setting
> >> > > TICK_DEP_BIT_SCHED when the only running task is a cfs task with
> >> > > runtime limit enabled.
> >> > >
> >> > > Add sched_feat HZ_BW (off by default) to control this behavior.
> >> > >
> >> > > Signed-off-by: Phil Auld <pauld@redhat.com>
> >> > > Cc: Ingo Molnar <mingo@redhat.com>
> >> > > Cc: Peter Zijlstra <peterz@infradead.org>
> >> > > Cc: Vincent Guittot <vincent.guittot@linaro.org>
> >> > > Cc: Juri Lelli <juri.lelli@redhat.com>
> >> > > Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> >> > > Cc: Valentin Schneider <vschneid@redhat.com>
> >> > > Cc: Ben Segall <bsegall@google.com>
> >> > > ---
> >> > >  kernel/sched/fair.c     | 33 ++++++++++++++++++++++++++++++++-
> >> > >  kernel/sched/features.h |  2 ++
> >> > >  2 files changed, 34 insertions(+), 1 deletion(-)
> >> > >
> >> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >> > > index 373ff5f55884..880eadfac330 100644
> >> > > --- a/kernel/sched/fair.c
> >> > > +++ b/kernel/sched/fair.c
> >> > > @@ -6139,6 +6139,33 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
> >> > >  	rcu_read_unlock();
> >> > >  }
> >> > >  
> >> > > +#ifdef CONFIG_NO_HZ_FULL
> >> > > +/* called from pick_next_task_fair() */
> >> > > +static void sched_fair_update_stop_tick(struct rq *rq, struct task_struct *p)
> >> > > +{
> >> > > +	struct cfs_rq *cfs_rq = task_cfs_rq(p);
> >> > > +	int cpu = cpu_of(rq);
> >> > > +
> >> > > +	if (!sched_feat(HZ_BW) || !cfs_bandwidth_used())
> >> > > +		return;
> >> > > +
> >> > > +	if (!tick_nohz_full_cpu(cpu))
> >> > > +		return;
> >> > > +
> >> > > +	if (rq->nr_running != 1 || !sched_can_stop_tick(rq))
> >> > > +		return;
> >> > > +
> >> > > +	/*
> >> > > +	 *  We know there is only one task runnable and we've just picked it. The
> >> > > +	 *  normal enqueue path will have cleared TICK_DEP_BIT_SCHED if we will
> >> > > +	 *  be otherwise able to stop the tick. Just need to check if we are using
> >> > > +	 *  bandwidth control.
> >> > > +	 */
> >> > > +	if (cfs_rq->runtime_enabled)
> >> > > +		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
> >> > > +}
> >> > > +#endif
> >> > 
> >> > So from a CFS_BANDWIDTH pov runtime_enabled && nr_running == 1 seems
> >> > fine. But working around sched_can_stop_tick instead of with it seems
> >> > sketchy in general, and in an edge case like "migrate a task onto the
> >> > cpu and then off again" you'd get sched_update_tick_dependency resetting
> >> > the TICK_DEP_BIT and then not call PNT (ie a task wakes up onto this cpu
> >> > without preempting, and then another cpu goes idle and pulls it, causing
> >> > this cpu to go into nohz_full).
> >> > 
> >> 
> >> The information to make these tests is not available in sched_can_stop_tick.
> >> I did start there. When that is called, and we are likely to go nohz_full,
> >> curr is null so it's hard to find the right cfs_rq to make that
> >> runtime_enabled test against.  We could, maybe, plumb the task being enqueued
> >> in but it would not be valid for the dequeue path and would be a bit messy.
> >>
> >
> > Sorry, misspoke... rq->curr == rq->idle, not NULL. But still we don't have
> > access to the task and its cfs_rq which will have runtime_enabled set.
> >
> 
> That is unfortunate. I suppose then you'd wind up needing both this
> extra bit in PNT to handle the switch into nr_running == 1 territory,
> and a "HZ_BW && nr_running == 1 && curr is fair && curr->on_rq &&
> curr->cfs_rq->runtime_enabled" check in sched_can_stop_tick to catch
> edge cases. (I think that would be sufficient, if an annoyingly long set
> of conditionals)
>

Right. That's more or less what the version I'm testing now does.

Thanks again.


Cheers,
Phil

-- 

