[PATCH] sched/fair: Reorder throttle_cfs_rq() path
From: Peng Wang @ 2020-11-10  2:11 UTC
  To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, bristot
  Cc: linux-kernel

Just as commit 39f23ce07b93 ("sched/fair: Fix unthrottle_cfs_rq() for
leaf_cfs_rq list") restructured unthrottle_cfs_rq(), make
throttle_cfs_rq() follow the same two-loop pattern as
dequeue_task_fair().

There are no functional changes.

Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
---
 kernel/sched/fair.c | 34 +++++++++++++++++++++++-----------
 1 file changed, 23 insertions(+), 11 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 290f9e3..27a69af 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4779,25 +4779,37 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
 		struct cfs_rq *qcfs_rq = cfs_rq_of(se);
 		/* throttled entity or throttle-on-deactivate */
 		if (!se->on_rq)
-			break;
+			goto done;
 
-		if (dequeue) {
-			dequeue_entity(qcfs_rq, se, DEQUEUE_SLEEP);
-		} else {
-			update_load_avg(qcfs_rq, se, 0);
-			se_update_runnable(se);
-		}
+		dequeue_entity(qcfs_rq, se, DEQUEUE_SLEEP);
 
 		qcfs_rq->h_nr_running -= task_delta;
 		qcfs_rq->idle_h_nr_running -= idle_task_delta;
 
-		if (qcfs_rq->load.weight)
-			dequeue = 0;
+		if (qcfs_rq->load.weight) {
+			/* Avoid re-evaluating load for this entity: */
+			se = parent_entity(se);
+			break;
+		}
 	}
 
-	if (!se)
-		sub_nr_running(rq, task_delta);
+	for_each_sched_entity(se) {
+		struct cfs_rq *qcfs_rq = cfs_rq_of(se);
+		/* throttled entity or throttle-on-deactivate */
+		if (!se->on_rq)
+			goto done;
+
+		update_load_avg(qcfs_rq, se, 0);
+		se_update_runnable(se);
 
+		qcfs_rq->h_nr_running -= task_delta;
+		qcfs_rq->idle_h_nr_running -= idle_task_delta;
+	}
+
+	/* At this point se is NULL and we are at root level*/
+	sub_nr_running(rq, task_delta);
+
+done:
 	/*
 	 * Note: distribution will already see us throttled via the
 	 * throttled-list.  rq->lock protects completion.
-- 
2.9.5
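
To see the bookkeeping outside of kernel context, here is a small userspace
model of the counter updates on a three-level hierarchy. Everything in it is
invented for illustration (struct toy_q, walk_old(), walk_new(), other_weight
and so on); it only mirrors the shape of the code above and leaves out
idle_h_nr_running, the PELT updates and the !se->on_rq early exit. walk_old()
follows the pre-patch single loop driven by a dequeue flag, walk_new() follows
the post-patch split into a dequeue loop and a stats-only loop, and the
assertions check that both leave the toy hierarchy in the same state, which is
the "no functional changes" claim in miniature.

/*
 * Toy userspace model of the change above.  All names here are made up
 * for illustration; this is not kernel code.  q[0] is the queue the
 * throttled group entity sits on, q[LEVELS - 1] is the root.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define LEVELS 3

struct toy_q {
        int other_weight;       /* runnable weight besides the throttled group */
        int h_nr_running;       /* hierarchical task count                     */
        int queued;             /* 1 while the group entity is enqueued here   */
};

struct toy_state {
        struct toy_q q[LEVELS];
        int rq_nr_running;      /* per-CPU count, adjusted by ~sub_nr_running() */
};

/* Pre-patch shape: one loop, behaviour switched by a "dequeue" flag. */
static void walk_old(struct toy_state *s, int task_delta)
{
        bool dequeue = true;
        int i;

        for (i = 0; i < LEVELS; i++) {
                struct toy_q *q = &s->q[i];

                if (dequeue)
                        q->queued = 0;          /* ~ dequeue_entity()          */
                /* else: only ~ update_load_avg()/se_update_runnable()         */

                q->h_nr_running -= task_delta;

                if (q->other_weight)            /* ~ qcfs_rq->load.weight      */
                        dequeue = false;
        }

        s->rq_nr_running -= task_delta;         /* ~ sub_nr_running() at root  */
}

/* Post-patch shape: a dequeue loop that breaks early, then a stats-only loop. */
static void walk_new(struct toy_state *s, int task_delta)
{
        int i;

        for (i = 0; i < LEVELS; i++) {
                struct toy_q *q = &s->q[i];

                q->queued = 0;                  /* ~ dequeue_entity()          */
                q->h_nr_running -= task_delta;

                if (q->other_weight) {
                        i++;                    /* ~ se = parent_entity(se)    */
                        break;
                }
        }

        for (; i < LEVELS; i++)                 /* ~ the new update-only loop  */
                s->q[i].h_nr_running -= task_delta;

        s->rq_nr_running -= task_delta;         /* ~ sub_nr_running() at root  */
}

int main(void)
{
        /* A group holding 3 tasks; only the root has other runnable weight.  */
        struct toy_state a = {
                .q = { { 0, 3, 1 }, { 0, 3, 1 }, { 1024, 5, 1 } },
                .rq_nr_running = 5,
        };
        struct toy_state b = a;
        int i;

        walk_old(&a, 3);
        walk_new(&b, 3);

        for (i = 0; i < LEVELS; i++) {
                assert(a.q[i].queued == b.q[i].queued);
                assert(a.q[i].h_nr_running == b.q[i].h_nr_running);
        }
        assert(a.rq_nr_running == b.rq_nr_running);

        printf("both walks agree: root h_nr_running=%d, rq nr_running=%d\n",
               a.q[LEVELS - 1].h_nr_running, a.rq_nr_running);
        return 0;
}

The point of the split, as in dequeue_task_fair(), is that once some ancestor
cfs_rq still has other runnable weight, nothing above it needs to be dequeued;
the remaining levels only need their load statistics and hierarchical counters
refreshed, and sub_nr_running() runs only once the walk has reached the root
(se == NULL).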



Re: [PATCH] sched/fair: Reorder throttle_cfs_rq() path
From: Benjamin Segall @ 2020-11-11 20:27 UTC
  To: Peng Wang
  Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bristot, linux-kernel

Peng Wang <rocking@linux.alibaba.com> writes:

> Just as commit 39f23ce07b93 ("sched/fair: Fix unthrottle_cfs_rq() for
> leaf_cfs_rq list") did for unthrottle_cfs_rq(), restructure
> throttle_cfs_rq() to follow the same two-loop pattern as
> dequeue_task_fair().
>
> There are no functional changes.

It's generally a bit more hassle and less clear, but the parallel to
dequeue_task_fair probably makes up for it.

Reviewed-by: Ben Segall <bsegall@google.com>

>
> Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
> ---
>  kernel/sched/fair.c | 34 +++++++++++++++++++++++-----------
>  1 file changed, 23 insertions(+), 11 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e3..27a69af 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4779,25 +4779,37 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
>  		struct cfs_rq *qcfs_rq = cfs_rq_of(se);
>  		/* throttled entity or throttle-on-deactivate */
>  		if (!se->on_rq)
> -			break;
> +			goto done;
>  
> -		if (dequeue) {
> -			dequeue_entity(qcfs_rq, se, DEQUEUE_SLEEP);
> -		} else {
> -			update_load_avg(qcfs_rq, se, 0);
> -			se_update_runnable(se);
> -		}
> +		dequeue_entity(qcfs_rq, se, DEQUEUE_SLEEP);
>  
>  		qcfs_rq->h_nr_running -= task_delta;
>  		qcfs_rq->idle_h_nr_running -= idle_task_delta;
>  
> -		if (qcfs_rq->load.weight)
> -			dequeue = 0;
> +		if (qcfs_rq->load.weight) {
> +			/* Avoid re-evaluating load for this entity: */
> +			se = parent_entity(se);
> +			break;
> +		}
>  	}
>  
> -	if (!se)
> -		sub_nr_running(rq, task_delta);
> +	for_each_sched_entity(se) {
> +		struct cfs_rq *qcfs_rq = cfs_rq_of(se);
> +		/* throttled entity or throttle-on-deactivate */
> +		if (!se->on_rq)
> +			goto done;
> +
> +		update_load_avg(qcfs_rq, se, 0);
> +		se_update_runnable(se);
>  
> +		qcfs_rq->h_nr_running -= task_delta;
> +		qcfs_rq->idle_h_nr_running -= idle_task_delta;
> +	}
> +
> +	/* At this point se is NULL and we are at root level*/
> +	sub_nr_running(rq, task_delta);
> +
> +done:
>  	/*
>  	 * Note: distribution will already see us throttled via the
>  	 * throttled-list.  rq->lock protects completion.

