* [PATCH v3] sched/fair: enqueue_task_fair optimization
@ 2020-05-13 13:55 Vincent Guittot
       [not found] ` <BL0PR14MB3779D98D96FA0D30B968B4519ABF0@BL0PR14MB3779.namprd14.prod.outlook.com>
  2020-05-19 18:44 ` [tip: sched/core] sched/fair: Optimize enqueue_task_fair() tip-bot2 for Vincent Guittot
  0 siblings, 2 replies; 3+ messages in thread
From: Vincent Guittot @ 2020-05-13 13:55 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, linux-kernel
  Cc: pauld, ouwen210, pkondeti, Vincent Guittot

enqueue_task_fair() jumps to the enqueue_throttle label when cfs_rq_of(se) is
throttled, which means that se can't be NULL in that case, so we can move
the label after the if (!se) statement. Furthermore, the latter can be
removed because se is always NULL when reaching this point.
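A minimal standalone sketch of the control-flow change, using hypothetical simplified types and names (struct entity, enqueue(), nr_running) rather than the real scheduler code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the scheduler types -- not the real ones. */
struct entity {
	struct entity *parent;
	bool throttled;
};

static int nr_running;

/*
 * Sketch of the patched flow: the throttled path jumps straight to
 * enqueue_throttle, past the accounting.  When the loop completes
 * normally, se is always NULL, so the old "if (!se)" guard around
 * the accounting is redundant and has been dropped.
 */
static void enqueue(struct entity *se)
{
	for (; se; se = se->parent) {
		if (se->throttled)
			goto enqueue_throttle;
	}

	/* At this point se is NULL and we are at root level */
	nr_running++;

enqueue_throttle:
	/* bandwidth bookkeeping would go here in the real code */
	;
}
```

The point of the patch is exactly this shape: since the only way to reach the label with a non-NULL se is the throttled goto, moving the label below the accounting lets the NULL check disappear.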

Reviewed-by: Phil Auld <pauld@redhat.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---

v3 changes:
  - updated commit message 
  - removed an extra }

 kernel/sched/fair.c | 39 +++++++++++++++++++--------------------
 1 file changed, 19 insertions(+), 20 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9a58874ef104..4e586863827b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5512,28 +5512,27 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
                        list_add_leaf_cfs_rq(cfs_rq);
 	}
 
-enqueue_throttle:
-	if (!se) {
-		add_nr_running(rq, 1);
-		/*
-		 * Since new tasks are assigned an initial util_avg equal to
-		 * half of the spare capacity of their CPU, tiny tasks have the
-		 * ability to cross the overutilized threshold, which will
-		 * result in the load balancer ruining all the task placement
-		 * done by EAS. As a way to mitigate that effect, do not account
-		 * for the first enqueue operation of new tasks during the
-		 * overutilized flag detection.
-		 *
-		 * A better way of solving this problem would be to wait for
-		 * the PELT signals of tasks to converge before taking them
-		 * into account, but that is not straightforward to implement,
-		 * and the following generally works well enough in practice.
-		 */
-		if (flags & ENQUEUE_WAKEUP)
-			update_overutilized_status(rq);
+	/* At this point se is NULL and we are at root level*/
+	add_nr_running(rq, 1);
 
-	}
+	/*
+	 * Since new tasks are assigned an initial util_avg equal to
+	 * half of the spare capacity of their CPU, tiny tasks have the
+	 * ability to cross the overutilized threshold, which will
+	 * result in the load balancer ruining all the task placement
+	 * done by EAS. As a way to mitigate that effect, do not account
+	 * for the first enqueue operation of new tasks during the
+	 * overutilized flag detection.
+	 *
+	 * A better way of solving this problem would be to wait for
+	 * the PELT signals of tasks to converge before taking them
+	 * into account, but that is not straightforward to implement,
+	 * and the following generally works well enough in practice.
+	 */
+	if (flags & ENQUEUE_WAKEUP)
+		update_overutilized_status(rq);
 
+enqueue_throttle:
 	if (cfs_bandwidth_used()) {
 		/*
 		 * When bandwidth control is enabled; the cfs_rq_throttled()
-- 
2.17.1



* Re: [PATCH v3] sched/fair: enqueue_task_fair optimization
       [not found] ` <BL0PR14MB3779D98D96FA0D30B968B4519ABF0@BL0PR14MB3779.namprd14.prod.outlook.com>
@ 2020-05-13 16:02   ` Vincent Guittot
  0 siblings, 0 replies; 3+ messages in thread
From: Vincent Guittot @ 2020-05-13 16:02 UTC (permalink / raw)
  To: Tao Zhou
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel, Phil Auld,
	Pavan Kondeti

On Wed, 13 May 2020 at 17:51, Tao Zhou <ouwen210@hotmail.com> wrote:
>
> Hi Vincent,
>
> Sorry for the duplicate.
>
> On Wed, May 13, 2020 at 03:55:02PM +0200, Vincent Guittot wrote:
> > enqueue_task_fair() jumps to the enqueue_throttle label when cfs_rq_of(se) is
> > throttled, which means that se can't be NULL in that case, so we can move
> > the label after the if (!se) statement. Furthermore, the latter can be
> > removed because se is always NULL when reaching this point.
> >
> > Reviewed-by: Phil Auld <pauld@redhat.com>
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
> >
> > v3 changes:
> >   - updated commit message
> >   - removed an extra }
> >
> >  kernel/sched/fair.c | 39 +++++++++++++++++++--------------------
> >  1 file changed, 19 insertions(+), 20 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 9a58874ef104..4e586863827b 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5512,28 +5512,27 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> >                         list_add_leaf_cfs_rq(cfs_rq);
> >       }
> >
> > -enqueue_throttle:
> > -     if (!se) {
> > -             add_nr_running(rq, 1);
> > -             /*
> > -              * Since new tasks are assigned an initial util_avg equal to
> > -              * half of the spare capacity of their CPU, tiny tasks have the
> > -              * ability to cross the overutilized threshold, which will
> > -              * result in the load balancer ruining all the task placement
> > -              * done by EAS. As a way to mitigate that effect, do not account
> > -              * for the first enqueue operation of new tasks during the
> > -              * overutilized flag detection.
> > -              *
> > -              * A better way of solving this problem would be to wait for
> > -              * the PELT signals of tasks to converge before taking them
> > -              * into account, but that is not straightforward to implement,
> > -              * and the following generally works well enough in practice.
> > -              */
> > -             if (flags & ENQUEUE_WAKEUP)
> > -                     update_overutilized_status(rq);
> > +     /* At this point se is NULL and we are at root level*/
>
> Same as in the other patch: the comment above is missing a space before
> the closing */.
>
> When applying this on top of v3 of 'sched/fair: Fix enqueue_task_fair
> warning some more', I had to edit it to make it apply; I don't know why.
> I also tried to pull some of this into a separate function, but that is
> not done yet.

I forgot to mention that this patch was done on top of Phil's patch
(20200512135222.GC2201@lorien.usersys.redhat.com) and applies on top of it.

>
> Thanks,
> Tao
>
> > +     add_nr_running(rq, 1);
> >
> > -     }
> > +     /*
> > +      * Since new tasks are assigned an initial util_avg equal to
> > +      * half of the spare capacity of their CPU, tiny tasks have the
> > +      * ability to cross the overutilized threshold, which will
> > +      * result in the load balancer ruining all the task placement
> > +      * done by EAS. As a way to mitigate that effect, do not account
> > +      * for the first enqueue operation of new tasks during the
> > +      * overutilized flag detection.
> > +      *
> > +      * A better way of solving this problem would be to wait for
> > +      * the PELT signals of tasks to converge before taking them
> > +      * into account, but that is not straightforward to implement,
> > +      * and the following generally works well enough in practice.
> > +      */
> > +     if (flags & ENQUEUE_WAKEUP)
> > +             update_overutilized_status(rq);
> >
> > +enqueue_throttle:
> >       if (cfs_bandwidth_used()) {
> >               /*
> >                * When bandwidth control is enabled; the cfs_rq_throttled()
> > --
> > 2.17.1


* [tip: sched/core] sched/fair: Optimize enqueue_task_fair()
  2020-05-13 13:55 [PATCH v3] sched/fair: enqueue_task_fair optimization Vincent Guittot
       [not found] ` <BL0PR14MB3779D98D96FA0D30B968B4519ABF0@BL0PR14MB3779.namprd14.prod.outlook.com>
@ 2020-05-19 18:44 ` tip-bot2 for Vincent Guittot
  1 sibling, 0 replies; 3+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2020-05-19 18:44 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Peter Zijlstra (Intel), Phil Auld, x86, LKML

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     7d148be69e3a0eaa9d029a3c51b545e322116a99
Gitweb:        https://git.kernel.org/tip/7d148be69e3a0eaa9d029a3c51b545e322116a99
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Wed, 13 May 2020 15:55:02 +02:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 19 May 2020 20:34:13 +02:00

sched/fair: Optimize enqueue_task_fair()

enqueue_task_fair() jumps to the enqueue_throttle label when cfs_rq_of(se) is
throttled, which means that se can't be NULL in that case, so we can move
the label after the if (!se) statement. Furthermore, the latter can be
removed because se is always NULL when reaching this point.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Phil Auld <pauld@redhat.com>
Link: https://lkml.kernel.org/r/20200513135502.4672-1-vincent.guittot@linaro.org
---
 kernel/sched/fair.c | 39 +++++++++++++++++++--------------------
 1 file changed, 19 insertions(+), 20 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9a58874..4e58686 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5512,28 +5512,27 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
                        list_add_leaf_cfs_rq(cfs_rq);
 	}
 
-enqueue_throttle:
-	if (!se) {
-		add_nr_running(rq, 1);
-		/*
-		 * Since new tasks are assigned an initial util_avg equal to
-		 * half of the spare capacity of their CPU, tiny tasks have the
-		 * ability to cross the overutilized threshold, which will
-		 * result in the load balancer ruining all the task placement
-		 * done by EAS. As a way to mitigate that effect, do not account
-		 * for the first enqueue operation of new tasks during the
-		 * overutilized flag detection.
-		 *
-		 * A better way of solving this problem would be to wait for
-		 * the PELT signals of tasks to converge before taking them
-		 * into account, but that is not straightforward to implement,
-		 * and the following generally works well enough in practice.
-		 */
-		if (flags & ENQUEUE_WAKEUP)
-			update_overutilized_status(rq);
+	/* At this point se is NULL and we are at root level*/
+	add_nr_running(rq, 1);
 
-	}
+	/*
+	 * Since new tasks are assigned an initial util_avg equal to
+	 * half of the spare capacity of their CPU, tiny tasks have the
+	 * ability to cross the overutilized threshold, which will
+	 * result in the load balancer ruining all the task placement
+	 * done by EAS. As a way to mitigate that effect, do not account
+	 * for the first enqueue operation of new tasks during the
+	 * overutilized flag detection.
+	 *
+	 * A better way of solving this problem would be to wait for
+	 * the PELT signals of tasks to converge before taking them
+	 * into account, but that is not straightforward to implement,
+	 * and the following generally works well enough in practice.
+	 */
+	if (flags & ENQUEUE_WAKEUP)
+		update_overutilized_status(rq);
 
+enqueue_throttle:
 	if (cfs_bandwidth_used()) {
 		/*
 		 * When bandwidth control is enabled; the cfs_rq_throttled()

