* [PATCH] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
@ 2019-01-29 17:18 Vincent Guittot
  2019-01-30  5:22 ` [PATCH v2] " Vincent Guittot
  0 siblings, 1 reply; 15+ messages in thread
From: Vincent Guittot @ 2019-01-29 17:18 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz; +Cc: tj, sargun, Vincent Guittot

Sargun reported a crash:
  "I picked up c40f7d74c741a907cfaeb73a7697081881c497d0 sched/fair: Fix
   infinite loop in update_blocked_averages() by reverting a9e7f6544b9c
   and put it on top of 4.19.13. In addition to this, I uninlined
   list_add_leaf_cfs_rq for debugging.

   This revealed a new bug that we didn't get to because we kept getting
   crashes from the previous issue. When we are running with cgroups that
   are rapidly changing, with CFS bandwidth control, and in addition
   using the cpusets cgroup, we see this crash. Specifically, it seems to
   occur with cgroups that are throttled and we change the allowed
   cpuset."

The algorithm used to order cfs_rq's in rq->leaf_cfs_rq_list assumes that
it will walk down to the root the first time a cfs_rq is used, and that we
will end up adding either a cfs_rq without a parent or a cfs_rq whose parent
is already on the list. But this is not always true in the presence of
throttling: a cfs_rq can be throttled even if it has never been used, because
other CPUs of the cgroup have already consumed all the bandwidth, so we are
not guaranteed to walk down to the root and add all cfs_rq's to the list.

Ensure that all cfs_rq's are added to the list even if they are throttled.

Reported-by: Sargun Dhillon <sargun@sargun.me>
Fixes: 9c2791f936ef ("Fix hierarchical order in rq->leaf_cfs_rq_list")
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---

This patch doesn't fix:
  a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path")
which has been reverted in v5.0-rc1. I'm working on an additional patch
that should be similar to this one to fix a9e7f6544b9c.
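
To illustrate the failure mode, here is a simplified, purely illustrative
sketch of the enqueue path (not part of this patch); the throttled break is
what can leave the branch half built:

	enqueue_task_fair()
	  for_each_sched_entity(se) {
	    if (se->on_rq)
	      break;
	    enqueue_entity()
	      list_add_leaf_cfs_rq();	/* may leave rq->tmp_alone_branch pointing into the new branch */
	    if (cfs_rq_throttled(cfs_rq))
	      break;			/* bails out before the parents are added to the list */
	  }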

 kernel/sched/fair.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e2ff4b6..bf6b6c1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -352,6 +352,20 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 	}
 }
 
+static inline void list_add_branch_cfs_rq(struct sched_entity *se, struct rq *rq)
+{
+	struct cfs_rq *cfs_rq;
+
+	for_each_sched_entity(se) {
+		cfs_rq = cfs_rq_of(se);
+		list_add_leaf_cfs_rq(cfs_rq);
+
+		/* If parent is already in the list, we can stop */
+		if (rq->tmp_alone_branch == &rq->leaf_cfs_rq_list)
+			break;
+	}
+}
+
 /* Iterate through all leaf cfs_rq's on a runqueue: */
 #define for_each_leaf_cfs_rq(rq, cfs_rq) \
 	list_for_each_entry_rcu(cfs_rq, &rq->leaf_cfs_rq_list, leaf_cfs_rq_list)
@@ -5179,6 +5193,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 	}
 
+	/* Ensure that all cfs_rq have been added to the list */
+	list_add_branch_cfs_rq(se, rq);
+
 	hrtick_update(rq);
 }
 
-- 
2.7.4



* [PATCH v2] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-29 17:18 [PATCH] sched/fair: Fix insertion in rq->leaf_cfs_rq_list Vincent Guittot
@ 2019-01-30  5:22 ` Vincent Guittot
  2019-01-30 13:04   ` Peter Zijlstra
                     ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Vincent Guittot @ 2019-01-30  5:22 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz; +Cc: tj, sargun, Vincent Guittot

Sargun reported a crash:
  "I picked up c40f7d74c741a907cfaeb73a7697081881c497d0 sched/fair: Fix
   infinite loop in update_blocked_averages() by reverting a9e7f6544b9c
   and put it on top of 4.19.13. In addition to this, I uninlined
   list_add_leaf_cfs_rq for debugging.

   This revealed a new bug that we didn't get to because we kept getting
   crashes from the previous issue. When we are running with cgroups that
   are rapidly changing, with CFS bandwidth control, and in addition
   using the cpusets cgroup, we see this crash. Specifically, it seems to
   occur with cgroups that are throttled and we change the allowed
   cpuset."

The algorithm used to order cfs_rq's in rq->leaf_cfs_rq_list assumes that
it will walk down to the root the first time a cfs_rq is used, and that we
will end up adding either a cfs_rq without a parent or a cfs_rq whose parent
is already on the list. But this is not always true in the presence of
throttling: a cfs_rq can be throttled even if it has never been used, because
other CPUs of the cgroup have already consumed all the bandwidth, so we are
not guaranteed to walk down to the root and add all cfs_rq's to the list.

Ensure that all cfs_rq's are added to the list even if they are throttled.

Reported-by: Sargun Dhillon <sargun@sargun.me>
Fixes: 9c2791f936ef ("Fix hierarchical order in rq->leaf_cfs_rq_list")
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---

v2:
- Added dummy function for !CONFIG_FAIR_GROUP_SCHED

This patch doesn't fix:
  a9e7f6544b9c ("sched/fair: Fix O(nr_cgroups) in load balance path")
which has been reverted in v5.0-rc1. I'm working on an additional patch
that should be similar to this one to fix a9e7f6544b9c.

 kernel/sched/fair.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e2ff4b6..826fbe5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -352,6 +352,20 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 	}
 }
 
+static inline void list_add_branch_cfs_rq(struct sched_entity *se, struct rq *rq)
+{
+	struct cfs_rq *cfs_rq;
+
+	for_each_sched_entity(se) {
+		cfs_rq = cfs_rq_of(se);
+		list_add_leaf_cfs_rq(cfs_rq);
+
+		/* If parent is already in the list, we can stop */
+		if (rq->tmp_alone_branch == &rq->leaf_cfs_rq_list)
+			break;
+	}
+}
+
 /* Iterate through all leaf cfs_rq's on a runqueue: */
 #define for_each_leaf_cfs_rq(rq, cfs_rq) \
 	list_for_each_entry_rcu(cfs_rq, &rq->leaf_cfs_rq_list, leaf_cfs_rq_list)
@@ -446,6 +460,10 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 {
 }
 
+static inline void list_add_branch_cfs_rq(struct sched_entity *se, struct rq *rq)
+{
+}
+
 #define for_each_leaf_cfs_rq(rq, cfs_rq)	\
 		for (cfs_rq = &rq->cfs; cfs_rq; cfs_rq = NULL)
 
@@ -5179,6 +5197,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 	}
 
+	/* Ensure that all cfs_rq have been added to the list */
+	list_add_branch_cfs_rq(se, rq);
+
 	hrtick_update(rq);
 }
 
-- 
2.7.4



* Re: [PATCH v2] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-30  5:22 ` [PATCH v2] " Vincent Guittot
@ 2019-01-30 13:04   ` Peter Zijlstra
  2019-01-30 13:06     ` Peter Zijlstra
  2019-01-30 14:01   ` Peter Zijlstra
  2019-02-04  9:03   ` [tip:sched/core] " tip-bot for Vincent Guittot
  2 siblings, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2019-01-30 13:04 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: linux-kernel, mingo, tj, sargun

On Wed, Jan 30, 2019 at 06:22:47AM +0100, Vincent Guittot wrote:

> The algorithm used to order cfs_rq's in rq->leaf_cfs_rq_list assumes that
> it will walk down to the root the first time a cfs_rq is used, and that we
> will end up adding either a cfs_rq without a parent or a cfs_rq whose parent
> is already on the list. But this is not always true in the presence of
> throttling: a cfs_rq can be throttled even if it has never been used, because
> other CPUs of the cgroup have already consumed all the bandwidth, so we are
> not guaranteed to walk down to the root and add all cfs_rq's to the list.
> 
> Ensure that all cfs_rq's are added to the list even if they are throttled.

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e2ff4b6..826fbe5 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -352,6 +352,20 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
>  	}
>  }
>  
> +static inline void list_add_branch_cfs_rq(struct sched_entity *se, struct rq *rq)
> +{
> +	struct cfs_rq *cfs_rq;
> +
> +	for_each_sched_entity(se) {
> +		cfs_rq = cfs_rq_of(se);
> +		list_add_leaf_cfs_rq(cfs_rq);
> +
> +		/* If parent is already in the list, we can stop */
> +		if (rq->tmp_alone_branch == &rq->leaf_cfs_rq_list)
> +			break;
> +	}
> +}
> +
>  /* Iterate through all leaf cfs_rq's on a runqueue: */
>  #define for_each_leaf_cfs_rq(rq, cfs_rq) \
>  	list_for_each_entry_rcu(cfs_rq, &rq->leaf_cfs_rq_list, leaf_cfs_rq_list)

> @@ -5179,6 +5197,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  
>  	}
>  
> +	/* Ensure that all cfs_rq have been added to the list */
> +	list_add_branch_cfs_rq(se, rq);
> +
>  	hrtick_update(rq);
>  }

So I don't much like this; at all. But maybe I misunderstand, this is
somewhat tricky stuff and I've not looked at it in a while.

So per normal we do:

	enqueue_task_fair()
	  for_each_sched_entity() {
	    if (se->on_rq)
	      break;
	    enqueue_entity()
	      list_add_leaf_cfs_rq();
	  }

This ensures that all parents are already enqueued, right? because this
is what enqueues those parents.

And in this case you add an unconditional second
for_each_sched_entity(); even though it is completely redundant, afaict.


The problem seems to stem from the whole throttled crud; which (also)
breaks the above enqueue loop on throttle state, and there the parent can
go missing.

So why doesn't this live in unthrottle_cfs_rq() ?



* Re: [PATCH v2] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-30 13:04   ` Peter Zijlstra
@ 2019-01-30 13:06     ` Peter Zijlstra
  2019-01-30 13:27       ` Peter Zijlstra
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2019-01-30 13:06 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: linux-kernel, mingo, tj, sargun

On Wed, Jan 30, 2019 at 02:04:10PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 30, 2019 at 06:22:47AM +0100, Vincent Guittot wrote:
> 
> > The algorithm used to order cfs_rq's in rq->leaf_cfs_rq_list assumes that
> > it will walk down to the root the first time a cfs_rq is used, and that we
> > will end up adding either a cfs_rq without a parent or a cfs_rq whose parent
> > is already on the list. But this is not always true in the presence of
> > throttling: a cfs_rq can be throttled even if it has never been used, because
> > other CPUs of the cgroup have already consumed all the bandwidth, so we are
> > not guaranteed to walk down to the root and add all cfs_rq's to the list.
> > 
> > Ensure that all cfs_rq's are added to the list even if they are throttled.
> 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index e2ff4b6..826fbe5 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -352,6 +352,20 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> >  	}
> >  }
> >  
> > +static inline void list_add_branch_cfs_rq(struct sched_entity *se, struct rq *rq)
> > +{
> > +	struct cfs_rq *cfs_rq;
> > +
> > +	for_each_sched_entity(se) {
> > +		cfs_rq = cfs_rq_of(se);
> > +		list_add_leaf_cfs_rq(cfs_rq);
> > +
> > +		/* If parent is already in the list, we can stop */
> > +		if (rq->tmp_alone_branch == &rq->leaf_cfs_rq_list)
> > +			break;
> > +	}
> > +}
> > +
> >  /* Iterate through all leaf cfs_rq's on a runqueue: */
> >  #define for_each_leaf_cfs_rq(rq, cfs_rq) \
> >  	list_for_each_entry_rcu(cfs_rq, &rq->leaf_cfs_rq_list, leaf_cfs_rq_list)
> 
> > @@ -5179,6 +5197,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> >  
> >  	}
> >  
> > +	/* Ensure that all cfs_rq have been added to the list */
> > +	list_add_branch_cfs_rq(se, rq);
> > +
> >  	hrtick_update(rq);
> >  }
> 
> So I don't much like this; at all. But maybe I misunderstand, this is
> somewhat tricky stuff and I've not looked at it in a while.
> 
> So per normal we do:
> 
> 	enqueue_task_fair()
> 	  for_each_sched_entity() {
> 	    if (se->on_rq)
> 	      break;
> 	    enqueue_entity()
> 	      list_add_leaf_cfs_rq();
> 	  }
> 
> This ensures that all parents are already enqueued, right? because this
> is what enqueues those parents.
> 
> And in this case you add an unconditional second
> for_each_sched_entity(); even though it is completely redundant, afaict.

Ah, it doesn't do a second iteration; it continues where the previous
two left off.

Still, why isn't this in unthrottle?

> The problem seems to stem from the whole throttled crud; which (also)
> breaks the above enqueue loop on throttle state, and there the parent can
> go missing.
> 
> So why doesn't this live in unthrottle_cfs_rq() ?
> 


* Re: [PATCH v2] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-30 13:06     ` Peter Zijlstra
@ 2019-01-30 13:27       ` Peter Zijlstra
  2019-01-30 13:29         ` Vincent Guittot
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2019-01-30 13:27 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: linux-kernel, mingo, tj, sargun

On Wed, Jan 30, 2019 at 02:06:20PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 30, 2019 at 02:04:10PM +0100, Peter Zijlstra wrote:

> > So I don't much like this; at all. But maybe I misunderstand, this is
> > somewhat tricky stuff and I've not looked at it in a while.
> > 
> > So per normal we do:
> > 
> > 	enqueue_task_fair()
> > 	  for_each_sched_entity() {
> > 	    if (se->on_rq)
> > 	      break;
> > 	    enqueue_entity()
> > 	      list_add_leaf_cfs_rq();
> > 	  }
> > 
> > This ensures that all parents are already enqueued, right? because this
> > is what enqueues those parents.
> > 
> > And in this case you add an unconditional second
> > for_each_sched_entity(); even though it is completely redundant, afaict.
> 
> Ah, it doesn't do a second iteration; it continues where the previous
> two left off.
> 
> Still, why isn't this in unthrottle?

Aah, I see, because we need:

  rq->tmp_alone_branch == &rq->lead_cfs_rq_list

at the end of enqueue_task_fair(); having had that assertion would've
saved some pain I suppose.
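
Something along these lines, just as a sketch (the actual helper shows up in
the patch further down the thread):

	static inline void assert_list_leaf_cfs_rq(struct rq *rq)
	{
		SCHED_WARN_ON(rq->tmp_alone_branch != &rq->leaf_cfs_rq_list);
	}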


* Re: [PATCH v2] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-30 13:27       ` Peter Zijlstra
@ 2019-01-30 13:29         ` Vincent Guittot
  2019-01-30 13:40           ` Peter Zijlstra
  0 siblings, 1 reply; 15+ messages in thread
From: Vincent Guittot @ 2019-01-30 13:29 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Tejun Heo, Sargun Dhillon

On Wed, 30 Jan 2019 at 14:27, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Wed, Jan 30, 2019 at 02:06:20PM +0100, Peter Zijlstra wrote:
> > On Wed, Jan 30, 2019 at 02:04:10PM +0100, Peter Zijlstra wrote:
>
> > > So I don't much like this; at all. But maybe I misunderstand, this is
> > > somewhat tricky stuff and I've not looked at it in a while.
> > >
> > > So per normal we do:
> > >
> > >     enqueue_task_fair()
> > >       for_each_sched_entity() {
> > >         if (se->on_rq)
> > >           break;
> > >         enqueue_entity()
> > >           list_add_leaf_cfs_rq();
> > >       }
> > >
> > > This ensures that all parents are already enqueued, right? because this
> > > is what enqueues those parents.
> > >
> > > And in this case you add an unconditional second
> > > for_each_sched_entity(); even though it is completely redundant, afaict.
> >
> > Ah, it doesn't do a second iteration; it continues where the previous
> > two left off.
> >
> > Still, why isn't this in unthrottle?
>
> Aah, I see, because we need:
>
>   rq->tmp_alone_branch == &rq->lead_cfs_rq_list
>
> at the end of enqueue_task_fair(); having had that assertion would've

Yes exactly.
You were quicker than me to reply.

> saved some pain I suppose.


* Re: [PATCH v2] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-30 13:29         ` Vincent Guittot
@ 2019-01-30 13:40           ` Peter Zijlstra
  2019-01-30 15:48             ` Vincent Guittot
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2019-01-30 13:40 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: linux-kernel, Ingo Molnar, Tejun Heo, Sargun Dhillon

On Wed, Jan 30, 2019 at 02:29:42PM +0100, Vincent Guittot wrote:
> On Wed, 30 Jan 2019 at 14:27, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Wed, Jan 30, 2019 at 02:06:20PM +0100, Peter Zijlstra wrote:
> > > On Wed, Jan 30, 2019 at 02:04:10PM +0100, Peter Zijlstra wrote:
> >
> > > > So I don't much like this; at all. But maybe I misunderstand, this is
> > > > somewhat tricky stuff and I've not looked at it in a while.
> > > >
> > > > So per normal we do:
> > > >
> > > >     enqueue_task_fair()
> > > >       for_each_sched_entity() {
> > > >         if (se->on_rq)
> > > >           break;
> > > >         enqueue_entity()
> > > >           list_add_leaf_cfs_rq();
> > > >       }
> > > >
> > > > This ensures that all parents are already enqueued, right? because this
> > > > is what enqueues those parents.
> > > >
> > > > And in this case you add an unconditional second
> > > > for_each_sched_entity(); even though it is completely redundant, afaict.
> > >
> > > Ah, it doesn't do a second iteration; it continues where the previous
> > > two left off.
> > >
> > > Still, why isn't this in unthrottle?
> >
> > Aah, I see, because we need:
> >
> >   rq->tmp_alone_branch == &rq->lead_cfs_rq_list
> >
> > at the end of enqueue_task_fair(); having had that assertion would've
> 
> Yes exactly.

How's this?

---
 kernel/sched/fair.c | 125 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 69 insertions(+), 56 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e2ff4b69dcf6..747976ca84ea 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -284,64 +284,66 @@ static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
 
 static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 {
-	if (!cfs_rq->on_list) {
-		struct rq *rq = rq_of(cfs_rq);
-		int cpu = cpu_of(rq);
+	struct rq *rq = rq_of(cfs_rq);
+	int cpu = cpu_of(rq);
+
+	if (cfs_rq->on_list)
+		return;
+
+	/*
+	 * Ensure we either appear before our parent (if already
+	 * enqueued) or force our parent to appear after us when it is
+	 * enqueued. The fact that we always enqueue bottom-up
+	 * reduces this to two cases and a special case for the root
+	 * cfs_rq. Furthermore, it also means that we will always reset
+	 * tmp_alone_branch either when the branch is connected
+	 * to a tree or when we reach the top of the tree
+	 */
+	if (cfs_rq->tg->parent &&
+	    cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {
 		/*
-		 * Ensure we either appear before our parent (if already
-		 * enqueued) or force our parent to appear after us when it is
-		 * enqueued. The fact that we always enqueue bottom-up
-		 * reduces this to two cases and a special case for the root
-		 * cfs_rq. Furthermore, it also means that we will always reset
-		 * tmp_alone_branch either when the branch is connected
-		 * to a tree or when we reach the beg of the tree
+		 * If parent is already on the list, we add the child
+		 * just before. Thanks to circular linked property of
+		 * the list, this means to put the child at the tail
+		 * of the list that starts by parent.
 		 */
-		if (cfs_rq->tg->parent &&
-		    cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {
-			/*
-			 * If parent is already on the list, we add the child
-			 * just before. Thanks to circular linked property of
-			 * the list, this means to put the child at the tail
-			 * of the list that starts by parent.
-			 */
-			list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
-				&(cfs_rq->tg->parent->cfs_rq[cpu]->leaf_cfs_rq_list));
-			/*
-			 * The branch is now connected to its tree so we can
-			 * reset tmp_alone_branch to the beginning of the
-			 * list.
-			 */
-			rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
-		} else if (!cfs_rq->tg->parent) {
-			/*
-			 * cfs rq without parent should be put
-			 * at the tail of the list.
-			 */
-			list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
-				&rq->leaf_cfs_rq_list);
-			/*
-			 * We have reach the beg of a tree so we can reset
-			 * tmp_alone_branch to the beginning of the list.
-			 */
-			rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
-		} else {
-			/*
-			 * The parent has not already been added so we want to
-			 * make sure that it will be put after us.
-			 * tmp_alone_branch points to the beg of the branch
-			 * where we will add parent.
-			 */
-			list_add_rcu(&cfs_rq->leaf_cfs_rq_list,
-				rq->tmp_alone_branch);
-			/*
-			 * update tmp_alone_branch to points to the new beg
-			 * of the branch
-			 */
-			rq->tmp_alone_branch = &cfs_rq->leaf_cfs_rq_list;
-		}
-
-		cfs_rq->on_list = 1;
+		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
+			&(cfs_rq->tg->parent->cfs_rq[cpu]->leaf_cfs_rq_list));
+		/*
+		 * The branch is now connected to its tree so we can
+		 * reset tmp_alone_branch to the beginning of the
+		 * list.
+		 */
+		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
+	} else if (!cfs_rq->tg->parent) {
+		/*
+		 * cfs rq without parent should be put
+		 * at the tail of the list.
+		 */
+		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
+			&rq->leaf_cfs_rq_list);
+		/*
+		 * We have reach the top of a tree so we can reset
+		 * tmp_alone_branch to the beginning of the list.
+		 */
+		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
+	} else {
+		/*
+		 * The parent has not already been added so we want to
+		 * make sure that it will be put after us.
+		 * tmp_alone_branch points to the begin of the branch
+		 * where we will add parent.
+		 */
+		list_add_rcu(&cfs_rq->leaf_cfs_rq_list,
+			rq->tmp_alone_branch);
+		/*
+		 * update tmp_alone_branch to points to the new begin
+		 * of the branch
+		 */
+		rq->tmp_alone_branch = &cfs_rq->leaf_cfs_rq_list;
 	}
+
+	cfs_rq->on_list = 1;
 }
 
 static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
@@ -352,7 +354,12 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 	}
 }
 
-/* Iterate through all leaf cfs_rq's on a runqueue: */
+static inline void assert_list_leaf_cfs_rq(struct rq *rq)
+{
+	SCHED_WARN_ON(rq->tmp_alone_branch != &rq->lead_cfs_rq_list);
+}
+
+/* Iterate through all cfs_rq's on a runqueue in bottom-up order */
 #define for_each_leaf_cfs_rq(rq, cfs_rq) \
 	list_for_each_entry_rcu(cfs_rq, &rq->leaf_cfs_rq_list, leaf_cfs_rq_list)
 
@@ -446,6 +453,10 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 {
 }
 
+static inline void assert_list_leaf_cfs_rq(struct rq *rq)
+{
+}
+
 #define for_each_leaf_cfs_rq(rq, cfs_rq)	\
 		for (cfs_rq = &rq->cfs; cfs_rq; cfs_rq = NULL)
 
@@ -5179,6 +5190,8 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 	}
 
+	assert_list_leaf_cfs_rq(rq);
+
 	hrtick_update(rq);
 }
 


* Re: [PATCH v2] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-30  5:22 ` [PATCH v2] " Vincent Guittot
  2019-01-30 13:04   ` Peter Zijlstra
@ 2019-01-30 14:01   ` Peter Zijlstra
  2019-01-30 14:01     ` Peter Zijlstra
  2019-01-30 14:30     ` Vincent Guittot
  2019-02-04  9:03   ` [tip:sched/core] " tip-bot for Vincent Guittot
  2 siblings, 2 replies; 15+ messages in thread
From: Peter Zijlstra @ 2019-01-30 14:01 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: linux-kernel, mingo, tj, sargun

On Wed, Jan 30, 2019 at 06:22:47AM +0100, Vincent Guittot wrote:
> Sargun reported a crash:
>   "I picked up c40f7d74c741a907cfaeb73a7697081881c497d0 sched/fair: Fix
>    infinite loop in update_blocked_averages() by reverting a9e7f6544b9c
>    and put it on top of 4.19.13. In addition to this, I uninlined
>    list_add_leaf_cfs_rq for debugging.
> 
>    This revealed a new bug that we didn't get to because we kept getting
>    crashes from the previous issue. When we are running with cgroups that
>    are rapidly changing, with CFS bandwidth control, and in addition
>    using the cpusets cgroup, we see this crash. Specifically, it seems to
>    occur with cgroups that are throttled and we change the allowed
>    cpuset."
> 
> The algorithm used to order cfs_rq's in rq->leaf_cfs_rq_list assumes that
> it will walk down to the root the first time a cfs_rq is used, and that we
> will end up adding either a cfs_rq without a parent or a cfs_rq whose parent
> is already on the list. But this is not always true in the presence of
> throttling: a cfs_rq can be throttled even if it has never been used, because
> other CPUs of the cgroup have already consumed all the bandwidth, so we are
> not guaranteed to walk down to the root and add all cfs_rq's to the list.
> 
> Ensure that all cfs_rq's are added to the list even if they are throttled.
> 
> Reported-by: Sargun Dhillon <sargun@sargun.me>
> Fixes: 9c2791f936ef ("Fix hierarchical order in rq->leaf_cfs_rq_list")
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

Given my previous patch; how about something like so instead?

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -282,13 +282,15 @@ static inline struct cfs_rq *group_cfs_r
 	return grp->my_q;
 }
 
-static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
+static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	struct rq *rq = rq_of(cfs_rq);
 	int cpu = cpu_of(rq);
 
 	if (cfs_rq->on_list)
-		return;
+		return rq->tmp_alone_branch == &rq->leaf_cfs_rq_list;
+
+	cfs_rq->on_list = 1;
 
 	/*
 	 * Ensure we either appear before our parent (if already
@@ -315,7 +317,10 @@ static inline void list_add_leaf_cfs_rq(
 		 * list.
 		 */
 		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
-	} else if (!cfs_rq->tg->parent) {
+		return true;
+	}
+
+	if (!cfs_rq->tg->parent) {
 		/*
 		 * cfs rq without parent should be put
 		 * at the tail of the list.
@@ -327,23 +332,22 @@ static inline void list_add_leaf_cfs_rq(
 		 * tmp_alone_branch to the beginning of the list.
 		 */
 		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
-	} else {
-		/*
-		 * The parent has not already been added so we want to
-		 * make sure that it will be put after us.
-		 * tmp_alone_branch points to the begin of the branch
-		 * where we will add parent.
-		 */
-		list_add_rcu(&cfs_rq->leaf_cfs_rq_list,
-			rq->tmp_alone_branch);
-		/*
-		 * update tmp_alone_branch to points to the new begin
-		 * of the branch
-		 */
-		rq->tmp_alone_branch = &cfs_rq->leaf_cfs_rq_list;
+		return true;
 	}
 
-	cfs_rq->on_list = 1;
+	/*
+	 * The parent has not already been added so we want to
+	 * make sure that it will be put after us.
+	 * tmp_alone_branch points to the begin of the branch
+	 * where we will add parent.
+	 */
+	list_add_rcu(&cfs_rq->leaf_cfs_rq_list, rq->tmp_alone_branch);
+	/*
+	 * update tmp_alone_branch to points to the new begin
+	 * of the branch
+	 */
+	rq->tmp_alone_branch = &cfs_rq->leaf_cfs_rq_list;
+	return false;
 }
 
 static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
@@ -4999,6 +5003,12 @@ static void __maybe_unused unthrottle_of
 }
 
 #else /* CONFIG_CFS_BANDWIDTH */
+
+static inline bool cfs_bandwidth_used(void)
+{
+	return false;
+}
+
 static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq)
 {
 	return rq_clock_task(rq_of(cfs_rq));
@@ -5190,6 +5200,21 @@ enqueue_task_fair(struct rq *rq, struct
 
 	}
 
+	if (cfs_bandwidth_used()) {
+		/*
+		 * When bandwidth control is enabled; the cfs_rq_throttled()
+		 * breaks in the above iteration can result in incomplete
+		 * leaf list maintenance, resulting in triggering the assertion
+		 * below.
+		 */
+		for_each_sched_entity(se) {
+			cfs_rq = cfs_rq_of(se);
+
+			if (list_add_leaf_cfs_rq(cfs_rq))
+				break;
+		}
+	}
+
 	assert_list_leaf_cfs_rq(rq);
 
 	hrtick_update(rq);


* Re: [PATCH v2] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-30 14:01   ` Peter Zijlstra
@ 2019-01-30 14:01     ` Peter Zijlstra
  2019-01-30 14:27       ` Vincent Guittot
  2019-01-30 14:30     ` Vincent Guittot
  1 sibling, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2019-01-30 14:01 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: linux-kernel, mingo, tj, sargun

On Wed, Jan 30, 2019 at 03:01:04PM +0100, Peter Zijlstra wrote:
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -282,13 +282,15 @@ static inline struct cfs_rq *group_cfs_r
>  	return grp->my_q;
>  }
>  
> -static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> +static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
>  {
>  	struct rq *rq = rq_of(cfs_rq);
>  	int cpu = cpu_of(rq);
>  
>  	if (cfs_rq->on_list)
> -		return;
> +		return rq->tmp_alone_branch == &rq->leaf_cfs_rq_list;

And I'm almost certain that can be: return true, but got my brain in a
twist.

> +
> +	cfs_rq->on_list = 1;
>  
>  	/*
>  	 * Ensure we either appear before our parent (if already


* Re: [PATCH v2] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-30 14:01     ` Peter Zijlstra
@ 2019-01-30 14:27       ` Vincent Guittot
  2019-01-30 17:40         ` Vincent Guittot
  0 siblings, 1 reply; 15+ messages in thread
From: Vincent Guittot @ 2019-01-30 14:27 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Tejun Heo, Sargun Dhillon

On Wed, 30 Jan 2019 at 15:01, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Wed, Jan 30, 2019 at 03:01:04PM +0100, Peter Zijlstra wrote:
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -282,13 +282,15 @@ static inline struct cfs_rq *group_cfs_r
> >       return grp->my_q;
> >  }
> >
> > -static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> > +static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> >  {
> >       struct rq *rq = rq_of(cfs_rq);
> >       int cpu = cpu_of(rq);
> >
> >       if (cfs_rq->on_list)
> > -             return;
> > +             return rq->tmp_alone_branch == &rq->leaf_cfs_rq_list;
>
> And I'm almost certain that can be: return true, but got my brain in a
> twist.

Yes this can return true

If cfs_rq->on_list is set, then a child that was not already on the list used the path:

if (cfs_rq->tg->parent &&
           cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {

which does rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;

>
> > +
> > +     cfs_rq->on_list = 1;
> >
> >       /*
> >        * Ensure we either appear before our parent (if already


* Re: [PATCH v2] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-30 14:01   ` Peter Zijlstra
  2019-01-30 14:01     ` Peter Zijlstra
@ 2019-01-30 14:30     ` Vincent Guittot
  1 sibling, 0 replies; 15+ messages in thread
From: Vincent Guittot @ 2019-01-30 14:30 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Tejun Heo, Sargun Dhillon

On Wed, 30 Jan 2019 at 15:01, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Wed, Jan 30, 2019 at 06:22:47AM +0100, Vincent Guittot wrote:
> > Sargun reported a crash:
> >   "I picked up c40f7d74c741a907cfaeb73a7697081881c497d0 sched/fair: Fix
> >    infinite loop in update_blocked_averages() by reverting a9e7f6544b9c
> >    and put it on top of 4.19.13. In addition to this, I uninlined
> >    list_add_leaf_cfs_rq for debugging.
> >
> >    This revealed a new bug that we didn't get to because we kept getting
> >    crashes from the previous issue. When we are running with cgroups that
> >    are rapidly changing, with CFS bandwidth control, and in addition
> >    using the cpusets cgroup, we see this crash. Specifically, it seems to
> >    occur with cgroups that are throttled and we change the allowed
> >    cpuset."
> >
> > The algorithm used to order cfs_rq's in rq->leaf_cfs_rq_list assumes that
> > it will walk down to the root the first time a cfs_rq is used, and that we
> > will end up adding either a cfs_rq without a parent or a cfs_rq whose parent
> > is already on the list. But this is not always true in the presence of
> > throttling: a cfs_rq can be throttled even if it has never been used, because
> > other CPUs of the cgroup have already consumed all the bandwidth, so we are
> > not guaranteed to walk down to the root and add all cfs_rq's to the list.
> >
> > Ensure that all cfs_rq's are added to the list even if they are throttled.
> >
> > Reported-by: Sargun Dhillon <sargun@sargun.me>
> > Fixes: 9c2791f936ef ("Fix hierarchical order in rq->leaf_cfs_rq_list")
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
>
> Given my previous patch; how about something like so instead?

I still have to run some tests but this looks good to me

>
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -282,13 +282,15 @@ static inline struct cfs_rq *group_cfs_r
>         return grp->my_q;
>  }
>
> -static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> +static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
>  {
>         struct rq *rq = rq_of(cfs_rq);
>         int cpu = cpu_of(rq);
>
>         if (cfs_rq->on_list)
> -               return;
> +               return rq->tmp_alone_branch == &rq->leaf_cfs_rq_list;
> +
> +       cfs_rq->on_list = 1;
>
>         /*
>          * Ensure we either appear before our parent (if already
> @@ -315,7 +317,10 @@ static inline void list_add_leaf_cfs_rq(
>                  * list.
>                  */
>                 rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
> -       } else if (!cfs_rq->tg->parent) {
> +               return true;
> +       }
> +
> +       if (!cfs_rq->tg->parent) {
>                 /*
>                  * cfs rq without parent should be put
>                  * at the tail of the list.
> @@ -327,23 +332,22 @@ static inline void list_add_leaf_cfs_rq(
>                  * tmp_alone_branch to the beginning of the list.
>                  */
>                 rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
> -       } else {
> -               /*
> -                * The parent has not already been added so we want to
> -                * make sure that it will be put after us.
> -                * tmp_alone_branch points to the begin of the branch
> -                * where we will add parent.
> -                */
> -               list_add_rcu(&cfs_rq->leaf_cfs_rq_list,
> -                       rq->tmp_alone_branch);
> -               /*
> -                * update tmp_alone_branch to points to the new begin
> -                * of the branch
> -                */
> -               rq->tmp_alone_branch = &cfs_rq->leaf_cfs_rq_list;
> +               return true;
>         }
>
> -       cfs_rq->on_list = 1;
> +       /*
> +        * The parent has not already been added so we want to
> +        * make sure that it will be put after us.
> +        * tmp_alone_branch points to the begin of the branch
> +        * where we will add parent.
> +        */
> +       list_add_rcu(&cfs_rq->leaf_cfs_rq_list, rq->tmp_alone_branch);
> +       /*
> +        * update tmp_alone_branch to points to the new begin
> +        * of the branch
> +        */
> +       rq->tmp_alone_branch = &cfs_rq->leaf_cfs_rq_list;
> +       return false;
>  }
>
>  static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> @@ -4999,6 +5003,12 @@ static void __maybe_unused unthrottle_of
>  }
>
>  #else /* CONFIG_CFS_BANDWIDTH */
> +
> +static inline bool cfs_bandwidth_used(void)
> +{
> +       return false;
> +}
> +
>  static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq)
>  {
>         return rq_clock_task(rq_of(cfs_rq));
> @@ -5190,6 +5200,21 @@ enqueue_task_fair(struct rq *rq, struct
>
>         }
>
> +       if (cfs_bandwidth_used()) {
> +               /*
> +                * When bandwidth control is enabled; the cfs_rq_throttled()
> +                * breaks in the above iteration can result in incomplete
> +                * leaf list maintenance, resulting in triggering the assertion
> +                * below.
> +                */
> +               for_each_sched_entity(se) {
> +                       cfs_rq = cfs_rq_of(se);
> +
> +                       if (list_add_leaf_cfs_rq(cfs_rq))
> +                               break;
> +               }
> +       }
> +
>         assert_list_leaf_cfs_rq(rq);
>
>         hrtick_update(rq);


* Re: [PATCH v2] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-30 13:40           ` Peter Zijlstra
@ 2019-01-30 15:48             ` Vincent Guittot
  2019-01-30 16:58               ` Peter Zijlstra
  0 siblings, 1 reply; 15+ messages in thread
From: Vincent Guittot @ 2019-01-30 15:48 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Tejun Heo, Sargun Dhillon

On Wed, 30 Jan 2019 at 14:40, Peter Zijlstra <peterz@infradead.org> wrote:
>

>
>  static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> @@ -352,7 +354,12 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
>         }
>  }
>
> -/* Iterate through all leaf cfs_rq's on a runqueue: */
> +static inline void assert_list_leaf_cfs_rq(struct rq *rq)
> +{
> +       SCHED_WARN_ON(rq->tmp_alone_branch != &rq->lead_cfs_rq_list);

small typo : s/lead_cfs_rq_list/leaf_cfs_rq_list/

> +}
> +
> +/* Iterate through all cfs_rq's on a runqueue in bottom-up order */
>  #define for_each_leaf_cfs_rq(rq, cfs_rq) \
>         list_for_each_entry_rcu(cfs_rq, &rq->leaf_cfs_rq_list, leaf_cfs_rq_list)
>
> @@ -446,6 +453,10 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
>  {
>  }
>
> +static inline void assert_list_leaf_cfs_rq(struct rq *rq)
> +{
> +}
> +
>  #define for_each_leaf_cfs_rq(rq, cfs_rq)       \
>                 for (cfs_rq = &rq->cfs; cfs_rq; cfs_rq = NULL)
>
> @@ -5179,6 +5190,8 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>
>         }
>
> +       assert_list_leaf_cfs_rq(rq);
> +
>         hrtick_update(rq);
>  }
>


* Re: [PATCH v2] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-30 15:48             ` Vincent Guittot
@ 2019-01-30 16:58               ` Peter Zijlstra
  0 siblings, 0 replies; 15+ messages in thread
From: Peter Zijlstra @ 2019-01-30 16:58 UTC (permalink / raw)
  To: Vincent Guittot; +Cc: linux-kernel, Ingo Molnar, Tejun Heo, Sargun Dhillon

On Wed, Jan 30, 2019 at 04:48:47PM +0100, Vincent Guittot wrote:
> On Wed, 30 Jan 2019 at 14:40, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> 
> >
> >  static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> > @@ -352,7 +354,12 @@ static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> >         }
> >  }
> >
> > -/* Iterate through all leaf cfs_rq's on a runqueue: */
> > +static inline void assert_list_leaf_cfs_rq(struct rq *rq)
> > +{
> > +       SCHED_WARN_ON(rq->tmp_alone_branch != &rq->lead_cfs_rq_list);
> 
> small typo : s/lead_cfs_rq_list/leaf_cfs_rq_list/

D'oh, thanks!


* Re: [PATCH v2] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-30 14:27       ` Vincent Guittot
@ 2019-01-30 17:40         ` Vincent Guittot
  0 siblings, 0 replies; 15+ messages in thread
From: Vincent Guittot @ 2019-01-30 17:40 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, Ingo Molnar, Tejun Heo, Sargun Dhillon

On Wed, 30 Jan 2019 at 15:27, Vincent Guittot
<vincent.guittot@linaro.org> wrote:
>
> On Wed, 30 Jan 2019 at 15:01, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Wed, Jan 30, 2019 at 03:01:04PM +0100, Peter Zijlstra wrote:
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -282,13 +282,15 @@ static inline struct cfs_rq *group_cfs_r
> > >       return grp->my_q;
> > >  }
> > >
> > > -static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> > > +static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
> > >  {
> > >       struct rq *rq = rq_of(cfs_rq);
> > >       int cpu = cpu_of(rq);
> > >
> > >       if (cfs_rq->on_list)
> > > -             return;
> > > +             return rq->tmp_alone_branch == &rq->leaf_cfs_rq_list;
> >
> > And I'm almost certain that can be: return true, but got my brain in a
> > twist.
>
> Yes this can return true
>
> If cfs_rq->on_list) then a child not already on the list used the path :
>
> if (cfs_rq->tg->parent &&
>            cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {
>
> which does rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;

In fact, tests show that we must keep:
  return rq->tmp_alone_branch == &rq->leaf_cfs_rq_list;

Because the first sched_entity used in the newly added
for_each_sched_entity loop can be the sched_entity of the cfs_rq that
we just added to the list, so cfs_rq->on_list == 1 and we must continue
walking up to add its parents.
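
Concretely, the assumed sequence is (illustrative only, not taken from the
patch):

	/*
	 * 1. enqueue stops at a throttled cfs_rq C right after
	 *    list_add_leaf_cfs_rq(C): C->on_list == 1, but
	 *    rq->tmp_alone_branch still points into the detached branch.
	 * 2. The new fixup loop in enqueue_task_fair() starts again at C's
	 *    sched_entity: returning true for the on_list case would stop
	 *    here and never link C's parents or reconnect the branch.
	 */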

Apart from that, tests are ok

>
> >
> > > +
> > > +     cfs_rq->on_list = 1;
> > >
> > >       /*
> > >        * Ensure we either appear before our parent (if already


* [tip:sched/core] sched/fair: Fix insertion in rq->leaf_cfs_rq_list
  2019-01-30  5:22 ` [PATCH v2] " Vincent Guittot
  2019-01-30 13:04   ` Peter Zijlstra
  2019-01-30 14:01   ` Peter Zijlstra
@ 2019-02-04  9:03   ` tip-bot for Vincent Guittot
  2 siblings, 0 replies; 15+ messages in thread
From: tip-bot for Vincent Guittot @ 2019-02-04  9:03 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: vincent.guittot, hpa, sargun, mingo, efault, linux-kernel,
	peterz, torvalds, tglx

Commit-ID:  f6783319737f28e4436a69611853a5a098cbe974
Gitweb:     https://git.kernel.org/tip/f6783319737f28e4436a69611853a5a098cbe974
Author:     Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate: Wed, 30 Jan 2019 06:22:47 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 4 Feb 2019 09:14:48 +0100

sched/fair: Fix insertion in rq->leaf_cfs_rq_list

Sargun reported a crash:

  "I picked up c40f7d74c741a907cfaeb73a7697081881c497d0 sched/fair: Fix
   infinite loop in update_blocked_averages() by reverting a9e7f6544b9c
   and put it on top of 4.19.13. In addition to this, I uninlined
   list_add_leaf_cfs_rq for debugging.

   This revealed a new bug that we didn't get to because we kept getting
   crashes from the previous issue. When we are running with cgroups that
   are rapidly changing, with CFS bandwidth control, and in addition
   using the cpusets cgroup, we see this crash. Specifically, it seems to
   occur with cgroups that are throttled and we change the allowed
   cpuset."

The algorithm used to order cfs_rq's in rq->leaf_cfs_rq_list assumes that
it will walk down to the root the first time a cfs_rq is used, and that we
will end up adding either a cfs_rq without a parent or a cfs_rq whose parent
is already on the list. But this is not always true in the presence of
throttling: a cfs_rq can be throttled even if it has never been used, because
other CPUs of the cgroup have already consumed all the bandwidth, so we are
not guaranteed to walk down to the root and add all cfs_rq's to the list.

Ensure that all cfs_rq's are added to the list even if they are throttled.

[ mingo: Fix !CGROUPS build. ]

Reported-by: Sargun Dhillon <sargun@sargun.me>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: tj@kernel.org
Fixes: 9c2791f936ef ("Fix hierarchical order in rq->leaf_cfs_rq_list")
Link: https://lkml.kernel.org/r/1548825767-10799-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 33 ++++++++++++++++++++++++++++-----
 1 file changed, 28 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d6a536dec0ca..ffd1ae7237e7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -275,13 +275,13 @@ static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
 	return grp->my_q;
 }
 
-static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
+static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	struct rq *rq = rq_of(cfs_rq);
 	int cpu = cpu_of(rq);
 
 	if (cfs_rq->on_list)
-		return;
+		return rq->tmp_alone_branch == &rq->leaf_cfs_rq_list;
 
 	cfs_rq->on_list = 1;
 
@@ -310,7 +310,7 @@ static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 		 * list.
 		 */
 		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
-		return;
+		return true;
 	}
 
 	if (!cfs_rq->tg->parent) {
@@ -325,7 +325,7 @@ static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 		 * tmp_alone_branch to the beginning of the list.
 		 */
 		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
-		return;
+		return true;
 	}
 
 	/*
@@ -340,6 +340,7 @@ static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 	 * of the branch
 	 */
 	rq->tmp_alone_branch = &cfs_rq->leaf_cfs_rq_list;
+	return false;
 }
 
 static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
@@ -435,8 +436,9 @@ static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
 	return NULL;
 }
 
-static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
+static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 {
+	return true;
 }
 
 static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
@@ -4995,6 +4997,12 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 }
 
 #else /* CONFIG_CFS_BANDWIDTH */
+
+static inline bool cfs_bandwidth_used(void)
+{
+	return false;
+}
+
 static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq)
 {
 	return rq_clock_task(rq_of(cfs_rq));
@@ -5186,6 +5194,21 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 	}
 
+	if (cfs_bandwidth_used()) {
+		/*
+		 * When bandwidth control is enabled; the cfs_rq_throttled()
+		 * breaks in the above iteration can result in incomplete
+		 * leaf list maintenance, resulting in triggering the assertion
+		 * below.
+		 */
+		for_each_sched_entity(se) {
+			cfs_rq = cfs_rq_of(se);
+
+			if (list_add_leaf_cfs_rq(cfs_rq))
+				break;
+		}
+	}
+
 	assert_list_leaf_cfs_rq(rq);
 
 	hrtick_update(rq);


Thread overview: 15+ messages
2019-01-29 17:18 [PATCH] sched/fair: Fix insertion in rq->leaf_cfs_rq_list Vincent Guittot
2019-01-30  5:22 ` [PATCH v2] " Vincent Guittot
2019-01-30 13:04   ` Peter Zijlstra
2019-01-30 13:06     ` Peter Zijlstra
2019-01-30 13:27       ` Peter Zijlstra
2019-01-30 13:29         ` Vincent Guittot
2019-01-30 13:40           ` Peter Zijlstra
2019-01-30 15:48             ` Vincent Guittot
2019-01-30 16:58               ` Peter Zijlstra
2019-01-30 14:01   ` Peter Zijlstra
2019-01-30 14:01     ` Peter Zijlstra
2019-01-30 14:27       ` Vincent Guittot
2019-01-30 17:40         ` Vincent Guittot
2019-01-30 14:30     ` Vincent Guittot
2019-02-04  9:03   ` [tip:sched/core] " tip-bot for Vincent Guittot
