linux-kernel.vger.kernel.org archive mirror
* sched: allow resubmits to queue_balance_callback()
@ 2021-03-18 19:57 Barret Rhoden
  2021-03-22 10:41 ` Peter Zijlstra
  0 siblings, 1 reply; 3+ messages in thread
From: Barret Rhoden @ 2021-03-18 19:57 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, linux-kernel

Prior to this commit, if you submitted the same callback_head twice, it
would be enqueued twice, but only if it was the last callback on the
list.  The first time it was submitted, rq->balance_callback was NULL,
so head->next was also NULL, which defeated the check in
queue_balance_callback().

This commit changes the callback list such that whenever an item is on
the list, its head->next is not NULL.  The last element (first inserted)
will point to itself.  This allows us to detect and ignore any attempt
to reenqueue a callback_head.

Signed-off-by: Barret Rhoden <brho@google.com>
---

i might be missing something here, but this was my interpretation of
what the "if (unlikely(head->next))" check was for in
queue_balance_callback.

 kernel/sched/core.c  | 3 ++-
 kernel/sched/sched.h | 6 +++++-
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2d95dc3f4644..6322975032ea 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3669,7 +3669,8 @@ static void __balance_callback(struct rq *rq)
 	rq->balance_callback = NULL;
 	while (head) {
 		func = (void (*)(struct rq *))head->func;
-		next = head->next;
+		/* The last element pointed to itself */
+		next = head->next == head ? NULL : head->next;
 		head->next = NULL;
 		head = next;
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 28709f6b0975..42629e153f83 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1389,11 +1389,15 @@ queue_balance_callback(struct rq *rq,
 {
 	lockdep_assert_held(&rq->lock);
 
+	/*
+	 * The last element on the list points to itself, so we can always
+	 * detect if head is already enqueued.
+	 */
 	if (unlikely(head->next))
 		return;
 
 	head->func = (void (*)(struct callback_head *))func;
-	head->next = rq->balance_callback;
+	head->next = rq->balance_callback ?: NULL;
 	rq->balance_callback = head;
 }
 
-- 
2.31.0.rc2.261.g7f71774620-goog



* Re: sched: allow resubmits to queue_balance_callback()
  2021-03-18 19:57 sched: allow resubmits to queue_balance_callback() Barret Rhoden
@ 2021-03-22 10:41 ` Peter Zijlstra
  2021-03-22 18:43   ` [PATCH v2] " Barret Rhoden
  0 siblings, 1 reply; 3+ messages in thread
From: Peter Zijlstra @ 2021-03-22 10:41 UTC (permalink / raw)
  To: Barret Rhoden
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, linux-kernel

On Thu, Mar 18, 2021 at 03:57:34PM -0400, Barret Rhoden wrote:
> Prior to this commit, if you submitted the same callback_head twice, it
> would be enqueued twice, but only if it was the last callback on the
> list.  The first time it was submitted, rq->balance_callback was NULL,
> so head->next was also NULL, which defeated the check in
> queue_balance_callback().
> 
> This commit changes the callback list such that whenever an item is on
> the list, its head->next is not NULL.  The last element (first inserted)
> will point to itself.  This allows us to detect and ignore any attempt
> to reenqueue a callback_head.
> 
> Signed-off-by: Barret Rhoden <brho@google.com>

AFAICT you're patching dead code, please check a current tree.


* [PATCH v2] sched: allow resubmits to queue_balance_callback()
  2021-03-22 10:41 ` Peter Zijlstra
@ 2021-03-22 18:43   ` Barret Rhoden
  0 siblings, 0 replies; 3+ messages in thread
From: Barret Rhoden @ 2021-03-22 18:43 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, linux-kernel

Prior to this commit, if you submitted the same callback_head twice, it
would be enqueued twice, but only if it was the last callback on the
list.  The first time it was submitted, rq->balance_callback was NULL,
so head->next was also NULL, which defeated the check in
queue_balance_callback().

This commit changes the callback list such that whenever an item is on
the list, its head->next is not NULL.  The last element (first inserted)
will point to itself.  This allows us to detect and ignore any attempt
to reenqueue a callback_head.

Signed-off-by: Barret Rhoden <brho@google.com>
---

sorry about the old version.  i updated to linus's branch, plus a minor
fix from v1.

this should work with the balance_push_callback stuff you added a few
months ago.  in this commit, head->next can equal null *or* head and it
will be treated as null.  (balance_push_callback's next == NULL).

thanks,
barret


 kernel/sched/core.c  | 3 ++-
 kernel/sched/sched.h | 6 +++++-
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 98191218d891..c5a1a225d0b4 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3986,7 +3986,8 @@ static void do_balance_callbacks(struct rq *rq, struct callback_head *head)
 
 	while (head) {
 		func = (void (*)(struct rq *))head->func;
-		next = head->next;
+		/* The last element pointed to itself */
+		next = head->next == head ? NULL : head->next;
 		head->next = NULL;
 		head = next;
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 10a1522b1e30..66e1a9e5a1af 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1418,11 +1418,15 @@ queue_balance_callback(struct rq *rq,
 {
 	lockdep_assert_held(&rq->lock);
 
+	/*
+	 * The last element on the list points to itself, so we can always
+	 * detect if head is already enqueued.
+	 */
 	if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
 		return;
 
 	head->func = (void (*)(struct callback_head *))func;
-	head->next = rq->balance_callback;
+	head->next = rq->balance_callback ?: head;
 	rq->balance_callback = head;
 }
 
-- 
2.31.0.rc2.261.g7f71774620-goog


