* [PATCH] sched/fair: remove the spin_lock operations
@ 2020-10-30 14:46 Hui Su
2020-10-30 15:42 ` Phil Auld
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Hui Su @ 2020-10-30 14:46 UTC (permalink / raw)
To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
rostedt, bsegall, mgorman, bristot, linux-kernel, sh_def
Since 'ab93a4bc955b ("sched/fair: Remove
distribute_running from CFS bandwidth")', there is
nothing to protect between raw_spin_lock_irqsave()
and raw_spin_unlock_irqrestore() in do_sched_cfs_slack_timer().
So remove it.
Signed-off-by: Hui Su <sh_def@163.com>
---
kernel/sched/fair.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 290f9e38378c..5ecbf5e63198 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5105,9 +5105,6 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
return;
distribute_cfs_runtime(cfs_b);
-
- raw_spin_lock_irqsave(&cfs_b->lock, flags);
- raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
}
/*
--
2.29.0
^ permalink raw reply related [flat|nested] 7+ messages in thread
* Re: [PATCH] sched/fair: remove the spin_lock operations
2020-10-30 14:46 [PATCH] sched/fair: remove the spin_lock operations Hui Su
@ 2020-10-30 15:42 ` Phil Auld
2020-10-30 18:48 ` Benjamin Segall
2020-11-11 8:23 ` [tip: sched/core] sched/fair: Remove superfluous lock section in do_sched_cfs_slack_timer() tip-bot2 for Hui Su
2 siblings, 0 replies; 7+ messages in thread
From: Phil Auld @ 2020-10-30 15:42 UTC (permalink / raw)
To: Hui Su
Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
rostedt, bsegall, mgorman, bristot, linux-kernel
On Fri, Oct 30, 2020 at 10:46:21PM +0800 Hui Su wrote:
> Since 'ab93a4bc955b ("sched/fair: Remove
> distribute_running from CFS bandwidth")', there is
> nothing to protect between raw_spin_lock_irqsave()
> and raw_spin_unlock_irqrestore() in do_sched_cfs_slack_timer().
>
> So remove it.
>
> Signed-off-by: Hui Su <sh_def@163.com>
> ---
> kernel/sched/fair.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e38378c..5ecbf5e63198 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5105,9 +5105,6 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
> return;
>
> distribute_cfs_runtime(cfs_b);
> -
> - raw_spin_lock_irqsave(&cfs_b->lock, flags);
> - raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
> }
>
> /*
> --
> 2.29.0
>
>
Nice :)
Reviewed-by: Phil Auld <pauld@redhat.com>
--
* Re: [PATCH] sched/fair: remove the spin_lock operations
2020-10-30 14:46 [PATCH] sched/fair: remove the spin_lock operations Hui Su
2020-10-30 15:42 ` Phil Auld
@ 2020-10-30 18:48 ` Benjamin Segall
2020-10-30 22:16 ` David Laight
2020-11-11 8:23 ` [tip: sched/core] sched/fair: Remove superfluous lock section in do_sched_cfs_slack_timer() tip-bot2 for Hui Su
2 siblings, 1 reply; 7+ messages in thread
From: Benjamin Segall @ 2020-10-30 18:48 UTC (permalink / raw)
To: Hui Su
Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
rostedt, mgorman, bristot, linux-kernel
Hui Su <sh_def@163.com> writes:
> Since 'ab93a4bc955b ("sched/fair: Remove
> distribute_running from CFS bandwidth")', there is
> nothing to protect between raw_spin_lock_irqsave()
> and raw_spin_unlock_irqrestore() in do_sched_cfs_slack_timer().
>
> So remove it.
Reviewed-by: Ben Segall <bsegall@google.com>
(I might nitpick the subject to be clear that it should be trivial
because the lock area is empty, or call them dead or something, but it's
not all that important)
>
> Signed-off-by: Hui Su <sh_def@163.com>
> ---
> kernel/sched/fair.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e38378c..5ecbf5e63198 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5105,9 +5105,6 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
> return;
>
> distribute_cfs_runtime(cfs_b);
> -
> - raw_spin_lock_irqsave(&cfs_b->lock, flags);
> - raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
> }
>
> /*
* RE: [PATCH] sched/fair: remove the spin_lock operations
2020-10-30 18:48 ` Benjamin Segall
@ 2020-10-30 22:16 ` David Laight
2020-11-02 13:53 ` Phil Auld
0 siblings, 1 reply; 7+ messages in thread
From: David Laight @ 2020-10-30 22:16 UTC (permalink / raw)
To: 'Benjamin Segall', Hui Su
Cc: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
rostedt, mgorman, bristot, linux-kernel
From: Benjamin Segall
> Sent: 30 October 2020 18:48
>
> Hui Su <sh_def@163.com> writes:
>
> > Since 'ab93a4bc955b ("sched/fair: Remove
> > distribute_running from CFS bandwidth")', there is
> > nothing to protect between raw_spin_lock_irqsave()
> > and raw_spin_unlock_irqrestore() in do_sched_cfs_slack_timer().
> >
> > So remove it.
>
> Reviewed-by: Ben Segall <bsegall@google.com>
>
> (I might nitpick the subject to be clear that it should be trivial
> because the lock area is empty, or call them dead or something, but it's
> not all that important)
I don't know about this case, but a lock+unlock can be used
to ensure that nothing else holds the lock when acquiring
the lock requires another lock be held.
So if the normal sequence is:
lock(table)
# lookup item
lock(item)
unlock(table)
....
unlock(item)
Then it can make sense to do:
lock(table)
lock(item)
unlock(item)
....
unlock(table)
although that ought to deserve a comment.
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
* Re: [PATCH] sched/fair: remove the spin_lock operations
2020-10-30 22:16 ` David Laight
@ 2020-11-02 13:53 ` Phil Auld
2020-11-02 14:10 ` Hui Su
0 siblings, 1 reply; 7+ messages in thread
From: Phil Auld @ 2020-11-02 13:53 UTC (permalink / raw)
To: David Laight
Cc: 'Benjamin Segall',
Hui Su, mingo, peterz, juri.lelli, vincent.guittot,
dietmar.eggemann, rostedt, mgorman, bristot, linux-kernel
On Fri, Oct 30, 2020 at 10:16:29PM +0000 David Laight wrote:
> From: Benjamin Segall
> > Sent: 30 October 2020 18:48
> >
> > Hui Su <sh_def@163.com> writes:
> >
> > > Since 'ab93a4bc955b ("sched/fair: Remove
> > > distribute_running from CFS bandwidth")', there is
> > > nothing to protect between raw_spin_lock_irqsave()
> > > and raw_spin_unlock_irqrestore() in do_sched_cfs_slack_timer().
> > >
> > > So remove it.
> >
> > Reviewed-by: Ben Segall <bsegall@google.com>
> >
> > (I might nitpick the subject to be clear that it should be trivial
> > because the lock area is empty, or call them dead or something, but it's
> > not all that important)
>
> I don't know about this case, but a lock+unlock can be used
> to ensure that nothing else holds the lock when acquiring
> the lock requires another lock be held.
>
> So if the normal sequence is:
> lock(table)
> # lookup item
> lock(item)
> unlock(table)
> ....
> unlock(item)
>
> Then it can make sense to do:
> lock(table)
> lock(item)
> unlock(item)
> ....
> unlock(table)
>
> although that ought to deserve a comment.
>
Nah, this one used to be like this:
raw_spin_lock_irqsave(&cfs_b->lock, flags);
lsub_positive(&cfs_b->runtime, runtime);
cfs_b->distribute_running = 0;
raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
It's just a leftover. I agree that if it were there for some other
purpose it would really need a comment. In this case, it's an
artifact of patch-based development, I think.
Cheers,
Phil
> David
>
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)
>
--
* Re: [PATCH] sched/fair: remove the spin_lock operations
2020-11-02 13:53 ` Phil Auld
@ 2020-11-02 14:10 ` Hui Su
0 siblings, 0 replies; 7+ messages in thread
From: Hui Su @ 2020-11-02 14:10 UTC (permalink / raw)
To: Phil Auld, David.Laight, linux-kernel, bsegall, mingo
On Mon, Nov 02, 2020 at 08:53:41AM -0500, Phil Auld wrote:
> On Fri, Oct 30, 2020 at 10:16:29PM +0000 David Laight wrote:
> > From: Benjamin Segall
> > > Sent: 30 October 2020 18:48
> > >
> > > Hui Su <sh_def@163.com> writes:
> > >
> > > > Since 'ab93a4bc955b ("sched/fair: Remove
> > > > distribute_running from CFS bandwidth")', there is
> > > > nothing to protect between raw_spin_lock_irqsave()
> > > > and raw_spin_unlock_irqrestore() in do_sched_cfs_slack_timer().
> > > >
> > > > So remove it.
> > >
> > > Reviewed-by: Ben Segall <bsegall@google.com>
> > >
> > > (I might nitpick the subject to be clear that it should be trivial
> > > because the lock area is empty, or call them dead or something, but it's
> > > not all that important)
> >
> > I don't know about this case, but a lock+unlock can be used
> > to ensure that nothing else holds the lock when acquiring
> > the lock requires another lock be held.
> >
> > So if the normal sequence is:
> > lock(table)
> > # lookup item
> > lock(item)
> > unlock(table)
> > ....
> > unlock(item)
> >
> > Then it can make sense to do:
> > lock(table)
> > lock(item)
> > unlock(item)
> > ....
> > unlock(table)
> >
> > although that ought to deserve a comment.
> >
>
> Nah, this one used to be like this:
>
> raw_spin_lock_irqsave(&cfs_b->lock, flags);
> lsub_positive(&cfs_b->runtime, runtime);
> cfs_b->distribute_running = 0;
> raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
>
> It's just a leftover. I agree that if it were there for some other
> purpose it would really need a comment. In this case, it's an
> artifact of patch-based development, I think.
>
>
> Cheers,
> Phil
>
Yeah, thanks for your explanation, Phil.
It is just a leftover.
* [tip: sched/core] sched/fair: Remove superfluous lock section in do_sched_cfs_slack_timer()
2020-10-30 14:46 [PATCH] sched/fair: remove the spin_lock operations Hui Su
2020-10-30 15:42 ` Phil Auld
2020-10-30 18:48 ` Benjamin Segall
@ 2020-11-11 8:23 ` tip-bot2 for Hui Su
2 siblings, 0 replies; 7+ messages in thread
From: tip-bot2 for Hui Su @ 2020-11-11 8:23 UTC (permalink / raw)
To: linux-tip-commits
Cc: Hui Su, Peter Zijlstra (Intel), Phil Auld, Ben Segall, x86, linux-kernel
The following commit has been merged into the sched/core branch of tip:
Commit-ID: cdb310474dece99985e4cdd2b96b1324e39c1c9d
Gitweb: https://git.kernel.org/tip/cdb310474dece99985e4cdd2b96b1324e39c1c9d
Author: Hui Su <sh_def@163.com>
AuthorDate: Fri, 30 Oct 2020 22:46:21 +08:00
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 10 Nov 2020 18:39:05 +01:00
sched/fair: Remove superfluous lock section in do_sched_cfs_slack_timer()
Since ab93a4bc955b ("sched/fair: Remove distribute_running from CFS
bandwidth"), there is nothing to protect between
raw_spin_lock_irqsave() and raw_spin_unlock_irqrestore() in
do_sched_cfs_slack_timer().
Signed-off-by: Hui Su <sh_def@163.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Phil Auld <pauld@redhat.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Link: https://lkml.kernel.org/r/20201030144621.GA96974@rlk
---
kernel/sched/fair.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2755a7e..3e5d98f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5126,9 +5126,6 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
return;
distribute_cfs_runtime(cfs_b);
-
- raw_spin_lock_irqsave(&cfs_b->lock, flags);
- raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
}
/*
end of thread, other threads:[~2020-11-11 8:23 UTC | newest]
Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-30 14:46 [PATCH] sched/fair: remove the spin_lock operations Hui Su
2020-10-30 15:42 ` Phil Auld
2020-10-30 18:48 ` Benjamin Segall
2020-10-30 22:16 ` David Laight
2020-11-02 13:53 ` Phil Auld
2020-11-02 14:10 ` Hui Su
2020-11-11 8:23 ` [tip: sched/core] sched/fair: Remove superfluous lock section in do_sched_cfs_slack_timer() tip-bot2 for Hui Su