linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics
@ 2020-10-09  9:25 qianjun.kernel
  2020-10-14 13:19 ` Peter Zijlstra
  0 siblings, 1 reply; 5+ messages in thread
From: qianjun.kernel @ 2020-10-09  9:25 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot
  Cc: linux-kernel, jun qian, Yafang Shao

From: jun qian <qianjun.kernel@gmail.com>

When sched_schedstat is switched from 0 to 1, some sched entities may
already be on the runqueue with se->statistics.wait_start still 0, so
the delta computed as (rq_clock(rq_of(cfs_rq)) - se->statistics.wait_start)
will be wrong. We need to avoid this scenario.

Signed-off-by: jun qian <qianjun.kernel@gmail.com>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 kernel/sched/fair.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1a68a05..6f8ca0c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -906,6 +906,15 @@ static void update_curr_fair(struct rq *rq)
 	if (!schedstat_enabled())
 		return;
 
+	/*
+	 * When sched_schedstat is switched from 0 to 1, some sched
+	 * entities may already be on the runqueue with
+	 * se->statistics.wait_start still 0, which would make the
+	 * delta below bogus. Skip the update in that case.
+	 */
+	if (unlikely(!schedstat_val(se->statistics.wait_start)))
+		return;
+
 	delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(se->statistics.wait_start);
 
 	if (entity_is_task(se)) {
-- 
1.8.3.1
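
A minimal userspace sketch of the failure mode the new guard avoids may
help readers who are not steeped in the scheduler code. It is
illustrative only, not kernel code: sim_rq_clock() and its return value
are made-up stand-ins for rq_clock(rq_of(cfs_rq)), and the check at the
end mirrors the guard added by the patch.

#include <stdio.h>

typedef unsigned long long u64;

/* Stand-in for the nanosecond-resolution runqueue clock. */
static u64 sim_rq_clock(void)
{
	return 123456789000ULL;
}

int main(void)
{
	/* Entity enqueued while schedstats was 0: wait_start never set. */
	u64 wait_start = 0;
	u64 now = sim_rq_clock();

	/* Without the guard the "wait time" is the absolute clock value. */
	printf("bogus delta: %llu ns\n", now - wait_start);

	/* With the guard the sample is simply skipped. */
	if (!wait_start)
		printf("wait_start == 0, sample skipped\n");

	return 0;
}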



* Re: [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics
  2020-10-09  9:25 [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics qianjun.kernel
@ 2020-10-14 13:19 ` Peter Zijlstra
  2020-10-15  6:06   ` Yafang Shao
  0 siblings, 1 reply; 5+ messages in thread
From: Peter Zijlstra @ 2020-10-14 13:19 UTC (permalink / raw)
  To: qianjun.kernel
  Cc: mingo, juri.lelli, vincent.guittot, linux-kernel, Yafang Shao

On Fri, Oct 09, 2020 at 05:25:30PM +0800, qianjun.kernel@gmail.com wrote:
> From: jun qian <qianjun.kernel@gmail.com>
> 
> When sched_schedstat is switched from 0 to 1, some sched entities may
> already be on the runqueue with se->statistics.wait_start still 0, so
> the delta computed as (rq_clock(rq_of(cfs_rq)) - se->statistics.wait_start)
> will be wrong. We need to avoid this scenario.
> 
> Signed-off-by: jun qian <qianjun.kernel@gmail.com>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>

This SoB chain isn't valid. Did Yafang's tag need to be a Reviewed-by or
something?

> ---
>  kernel/sched/fair.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1a68a05..6f8ca0c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -906,6 +906,15 @@ static void update_curr_fair(struct rq *rq)
>  	if (!schedstat_enabled())
>  		return;
>  
> +	/*
> +	 * When sched_schedstat is switched from 0 to 1, some sched
> +	 * entities may already be on the runqueue with
> +	 * se->statistics.wait_start still 0, which would make the
> +	 * delta below bogus. Skip the update in that case.
> +	 */
> +	if (unlikely(!schedstat_val(se->statistics.wait_start)))
> +		return;
> +
>  	delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(se->statistics.wait_start);
>  
>  	if (entity_is_task(se)) {
> -- 
> 1.8.3.1
> 


* Re: [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics
  2020-10-14 13:19 ` Peter Zijlstra
@ 2020-10-15  6:06   ` Yafang Shao
  0 siblings, 0 replies; 5+ messages in thread
From: Yafang Shao @ 2020-10-15  6:06 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: jun qian, Ingo Molnar, juri.lelli, Vincent Guittot, LKML

On Wed, Oct 14, 2020 at 9:19 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Oct 09, 2020 at 05:25:30PM +0800, qianjun.kernel@gmail.com wrote:
> > From: jun qian <qianjun.kernel@gmail.com>
> >
> > When sched_schedstat is switched from 0 to 1, some sched entities may
> > already be on the runqueue with se->statistics.wait_start still 0, so
> > the delta computed as (rq_clock(rq_of(cfs_rq)) - se->statistics.wait_start)
> > will be wrong. We need to avoid this scenario.
> >
> > Signed-off-by: jun qian <qianjun.kernel@gmail.com>
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
>
> This SoB chain isn't valid. Did Yafang's tag need to be a Reviewed-by or
> something?
>

This patch improves the behavior when sched_schedstat is changed from
0 to 1, so it looks good to me.

Reviewed-by: Yafang Shao <laoar.shao@gmail.com>

> > ---
> >  kernel/sched/fair.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 1a68a05..6f8ca0c 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -906,6 +906,15 @@ static void update_curr_fair(struct rq *rq)
> >       if (!schedstat_enabled())
> >               return;
> >
> > +     /*
> > +      * When sched_schedstat is switched from 0 to 1, some sched
> > +      * entities may already be on the runqueue with
> > +      * se->statistics.wait_start still 0, which would make the
> > +      * delta below bogus. Skip the update in that case.
> > +      */
> > +     if (unlikely(!schedstat_val(se->statistics.wait_start)))
> > +             return;
> > +
> >       delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(se->statistics.wait_start);
> >
> >       if (entity_is_task(se)) {
> > --
> > 1.8.3.1
> >



-- 
Thanks
Yafang
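
As an aside, here is a hedged sketch of how the 0 to 1 transition Yafang
mentions is typically driven from userspace, and how the per-task wait
statistics can be inspected afterwards. It assumes the sched_schedstats
sysctl exposed at /proc/sys/kernel/sched_schedstats (writing it needs
root) and a kernel built with CONFIG_SCHED_DEBUG so that /proc/self/sched
exists; error handling is deliberately minimal.

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f;
	char line[256];

	/* Flip sched_schedstats from 0 to 1 (the transition discussed above). */
	f = fopen("/proc/sys/kernel/sched_schedstats", "w");
	if (f) {
		fputs("1\n", f);
		fclose(f);
	}

	/*
	 * The se.statistics.* counters, including the wait statistics this
	 * patch touches, only accumulate while schedstats is enabled.
	 */
	f = fopen("/proc/self/sched", "r");
	if (!f)
		return 1;

	while (fgets(line, sizeof(line), f))
		if (strstr(line, "wait_sum") || strstr(line, "wait_max"))
			fputs(line, stdout);

	fclose(f);
	return 0;
}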


* Re: [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics
  2020-10-15  6:48 qianjun.kernel
@ 2020-10-15 11:02 ` Peter Zijlstra
  0 siblings, 0 replies; 5+ messages in thread
From: Peter Zijlstra @ 2020-10-15 11:02 UTC (permalink / raw)
  To: qianjun.kernel
  Cc: mingo, juri.lelli, vincent.guittot, linux-kernel, Yafang Shao

On Thu, Oct 15, 2020 at 02:48:46PM +0800, qianjun.kernel@gmail.com wrote:
> From: jun qian <qianjun.kernel@gmail.com>
> 
> When sched_schedstat is switched from 0 to 1, some sched entities may
> already be on the runqueue with se->statistics.wait_start still 0, so
> the delta computed as (rq_clock(rq_of(cfs_rq)) - se->statistics.wait_start)
> will be wrong. We need to avoid this scenario.
> 
> Signed-off-by: jun qian <qianjun.kernel@gmail.com>
> Reviewed-by: Yafang Shao <laoar.shao@gmail.com>

Thanks!


* [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics
@ 2020-10-15  6:48 qianjun.kernel
  2020-10-15 11:02 ` Peter Zijlstra
  0 siblings, 1 reply; 5+ messages in thread
From: qianjun.kernel @ 2020-10-15  6:48 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot, linux-kernel
  Cc: jun qian, Yafang Shao

From: jun qian <qianjun.kernel@gmail.com>

When sched_schedstat is switched from 0 to 1, some sched entities may
already be on the runqueue with se->statistics.wait_start still 0, so
the delta computed as (rq_clock(rq_of(cfs_rq)) - se->statistics.wait_start)
will be wrong. We need to avoid this scenario.

Signed-off-by: jun qian <qianjun.kernel@gmail.com>
Reviewed-by: Yafang Shao <laoar.shao@gmail.com>
---
 kernel/sched/fair.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1a68a05..6f8ca0c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -906,6 +906,15 @@ static void update_curr_fair(struct rq *rq)
 	if (!schedstat_enabled())
 		return;
 
+	/*
+	 * When sched_schedstat is switched from 0 to 1, some sched
+	 * entities may already be on the runqueue with
+	 * se->statistics.wait_start still 0, which would make the
+	 * delta below bogus. Skip the update in that case.
+	 */
+	if (unlikely(!schedstat_val(se->statistics.wait_start)))
+		return;
+
 	delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(se->statistics.wait_start);
 
 	if (entity_is_task(se)) {
-- 
1.8.3.1
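
For completeness, a small hedged sketch of how the sched_stat_wait
samples this patch makes trustworthy can be observed. It assumes tracefs
is mounted at /sys/kernel/tracing, root privileges, and that schedstats
has already been switched on (see the sysctl sketch earlier in the
thread); the event and file paths are the usual ones, but treat them as
assumptions if your setup differs.

#include <stdio.h>

int main(void)
{
	FILE *f;
	char line[512];

	/* Enable the sched:sched_stat_wait tracepoint. */
	f = fopen("/sys/kernel/tracing/events/sched/sched_stat_wait/enable", "w");
	if (f) {
		fputs("1\n", f);
		fclose(f);
	}

	/*
	 * Stream a handful of samples. With the guard applied, flipping
	 * schedstats from 0 to 1 no longer produces absurd deltas for
	 * entities that were already queued.
	 */
	f = fopen("/sys/kernel/tracing/trace_pipe", "r");
	if (!f)
		return 1;

	for (int i = 0; i < 10 && fgets(line, sizeof(line), f); i++)
		fputs(line, stdout);

	fclose(f);
	return 0;
}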



Thread overview: 5+ messages
2020-10-09  9:25 [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics qianjun.kernel
2020-10-14 13:19 ` Peter Zijlstra
2020-10-15  6:06   ` Yafang Shao
2020-10-15  6:48 qianjun.kernel
2020-10-15 11:02 ` Peter Zijlstra
