* [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics
@ 2020-10-15  6:48 qianjun.kernel
  2020-10-15 11:02 ` Peter Zijlstra
  2020-10-29 10:51 ` [tip: sched/core] sched/fair: " tip-bot2 for jun qian
  0 siblings, 2 replies; 6+ messages in thread
From: qianjun.kernel @ 2020-10-15  6:48 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot, linux-kernel
  Cc: jun qian, Yafang Shao

From: jun qian <qianjun.kernel@gmail.com>

When sched_schedstat changes from 0 to 1, some sched entities may
already be on the runqueue with se->statistics.wait_start still 0.
That makes the computed delta, rq_clock(rq_of(cfs_rq)) -
se->statistics.wait_start, wrong. We need to avoid this scenario.

Signed-off-by: jun qian <qianjun.kernel@gmail.com>
Reviewed-by: Yafang Shao <laoar.shao@gmail.com>
---
 kernel/sched/fair.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1a68a05..6f8ca0c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -906,6 +906,15 @@ static void update_curr_fair(struct rq *rq)
 	if (!schedstat_enabled())
 		return;
 
+	/*
+	 * When sched_schedstat changes from 0 to 1, some sched entities
+	 * may already be on the runqueue with their wait_start still 0,
+	 * which would make the delta below wrong. We need to avoid this
+	 * scenario.
+	 */
+	if (unlikely(!schedstat_val(se->statistics.wait_start)))
+		return;
+
 	delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(se->statistics.wait_start);
 
 	if (entity_is_task(se)) {
-- 
1.8.3.1
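
[Context note: the reason wait_start can still be 0 is the enqueue-side
helper in the same file, update_stats_wait_start(), which also bails out
early while schedstats are disabled, so an entity queued during that
window never gets a start timestamp. The fragment below is a simplified
paraphrase of that path, not the literal kernel source; in particular
the prev_wait_start/migration handling is omitted.]

static inline void
update_stats_wait_start(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	if (!schedstat_enabled())
		return;	/* an entity enqueued now keeps wait_start == 0 */

	/* simplified: the real code also folds in a previous wait_start */
	__schedstat_set(se->statistics.wait_start, rq_clock(rq_of(cfs_rq)));
}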



* Re: [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics
  2020-10-15  6:48 [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics qianjun.kernel
@ 2020-10-15 11:02 ` Peter Zijlstra
  2020-10-29 10:51 ` [tip: sched/core] sched/fair: " tip-bot2 for jun qian
  1 sibling, 0 replies; 6+ messages in thread
From: Peter Zijlstra @ 2020-10-15 11:02 UTC (permalink / raw)
  To: qianjun.kernel
  Cc: mingo, juri.lelli, vincent.guittot, linux-kernel, Yafang Shao

On Thu, Oct 15, 2020 at 02:48:46PM +0800, qianjun.kernel@gmail.com wrote:
> From: jun qian <qianjun.kernel@gmail.com>
> 
> When sched_schedstat changes from 0 to 1, some sched entities may
> already be on the runqueue with se->statistics.wait_start still 0.
> That makes the computed delta, rq_clock(rq_of(cfs_rq)) -
> se->statistics.wait_start, wrong. We need to avoid this scenario.
> 
> Signed-off-by: jun qian <qianjun.kernel@gmail.com>
> Reviewed-by: Yafang Shao <laoar.shao@gmail.com>

Thanks!


* [tip: sched/core] sched/fair: Improve the accuracy of sched_stat_wait statistics
  2020-10-15  6:48 [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics qianjun.kernel
  2020-10-15 11:02 ` Peter Zijlstra
@ 2020-10-29 10:51 ` tip-bot2 for jun qian
  1 sibling, 0 replies; 6+ messages in thread
From: tip-bot2 for jun qian @ 2020-10-29 10:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: jun qian, Peter Zijlstra (Intel), Yafang Shao, x86, LKML

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     b9c88f752268383beff0d56e50d52b8ae62a02f8
Gitweb:        https://git.kernel.org/tip/b9c88f752268383beff0d56e50d52b8ae62a02f8
Author:        jun qian <qianjun.kernel@gmail.com>
AuthorDate:    Thu, 15 Oct 2020 14:48:46 +08:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Thu, 29 Oct 2020 11:00:28 +01:00

sched/fair: Improve the accuracy of sched_stat_wait statistics

When sched_schedstat changes from 0 to 1, some sched entities may
already be on the runqueue with se->statistics.wait_start still 0.
That makes the computed delta, rq_clock(rq_of(cfs_rq)) -
se->statistics.wait_start, wrong. We need to avoid this scenario.

Signed-off-by: jun qian <qianjun.kernel@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lkml.kernel.org/r/20201015064846.19809-1-qianjun.kernel@gmail.com
---
 kernel/sched/fair.c |  9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 290f9e3..b9368d1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -906,6 +906,15 @@ update_stats_wait_end(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	if (!schedstat_enabled())
 		return;
 
+	/*
+	 * When sched_schedstat changes from 0 to 1, some sched entities
+	 * may already be on the runqueue with their wait_start still 0,
+	 * which would make the delta below wrong. We need to avoid this
+	 * scenario.
+	 */
+	if (unlikely(!schedstat_val(se->statistics.wait_start)))
+		return;
+
 	delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(se->statistics.wait_start);
 
 	if (entity_is_task(se)) {
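
[Illustration, not kernel code; every name below is made up for this
sketch. The stand-alone program models what happens across the 0 -> 1
toggle and what the added wait_start check prevents.]

#include <stdio.h>

/* Toy model of the schedstat toggle; all names here are illustrative. */
static unsigned long long now = 1000000;   /* pretend rq_clock() value  */
static int schedstat;                      /* kernel.sched_schedstat    */

struct toy_se { unsigned long long wait_start; };

static void toy_enqueue(struct toy_se *se)
{
	if (!schedstat)
		return;                    /* wait_start stays 0          */
	se->wait_start = now;
}

static void toy_wait_end(struct toy_se *se)
{
	if (!schedstat)
		return;
	if (!se->wait_start) {             /* the guard added by the fix  */
		printf("skip bogus sample\n");
		return;
	}
	printf("wait delta = %llu\n", now - se->wait_start);
}

int main(void)
{
	struct toy_se se = { 0 };

	toy_enqueue(&se);   /* queued while stats were off                */
	schedstat = 1;      /* admin flips kernel.sched_schedstat to 1    */
	now += 500;
	toy_wait_end(&se);  /* without the guard this reports a huge delta */
	return 0;
}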


* Re: [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics
  2020-10-14 13:19 ` Peter Zijlstra
@ 2020-10-15  6:06   ` Yafang Shao
  0 siblings, 0 replies; 6+ messages in thread
From: Yafang Shao @ 2020-10-15  6:06 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: jun qian, Ingo Molnar, juri.lelli, Vincent Guittot, LKML

On Wed, Oct 14, 2020 at 9:19 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Oct 09, 2020 at 05:25:30PM +0800, qianjun.kernel@gmail.com wrote:
> > From: jun qian <qianjun.kernel@gmail.com>
> >
> > When sched_schedstat changes from 0 to 1, some sched entities may
> > already be on the runqueue with se->statistics.wait_start still 0.
> > That makes the computed delta, rq_clock(rq_of(cfs_rq)) -
> > se->statistics.wait_start, wrong. We need to avoid this scenario.
> >
> > Signed-off-by: jun qian <qianjun.kernel@gmail.com>
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
>
> This SoB chain isn't valid. Did Yafang's tag need to be a Reviewed-by
> or something?
>

This patch improves the behavior when sched_schedstat is changed from
0 to 1, so it looks good to me.

Reviewed-by: Yafang Shao <laoar.shao@gmail.com>

> > ---
> >  kernel/sched/fair.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 1a68a05..6f8ca0c 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -906,6 +906,15 @@ static void update_curr_fair(struct rq *rq)
> >       if (!schedstat_enabled())
> >               return;
> >
> > +     /*
> > +      * When sched_schedstat changes from 0 to 1, some sched entities
> > +      * may already be on the runqueue with their wait_start still 0,
> > +      * which would make the delta below wrong. We need to avoid this
> > +      * scenario.
> > +      */
> > +     if (unlikely(!schedstat_val(se->statistics.wait_start)))
> > +             return;
> > +
> >       delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(se->statistics.wait_start);
> >
> >       if (entity_is_task(se)) {
> > --
> > 1.8.3.1
> >



-- 
Thanks
Yafang


* Re: [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics
  2020-10-09  9:25 [PATCH 1/1] Sched/fair: " qianjun.kernel
@ 2020-10-14 13:19 ` Peter Zijlstra
  2020-10-15  6:06   ` Yafang Shao
  0 siblings, 1 reply; 6+ messages in thread
From: Peter Zijlstra @ 2020-10-14 13:19 UTC (permalink / raw)
  To: qianjun.kernel
  Cc: mingo, juri.lelli, vincent.guittot, linux-kernel, Yafang Shao

On Fri, Oct 09, 2020 at 05:25:30PM +0800, qianjun.kernel@gmail.com wrote:
> From: jun qian <qianjun.kernel@gmail.com>
> 
> When sched_schedstat changes from 0 to 1, some sched entities may
> already be on the runqueue with se->statistics.wait_start still 0.
> That makes the computed delta, rq_clock(rq_of(cfs_rq)) -
> se->statistics.wait_start, wrong. We need to avoid this scenario.
> 
> Signed-off-by: jun qian <qianjun.kernel@gmail.com>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>

This SoB chain isn't valid. Did Yafang's tag need to be a Reviewed-by or
something?

> ---
>  kernel/sched/fair.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1a68a05..6f8ca0c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -906,6 +906,15 @@ static void update_curr_fair(struct rq *rq)
>  	if (!schedstat_enabled())
>  		return;
>  
> +	/*
> +	 * When sched_schedstat changes from 0 to 1, some sched entities
> +	 * may already be on the runqueue with their wait_start still 0,
> +	 * which would make the delta below wrong. We need to avoid this
> +	 * scenario.
> +	 */
> +	if (unlikely(!schedstat_val(se->statistics.wait_start)))
> +		return;
> +
>  	delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(se->statistics.wait_start);
>  
>  	if (entity_is_task(se)) {
> -- 
> 1.8.3.1
> 


* [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics
@ 2020-10-09  9:25 qianjun.kernel
  2020-10-14 13:19 ` Peter Zijlstra
  0 siblings, 1 reply; 6+ messages in thread
From: qianjun.kernel @ 2020-10-09  9:25 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot
  Cc: linux-kernel, jun qian, Yafang Shao

From: jun qian <qianjun.kernel@gmail.com>

When sched_schedstat changes from 0 to 1, some sched entities may
already be on the runqueue with se->statistics.wait_start still 0.
That makes the computed delta, rq_clock(rq_of(cfs_rq)) -
se->statistics.wait_start, wrong. We need to avoid this scenario.

Signed-off-by: jun qian <qianjun.kernel@gmail.com>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 kernel/sched/fair.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1a68a05..6f8ca0c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -906,6 +906,15 @@ static void update_curr_fair(struct rq *rq)
 	if (!schedstat_enabled())
 		return;
 
+	/*
+	 * When sched_schedstat changes from 0 to 1, some sched entities
+	 * may already be on the runqueue with their wait_start still 0,
+	 * which would make the delta below wrong. We need to avoid this
+	 * scenario.
+	 */
+	if (unlikely(!schedstat_val(se->statistics.wait_start)))
+		return;
+
 	delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(se->statistics.wait_start);
 
 	if (entity_is_task(se)) {
-- 
1.8.3.1



