From: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
To: Aubrey Li <aubrey.li@linux.intel.com>
Cc: Ingo Molnar <mingo@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	Rik van Riel <riel@surriel.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Valentin Schneider <valentin.schneider@arm.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Gautham R Shenoy <ego@linux.vnet.ibm.com>,
	Parth Shah <parth@linux.ibm.com>
Subject: Re: [PATCH v2 6/8] sched/idle: Move busy_cpu accounting to idle callback
Date: Thu, 13 May 2021 13:01:12 +0530	[thread overview]
Message-ID: <20210513073112.GV2633526@linux.vnet.ibm.com> (raw)
In-Reply-To: <47d29f1d-cea6-492a-5125-85db6bce0fa7@linux.intel.com>

* Aubrey Li <aubrey.li@linux.intel.com> [2021-05-12 16:08:24]:

> On 5/7/21 12:45 AM, Srikar Dronamraju wrote:
> > Currently we account nr_busy_cpus in the nohz idle functions.
> > There is no reason why nr_busy_cpus should be updated only in
> > NO_HZ_COMMON configs. Also, the scheduler can mark a CPU as non-busy
> > as soon as an idle-class task starts to run. The scheduler can then
> > mark a CPU as busy as soon as it's woken up from idle or a new task
> > is placed on its runqueue.
> 
> IIRC, we discussed this before: if a SCHED_IDLE task is placed on the
> CPU's runqueue, this CPU should still be taken as a wakeup target.
> 

Yes, this CPU is still a wakeup target; it's only when this CPU is busy
that we look at other CPUs.
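
For context, the kind of check I mean looks roughly like the sketch
below. It assumes the existing available_idle_cpu() and
sched_idle_cpu() helpers in kernel/sched/; the wrapper itself is only
illustrative and is not part of this series.

	/* Illustrative only, not part of this series. */
	static inline bool wakeup_target(int cpu)
	{
		/* Truly idle, or running nothing but SCHED_IDLE tasks. */
		return available_idle_cpu(cpu) || sched_idle_cpu(cpu);
	}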

> Also, for frequent context-switching tasks with very short idle
> periods, it's expensive for the scheduler to mark idle/busy every
> time. That's why my patch only marks idle every time and rate-limits
> marking busy to the scheduler tick.
> 

I have tried a few tasks with very short idle times, and updating
nr_busy every time doesn't seem to have a noticeable impact. In fact,
it seems to help in picking the idler LLC more often.
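
To spell out the cost: each idle <-> busy transition is one atomic op
on the LLC-shared nr_busy_cpus plus a per-CPU flag write. Condensed
from the hunks above (the mark_cpu_idle()/mark_cpu_busy() helper names
are mine, purely for illustration), the accounting is roughly:

	static void mark_cpu_idle(int cpu, struct sched_domain_shared *sds)
	{
		/* From pick_next_task_idle(); never drop below zero. */
		atomic_add_unless(&sds->nr_busy_cpus, -1, 0);
		per_cpu(is_idle, cpu) = 1;
	}

	static void mark_cpu_busy(int cpu, struct sched_domain_shared *sds)
	{
		/* From put_prev_task_idle(); count each transition once. */
		if (per_cpu(is_idle, cpu)) {
			atomic_inc(&sds->nr_busy_cpus);
			per_cpu(is_idle, cpu) = 0;
		}
	}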

> Thanks,
> -Aubrey
> 
> > 
> > Cc: LKML <linux-kernel@vger.kernel.org>
> > Cc: Gautham R Shenoy <ego@linux.vnet.ibm.com>
> > Cc: Parth Shah <parth@linux.ibm.com>
> > Cc: Ingo Molnar <mingo@kernel.org>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Valentin Schneider <valentin.schneider@arm.com>
> > Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> > Cc: Mel Gorman <mgorman@techsingularity.net>
> > Cc: Vincent Guittot <vincent.guittot@linaro.org>
> > Cc: Rik van Riel <riel@surriel.com>
> > Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
> > ---
> >  kernel/sched/fair.c     |  6 ++++--
> >  kernel/sched/idle.c     | 29 +++++++++++++++++++++++++++--
> >  kernel/sched/sched.h    |  1 +
> >  kernel/sched/topology.c |  2 ++
> >  4 files changed, 34 insertions(+), 4 deletions(-)
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index c30587631a24..4d3b0928fe98 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -10394,7 +10394,10 @@ static void set_cpu_sd_state_busy(int cpu)
> >  		goto unlock;
> >  	sd->nohz_idle = 0;
> >  
> > -	atomic_inc(&sd->shared->nr_busy_cpus);
> > +	if (sd && per_cpu(is_idle, cpu)) {
> > +		atomic_add_unless(&sd->shared->nr_busy_cpus, 1, per_cpu(sd_llc_size, cpu));
> > +		per_cpu(is_idle, cpu) = 0;
> > +	}
> >  unlock:
> >  	rcu_read_unlock();
> >  }
> > @@ -10424,7 +10427,6 @@ static void set_cpu_sd_state_idle(int cpu)
> >  		goto unlock;
> >  	sd->nohz_idle = 1;
> >  
> > -	atomic_dec(&sd->shared->nr_busy_cpus);
> >  unlock:
> >  	rcu_read_unlock();
> >  }
> > diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> > index cc828f3efe71..e624a05e48bd 100644
> > --- a/kernel/sched/idle.c
> > +++ b/kernel/sched/idle.c
> > @@ -425,12 +425,25 @@ static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int fl
> >  
> >  static void put_prev_task_idle(struct rq *rq, struct task_struct *prev)
> >  {
> > -#ifdef CONFIG_SCHED_SMT
> > +#ifdef CONFIG_SMP
> > +	struct sched_domain_shared *sds;
> >  	int cpu = rq->cpu;
> >  
> > +#ifdef CONFIG_SCHED_SMT
> >  	if (static_branch_likely(&sched_smt_present))
> >  		set_core_busy(cpu);
> >  #endif
> > +
> > +	rcu_read_lock();
> > +	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
> > +	if (sds) {
> > +		if (per_cpu(is_idle, cpu)) {
> > +			atomic_inc(&sds->nr_busy_cpus);
> > +			per_cpu(is_idle, cpu) = 0;
> > +		}
> > +	}
> > +	rcu_read_unlock();
> > +#endif
> >  }
> >  
> >  static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
> > @@ -442,9 +455,21 @@ static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool fir
> >  struct task_struct *pick_next_task_idle(struct rq *rq)
> >  {
> >  	struct task_struct *next = rq->idle;
> > +#ifdef CONFIG_SMP
> > +	struct sched_domain_shared *sds;
> > +	int cpu = rq->cpu;
> >  
> > -	set_next_task_idle(rq, next, true);
> > +	rcu_read_lock();
> > +	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
> > +	if (sds) {
> > +		atomic_add_unless(&sds->nr_busy_cpus, -1, 0);
> > +		per_cpu(is_idle, cpu) = 1;
> > +	}
> >  
> > +	rcu_read_unlock();
> > +#endif
> > +
> > +	set_next_task_idle(rq, next, true);
> >  	return next;
> >  }
> >  
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 5c0bd4b0e73a..baf8d9a4cb26 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -1483,6 +1483,7 @@ DECLARE_PER_CPU(int, sd_llc_id);
> >  #ifdef CONFIG_SCHED_SMT
> >  DECLARE_PER_CPU(int, smt_id);
> >  #endif
> > +DECLARE_PER_CPU(int, is_idle);
> >  DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
> >  DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa);
> >  DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> > index 8db40c8a6ad0..00e4669bb241 100644
> > --- a/kernel/sched/topology.c
> > +++ b/kernel/sched/topology.c
> > @@ -647,6 +647,7 @@ DEFINE_PER_CPU(int, sd_llc_id);
> >  #ifdef CONFIG_SCHED_SMT
> >  DEFINE_PER_CPU(int, smt_id);
> >  #endif
> > +DEFINE_PER_CPU(int, is_idle);
> >  DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
> >  DEFINE_PER_CPU(struct sched_domain __rcu *, sd_numa);
> >  DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
> > @@ -673,6 +674,7 @@ static void update_top_cache_domain(int cpu)
> >  #ifdef CONFIG_SCHED_SMT
> >  	per_cpu(smt_id, cpu) = cpumask_first(cpu_smt_mask(cpu));
> >  #endif
> > +	per_cpu(is_idle, cpu) = 1;
> >  	rcu_assign_pointer(per_cpu(sd_llc_shared, cpu), sds);
> >  
> >  	sd = lowest_flag_domain(cpu, SD_NUMA);
> > 
> 

-- 
Thanks and Regards
Srikar Dronamraju

Thread overview: 31+ messages
2021-05-06 16:45 [PATCH v2 0/8] sched/fair: wake_affine improvements Srikar Dronamraju
2021-05-06 16:45 ` [PATCH v2 1/8] sched/fair: Update affine statistics when needed Srikar Dronamraju
2021-05-07 16:08   ` Valentin Schneider
2021-05-07 17:05     ` Srikar Dronamraju
2021-05-11 11:51       ` Valentin Schneider
2021-05-11 16:22         ` Srikar Dronamraju
2021-05-06 16:45 ` [PATCH v2 2/8] sched/fair: Maintain the identity of idle-core Srikar Dronamraju
2021-05-11 11:51   ` Valentin Schneider
2021-05-11 16:27     ` Srikar Dronamraju
2021-05-06 16:45 ` [PATCH v2 3/8] sched/fair: Update idle-core more often Srikar Dronamraju
2021-05-06 16:45 ` [PATCH v2 4/8] sched/fair: Prefer idle CPU to cache affinity Srikar Dronamraju
2021-05-06 16:45 ` [PATCH v2 5/8] sched/fair: Use affine_idler_llc for wakeups across LLC Srikar Dronamraju
2021-05-06 16:45 ` [PATCH v2 6/8] sched/idle: Move busy_cpu accounting to idle callback Srikar Dronamraju
2021-05-11 11:51   ` Valentin Schneider
2021-05-11 16:55     ` Srikar Dronamraju
2021-05-12  0:32     ` Aubrey Li
2021-05-12  8:08   ` Aubrey Li
2021-05-13  7:31     ` Srikar Dronamraju [this message]
2021-05-14  4:11       ` Aubrey Li
2021-05-17 10:40         ` Srikar Dronamraju
2021-05-17 12:48           ` Aubrey Li
2021-05-17 12:57             ` Srikar Dronamraju
2021-05-18  0:59               ` Aubrey Li
2021-05-18  4:00                 ` Srikar Dronamraju
2021-05-18  6:05                   ` Aubrey Li
2021-05-18  7:18                     ` Srikar Dronamraju
2021-05-19  9:43                       ` Aubrey Li
2021-05-19 17:34                         ` Srikar Dronamraju
2021-05-06 16:45 ` [PATCH v2 7/8] sched/fair: Remove ifdefs in waker_affine_idler_llc Srikar Dronamraju
2021-05-06 16:45 ` [PATCH v2 8/8] sched/fair: Dont iterate if no idle CPUs Srikar Dronamraju
2021-05-06 16:53 ` [PATCH v2 0/8] sched/fair: wake_affine improvements Srikar Dronamraju
