From: Josh Don <joshdon@google.com>
To: Abel Wu <wuyun.abel@bytedance.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Mel Gorman <mgorman@suse.de>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [RFC v2 1/2] sched/fair: filter out overloaded cpus in SIS
Date: Mon, 11 Apr 2022 18:23:20 -0700
Message-ID: <CABk29NtBL2WovUVcxXW8cF7Nk+UM_AeEJaX_JbQ4Wue-qMvz_w@mail.gmail.com>
In-Reply-To: <20220409135104.3733193-2-wuyun.abel@bytedance.com>

Hi Abel,

Thanks for the patch, a few comments:

>  /*
> + * It would be very unlikely to find an unoccupied cpu when system is heavily
> + * overloaded. Even if we could, the cost might bury the benefit.
> + */
> +static inline bool sched_domain_overloaded(struct sched_domain *sd, int nr_overloaded)
> +{
> +       return nr_overloaded > sd->span_weight - (sd->span_weight >> 4);
> +}
> +
> +/*
>   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
>   * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
>   * average idle time for this rq (as found in rq->avg_idle).
> @@ -6291,7 +6300,7 @@ static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd
>  static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool has_idle_core, int target)
>  {
>         struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> -       int i, cpu, idle_cpu = -1, nr = INT_MAX;
> +       int i, cpu, idle_cpu = -1, nr = INT_MAX, nro;
>         struct rq *this_rq = this_rq();
>         int this = smp_processor_id();
>         struct sched_domain *this_sd;
> @@ -6301,7 +6310,13 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>         if (!this_sd)
>                 return -1;
>
> +       nro = atomic_read(&sd->shared->nr_overloaded);
> +       if (sched_domain_overloaded(sd, nro))
> +               return -1;

This early bail-out doesn't seem to be related to the main idea of
your patch. Setting aside how to pick the exact heuristic threshold
for when finding an idle cpu becomes too unlikely, this doesn't work
well for tasks constrained by affinity; a task may have a small
affinity mask containing idle cpus it could wake onto, even if most
of the node is overloaded.
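
If the bail-out stays, one way to avoid penalizing affinity-restricted
tasks might be to only take it when the task can actually use most of
the LLC, roughly like this (untested sketch, reusing the nro you read
just above):

	nro = atomic_read(&sd->shared->nr_overloaded);
	if (p->nr_cpus_allowed >= sd->span_weight &&
	    sched_domain_overloaded(sd, nro))
		return -1;

That still leaves open what the right threshold in
sched_domain_overloaded() should be, though.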

>         cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +       if (nro)
> +               cpumask_andnot(cpus, cpus, sdo_mask(sd->shared));

To prevent us from exhausting our search attempts too quickly, this
only needs to go under the sched_feat(SIS_PROP) && !has_idle_core case
below. But by doing it unconditionally here, I guess your secondary
goal is to reduce total search cost in both cases. Just wondering, did
you observe significant time spent here that you are trying to
optimize? If we shrink the search space by the overload mask, it is
important that the mask is relatively up to date, or else we could
miss an opportunity to find an idle cpu.
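
Concretely, what I had in mind was just moving the trim into that
branch, something like (untested):

	if (sched_feat(SIS_PROP) && !has_idle_core) {
		u64 avg_cost, avg_idle, span_avg;

		/* trim overloaded cpus only when we are about to limit nr */
		if (nro)
			cpumask_andnot(cpus, cpus, sdo_mask(sd->shared));

		/* ... existing avg_cost/avg_idle/nr computation ... */
	}

That way the has_idle_core case still scans the full LLC mask.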

>         if (sched_feat(SIS_PROP) && !has_idle_core) {
>                 u64 avg_cost, avg_idle, span_avg;
> @@ -7018,6 +7033,51 @@ balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
>
>         return newidle_balance(rq, rf) != 0;
>  }
> +
> +static inline bool cfs_rq_overloaded(struct rq *rq)
> +{
> +       return rq->cfs.h_nr_running - rq->cfs.idle_h_nr_running > 1;
> +}

Why > 1 instead of > 0? If a cpu is running 1 non-idle task and any
number of idle tasks, I'd think it is still "occupied" in the way
you've defined. We'd want to steer wakeups to cpus running 0 non-idle
tasks.
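
In other words, if "occupied" is what we actually want to track, I'd
have expected something more like the below (the name is just for
illustration):

static inline bool cfs_rq_occupied(struct rq *rq)
{
	/* any runnable non-idle task makes this cpu a poor wakeup target */
	return rq->cfs.h_nr_running - rq->cfs.idle_h_nr_running > 0;
}

But maybe you intentionally want "more than one non-idle task" here,
in which case a comment spelling that out would help.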

> +static void update_overload_status(struct rq *rq)
> +{
> +       struct sched_domain_shared *sds;
> +       bool overloaded = cfs_rq_overloaded(rq);
> +       int cpu = cpu_of(rq);
> +
> +       lockdep_assert_rq_held(rq);
> +
> +       if (rq->overloaded == overloaded)
> +               return;
> +
> +       rcu_read_lock();
> +       sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
> +       if (unlikely(!sds))
> +               goto unlock;
> +
> +       if (overloaded) {
> +               cpumask_set_cpu(cpu, sdo_mask(sds));
> +               atomic_inc(&sds->nr_overloaded);
> +       } else {
> +               cpumask_clear_cpu(cpu, sdo_mask(sds));
> +               atomic_dec(&sds->nr_overloaded);
> +       }

Why are these cpu mask writes not atomic?

> +
> +       rq->overloaded = overloaded;
> +unlock:
> +       rcu_read_unlock();
> +}
> +
> +#else
> +
> +static inline void update_overload_status(struct rq *rq) { }
> +
>  #endif /* CONFIG_SMP */
>
>  static unsigned long wakeup_gran(struct sched_entity *se)
> @@ -7365,6 +7425,8 @@ done: __maybe_unused;
>         if (new_tasks > 0)
>                 goto again;
>
> +       update_overload_status(rq);
> +
>         /*
>          * rq is about to be idle, check if we need to update the
>          * lost_idle_time of clock_pelt
> @@ -11183,6 +11245,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
>         if (static_branch_unlikely(&sched_numa_balancing))
>                 task_tick_numa(rq, curr);
>
> +       update_overload_status(rq);
>         update_misfit_status(curr, rq);
>         update_overutilized_status(task_rq(curr));

I'd caution against using task_tick and pick_next_task_fair as the
places where we set and clear overload.

Some issues with task_tick:
- ticks may be disabled in NO_HZ_FULL (an issue if we define overload
as > 0 non-idle tasks)
- most ticks will observe the same state, so the check is largely
redundant. An edge-based trigger, such as enqueue/dequeue (somewhat
similar to rq->rd->overload), could work instead; a rough sketch
follows.
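
E.g. something like this (untested; eliding the existing bodies of the
two hooks), so the mask tracks enqueue/dequeue edges rather than
waiting for the next tick:

static void enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
{
	/* ... existing enqueue path ... */
	update_overload_status(rq);
}

static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
{
	/* ... existing dequeue path ... */
	update_overload_status(rq);
}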

With pick_next_task_fair():
- there's a window between a thread dequeuing and the scheduler
running through to the end of pick_next_task_fair(), during which we
falsely observe the cpu as overloaded
- this breaks with core scheduling, since we may use pick_task_fair()
rather than pick_next_task_fair()

Thanks,
Josh
