linux-kernel.vger.kernel.org archive mirror
From: Vincent Guittot <vincent.guittot@linaro.org>
To: "Li, Aubrey" <aubrey.li@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>,
	kernel test robot <rong.a.chen@intel.com>,
	0day robot <lkp@intel.com>, Mel Gorman <mgorman@suse.de>,
	Qais Yousef <qais.yousef@arm.com>,
	Valentin Schneider <valentin.schneider@arm.com>,
	Jiang Biao <benbjiang@gmail.com>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	LKML <linux-kernel@vger.kernel.org>,
	lkp@lists.01.org, "Huang, Ying" <ying.huang@intel.com>,
	"Tang, Feng" <feng.tang@intel.com>,
	zhengjun.xing@intel.com, Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Aubrey Li <aubrey.li@intel.com>,
	Chen Yu <yu.c.chen@intel.com>
Subject: Re: [sched/fair] 8d86968ac3: netperf.Throughput_tps -29.5% regression
Date: Wed, 2 Dec 2020 17:22:59 +0100	[thread overview]
Message-ID: <CAKfTPtCydzVv45qbsDTG2XDS=4EF4KuuYg5mjnDDF_81B5p2kA@mail.gmail.com> (raw)
In-Reply-To: <b45171de-cb74-bf35-91bf-967dbd5567d1@linux.intel.com>

On Wed, 2 Dec 2020 at 15:30, Li, Aubrey <aubrey.li@linux.intel.com> wrote:
>
> Hi Mel,
>
> On 2020/11/26 20:13, Mel Gorman wrote:
> > On Thu, Nov 26, 2020 at 02:57:07PM +0800, Li, Aubrey wrote:
> >> Hi Robot,
> >>
> >> On 2020/11/25 17:09, kernel test robot wrote:
> >>> Greeting,
> >>>
> >>> FYI, we noticed a -29.5% regression of netperf.Throughput_tps due to commit:
> >>>
> >>>
> >>> commit: 8d86968ac36ea5bff487f70b5ffc252a87d44c51 ("[RFC PATCH v4] sched/fair: select idle cpu from idle cpumask for task wakeup")
> >>> url: https://github.com/0day-ci/linux/commits/Aubrey-Li/sched-fair-select-idle-cpu-from-idle-cpumask-for-task-wakeup/20201118-115145
> >>> base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git 09162bc32c880a791c6c0668ce0745cf7958f576
> >>
> >> I tried to replicate this on my side as well, on a 192-thread (SMT) machine, and didn't see the regression.
> >>
> >> nr_threads    v5.9.8           +patch
> >> 96 (50%)      1 (+/- 2.499%)   1.007672 (+/- 3.0872%)
> >>
> >> I also tested a 100% load case and saw an improvement similar to what I saw with the uperf benchmark:
> >>
> >> nr_threads    v5.9.8           +patch
> >> 192 (100%)    1 (+/- 45.32%)   1.864917 (+/- 23.29%)
> >>
> >> My base is v5.9.8 BTW.
> >>
> >>>     ip: ipv4
> >>>     runtime: 300s
> >>>     nr_threads: 50%
> >>>     cluster: cs-localhost
> >>>     test: UDP_RR
> >>>     cpufreq_governor: performance
> >>>     ucode: 0x5003003
> >>>
> >
> > Note that I suspect that regressions with this will be tricky to reproduce
> > because it'll depend on the timing of when the idle mask gets updated. With
> > this configuration, 50% "threads" likely translates into 1 client/server
> > per thread, or 100% of CPUs active; but as it's a ping-pong workload, the
> > pairs are rapidly idling for very short periods.
>
> I tried to replicate this regression but found nothing solid. I ran the
> 300s, 50%-thread netperf case 30 times, and all the results were better
> than the baseline. The only interesting thing I found is the option
> CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_32B=y, but it behaves differently on
> different machines. In case I missed anything, do you have any
> suggestions for replicating this regression?
>
> >
> > If the idle mask is not getting cleared then select_idle_cpu() is
> > probably returning immediately. select_idle_core() is almost certainly
> > failing so that just leaves select_idle_smt() to find a potentially idle
> > CPU. That's a limited search space so tasks may be getting stacked and
> > missing CPUs that are idling for short periods.
>
> Vincent suggested we decouple idle cpumask from short idle(stop tick) and
> set it every time the CPU enters idle, I'll make this change in V6.

This v6 behavior is much more conservative regarding the idle cpumask
and should resolve the regression that appeared with v4.

>
> >
> > On the flip side, I expect cases like hackbench to benefit because it
> > can saturate a machine to such a degree that select_idle_cpu() is a waste
> > of time.
>
> Yes, I believe that's also why I saw uperf/netperf improvement at high
> load levels.
>
> >
> > That said, I haven't followed the different versions closely. I know v5
> > got a lot of feedback so will take a closer look at v6. Fundamentally
> > though I expect that using the idle mask will be a mixed bag. At low
> > utilisation or over-saturation, it'll be a benefit. At the point where
> > the machine is almost fully busy, some workloads will benefit (lightly
> > communicating workloads that occasionally migrate) and others will not
> > (ping-pong workloads looking for CPUs that are idle for very brief
> > periods).
>
> Do you have any workload [matrix] of interest that I can measure?
>
> >
> > It's tricky enough that it might benefit from a sched_feat() check that
> > is default true so it gets tested. For regressions that show up, it'll
> > be easy enough to ask for the feature to be disabled to see if it fixes
> > it. Over time, that might give an idea of exactly what sort of workloads
> > benefit and what suffers.
>
> Okay, I'll add a sched_feat() for this feature.
>
> >
> > Note that the cost of select_idle_cpu() can also be reduced by enabling
> > SIS_AVG_CPU so it would be interesting to know if the idle mask is superior
> > or inferior to SIS_AVG_CPU for workloads that show regressions.
> >
>
> Thanks,
> -Aubrey

Thread overview: 14+ messages
2020-11-16 20:04 [RFC PATCH v4] sched/fair: select idle cpu from idle cpumask for task wakeup Aubrey Li
2020-11-18 12:06 ` Valentin Schneider
2020-11-19  1:13   ` Li, Aubrey
2020-11-18 13:36 ` Vincent Guittot
2020-11-19  1:34   ` Li, Aubrey
2020-11-19  8:19     ` Vincent Guittot
2020-11-19 11:41       ` Li, Aubrey
2020-11-22 14:03 ` [sched/fair] 8d86968ac3: hackbench.throughput 51.7% improvement kernel test robot
2020-11-25  9:09 ` [sched/fair] 8d86968ac3: netperf.Throughput_tps -29.5% regression kernel test robot
2020-11-26  6:57   ` Li, Aubrey
2020-11-26 12:13     ` Mel Gorman
2020-12-02 14:29       ` Li, Aubrey
2020-12-02 14:48         ` Mel Gorman
2020-12-02 16:22         ` Vincent Guittot [this message]
