From: Mike Galbraith <efault@gmx.de>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Valentin Schneider <valentin.schneider@arm.com>,
	Aubrey Li <aubrey.li@linux.intel.com>,
	Barry Song <song.bao.hua@hisilicon.com>,
	Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 1/2] sched/fair: Couple wakee flips with heavy wakers
Date: Tue, 26 Oct 2021 14:13:59 +0200	[thread overview]
Message-ID: <65e20ad92f2580c632f793eafce59140b8b4c827.camel@gmx.de> (raw)
In-Reply-To: <20211026115707.GN3959@techsingularity.net>

On Tue, 2021-10-26 at 12:57 +0100, Mel Gorman wrote:
> 
> The patch in question was also tested on other workloads on NUMA
> machines. For a 2-socket machine (20 cores, HT enabled so 40 CPUs)
> running specjbb 2005 with one JVM per NUMA node, the patch also scaled
> reasonably well

That's way more interesting.  No idea what this thing does under the
hood, thus whether it should be helped or not, but at least it's a
real-deal benchmark vs a kernel hacker tool.

> specjbb
>                               5.15.0-rc3             5.15.0-rc3
>                                  vanilla  sched-wakeeflips-v1r1
> Hmean     tput-1     50044.48 (   0.00%)    53969.00 *   7.84%*
> Hmean     tput-2    106050.31 (   0.00%)   113580.78 *   7.10%*
> Hmean     tput-3    156701.44 (   0.00%)   164857.00 *   5.20%*
> Hmean     tput-4    196538.75 (   0.00%)   218373.42 *  11.11%*
> Hmean     tput-5    247566.16 (   0.00%)   267173.09 *   7.92%*
> Hmean     tput-6    284981.46 (   0.00%)   311007.14 *   9.13%*
> Hmean     tput-7    328882.48 (   0.00%)   359373.89 *   9.27%*
> Hmean     tput-8    366941.24 (   0.00%)   393244.37 *   7.17%*
> Hmean     tput-9    402386.74 (   0.00%)   433010.43 *   7.61%*
> Hmean     tput-10   437551.05 (   0.00%)   475756.08 *   8.73%*
> Hmean     tput-11   481349.41 (   0.00%)   519824.54 *   7.99%*
> Hmean     tput-12   533148.45 (   0.00%)   565070.21 *   5.99%*
> Hmean     tput-13   570563.97 (   0.00%)   609499.06 *   6.82%*
> Hmean     tput-14   601117.97 (   0.00%)   647876.05 *   7.78%*
> Hmean     tput-15   639096.38 (   0.00%)   690854.46 *   8.10%*
> Hmean     tput-16   682644.91 (   0.00%)   722826.06 *   5.89%*
> Hmean     tput-17   732248.96 (   0.00%)   758805.17 *   3.63%*
> Hmean     tput-18   762771.33 (   0.00%)   791211.66 *   3.73%*
> Hmean     tput-19   780582.92 (   0.00%)   819064.19 *   4.93%*
> Hmean     tput-20   812183.95 (   0.00%)   836664.87 *   3.01%*
> Hmean     tput-21   821415.48 (   0.00%)   833734.23 (   1.50%)
> Hmean     tput-22   815457.65 (   0.00%)   844393.98 *   3.55%*
> Hmean     tput-23   819263.63 (   0.00%)   846109.07 *   3.28%*
> Hmean     tput-24   817962.95 (   0.00%)   839682.92 *   2.66%*
> Hmean     tput-25   807814.64 (   0.00%)   841826.52 *   4.21%*
> Hmean     tput-26   811755.89 (   0.00%)   838543.08 *   3.30%*
> Hmean     tput-27   799341.75 (   0.00%)   833487.26 *   4.27%*
> Hmean     tput-28   803434.89 (   0.00%)   829022.50 *   3.18%*
> Hmean     tput-29   803233.25 (   0.00%)   826622.37 *   2.91%*
> Hmean     tput-30   800465.12 (   0.00%)   824347.42 *   2.98%*
> Hmean     tput-31   791284.39 (   0.00%)   791575.67 (   0.04%)
> Hmean     tput-32   781930.07 (   0.00%)   805725.80 (   3.04%)
> Hmean     tput-33   785194.31 (   0.00%)   804795.44 (   2.50%)
> Hmean     tput-34   781325.67 (   0.00%)   800067.53 (   2.40%)
> Hmean     tput-35   777715.92 (   0.00%)   753926.32 (  -3.06%)
> Hmean     tput-36   770516.85 (   0.00%)   783328.32 (   1.66%)
> Hmean     tput-37   758067.26 (   0.00%)   772243.18 *   1.87%*
> Hmean     tput-38   764815.45 (   0.00%)   769156.32 (   0.57%)
> Hmean     tput-39   757885.41 (   0.00%)   757670.59 (  -0.03%)
> Hmean     tput-40   750140.15 (   0.00%)   760739.13 (   1.41%)
> 
> The largest regression was within noise. Most results were outside the
> noise.
> 
> Some HPC workloads showed little difference, but they do not communicate
> that heavily. A redis microbenchmark showed mostly neutral results.
> schbench (a latency-sensitive workload simulator from Facebook) showed a
> mix of results, but the patch helped more than it hurt. Even the machine
> with the worst results for schbench showed improved wakeup latencies at
> the 99th percentile. These were all on NUMA machines.
> 



Thread overview: 26+ messages
2021-10-21 14:56 [PATCH 0/2] Reduce stacking and overscheduling Mel Gorman
2021-10-21 14:56 ` [PATCH 1/2] sched/fair: Couple wakee flips with heavy wakers Mel Gorman
2021-10-22 10:26   ` Mike Galbraith
2021-10-22 11:05     ` Mel Gorman
2021-10-22 12:00       ` Mike Galbraith
2021-10-25  6:35       ` Mike Galbraith
2021-10-26  8:18         ` Mel Gorman
2021-10-26 10:15           ` Mike Galbraith
2021-10-26 10:41             ` Mike Galbraith
2021-10-26 11:57               ` Mel Gorman
2021-10-26 12:13                 ` Mike Galbraith [this message]
2021-10-27  2:09                   ` Mike Galbraith
2021-10-27  9:00                     ` Mel Gorman
2021-10-27 10:18                       ` Mike Galbraith
2021-11-09 11:56   ` Peter Zijlstra
2021-11-09 12:55     ` Mike Galbraith
2021-10-21 14:56 ` [PATCH 2/2] sched/fair: Increase wakeup_gran if current task has not executed the minimum granularity Mel Gorman
2021-10-28  9:48 [PATCH v4 0/2] Reduce stacking and overscheduling Mel Gorman
2021-10-28  9:48 ` [PATCH 1/2] sched/fair: Couple wakee flips with heavy wakers Mel Gorman
2021-10-28 16:19   ` Tao Zhou
2021-10-29  8:42     ` Mel Gorman
2021-11-10  9:53       ` Tao Zhou
2021-11-10 15:40         ` Mike Galbraith
2021-10-29 15:17   ` Vincent Guittot
2021-10-30  3:11     ` Mike Galbraith
2021-10-30  4:12       ` Mike Galbraith
2021-11-01  8:56     ` Mel Gorman
