From: "Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com>
To: Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Vincent Guittot <vincent.guittot@linaro.org>
Cc: "tim.c.chen@linux.intel.com" <tim.c.chen@linux.intel.com>,
	"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
	"will@kernel.org" <will@kernel.org>,
	"rjw@rjwysocki.net" <rjw@rjwysocki.net>,
	"bp@alien8.de" <bp@alien8.de>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"lenb@kernel.org" <lenb@kernel.org>,
	"peterz@infradead.org" <peterz@infradead.org>,
	"rostedt@goodmis.org" <rostedt@goodmis.org>,
	"bsegall@google.com" <bsegall@google.com>,
	"mgorman@suse.de" <mgorman@suse.de>,
	"msys.mizuma@gmail.com" <msys.mizuma@gmail.com>,
	"valentin.schneider@arm.com" <valentin.schneider@arm.com>,
	"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
	Jonathan Cameron <jonathan.cameron@huawei.com>,
	"juri.lelli@redhat.com" <juri.lelli@redhat.com>,
	"mark.rutland@arm.com" <mark.rutland@arm.com>,
	"sudeep.holla@arm.com" <sudeep.holla@arm.com>,
	"aubrey.li@linux.intel.com" <aubrey.li@linux.intel.com>,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-acpi@vger.kernel.org" <linux-acpi@vger.kernel.org>,
	"x86@kernel.org" <x86@kernel.org>,
	"xuwei (O)" <xuwei5@huawei.com>,
	"Zengtao (B)" <prime.zeng@hisilicon.com>,
	"guodong.xu@linaro.org" <guodong.xu@linaro.org>,
	yangyicong <yangyicong@huawei.com>,
	"Liguozhu (Kenneth)" <liguozhu@hisilicon.com>,
	"linuxarm@openeuler.org" <linuxarm@openeuler.org>,
	"hpa@zytor.com" <hpa@zytor.com>
Subject: RE: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
Date: Wed, 26 May 2021 09:54:28 +0000	[thread overview]
Message-ID: <bbc339cef87e4009b6d56ee37e202daf@hisilicon.com> (raw)
In-Reply-To: <45cce983-79ca-392a-f590-9168da7aefab@arm.com>



> -----Original Message-----
> From: Song Bao Hua (Barry Song)
> Sent: Tuesday, May 25, 2021 8:07 PM
> To: 'Dietmar Eggemann' <dietmar.eggemann@arm.com>; Vincent Guittot
> <vincent.guittot@linaro.org>
> Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; bp@alien8.de; tglx@linutronix.de; mingo@redhat.com;
> lenb@kernel.org; peterz@infradead.org; rostedt@goodmis.org;
> bsegall@google.com; mgorman@suse.de; msys.mizuma@gmail.com;
> valentin.schneider@arm.com; gregkh@linuxfoundation.org; Jonathan Cameron
> <jonathan.cameron@huawei.com>; juri.lelli@redhat.com; mark.rutland@arm.com;
> sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O) <xuwei5@huawei.com>;
> Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org; yangyicong
> <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> linuxarm@openeuler.org; hpa@zytor.com
> Subject: RE: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> within one LLC
> 
> 
> 
> > -----Original Message-----
> > From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> > Sent: Friday, May 14, 2021 12:32 AM
> > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>; Vincent Guittot
> > <vincent.guittot@linaro.org>
> > Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> > rjw@rjwysocki.net; bp@alien8.de; tglx@linutronix.de; mingo@redhat.com;
> > lenb@kernel.org; peterz@infradead.org; rostedt@goodmis.org;
> > bsegall@google.com; mgorman@suse.de; msys.mizuma@gmail.com;
> > valentin.schneider@arm.com; gregkh@linuxfoundation.org; Jonathan Cameron
> > <jonathan.cameron@huawei.com>; juri.lelli@redhat.com;
> mark.rutland@arm.com;
> > sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> > linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> > linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O) <xuwei5@huawei.com>;
> > Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org; yangyicong
> > <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> > linuxarm@openeuler.org; hpa@zytor.com
> > Subject: Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> > within one LLC
> >
> > On 07/05/2021 15:07, Song Bao Hua (Barry Song) wrote:
> > >
> > >
> > >> -----Original Message-----
> > >> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >
> > [...]
> >
> > >> On 03/05/2021 13:35, Song Bao Hua (Barry Song) wrote:
> > >>
> > >> [...]
> > >>
> > >>>> From: Song Bao Hua (Barry Song)
> > >>
> > >> [...]
> > >>
> > >>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> > >>
> > >> [...]
> > >>
> > >>>>> On 29/04/2021 00:41, Song Bao Hua (Barry Song) wrote:
> > >>>>>>
> > >>>>>>
> > >>>>>>> -----Original Message-----
> > >>>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> > >>>>>
> > >>>>> [...]
> > >>>>>
> > >>>>>>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> > >>>>>>>
> > >>>>>>> [...]
> > >>>>>>>
> > >>>>>>>>>> On 20/04/2021 02:18, Barry Song wrote:
> > >>
> > >> [...]
> > >>
> > >>>
> > >>> On the other hand, according to "sched: Implement smarter wake-affine
> > >>> logic"
> > >>>
> > >>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=62470419
> > >>>
> > >>> Proper factor in wake_wide is mainly beneficial for 1:n tasks like
> > >>> postgresql/pgbench. So using the smaller cluster size as factor might
> > >>> help make wake_affine false and so improve pgbench.
> > >>>
> > >>> From the commit log, when clients = 2*cpus, the commit made the biggest
> > >>> improvement. In my case, it should be clients=48 for a machine whose LLC
> > >>> size is 24.
> > >>>
> > >>> In Linux, I created a 240MB database and ran "pgbench -c 48 -S -T 20 pgbench"
> > >>> under two different scenarios:
> > >>> 1. page cache always hit, so no real I/O for database read
> > >>> 2. echo 3 > /proc/sys/vm/drop_caches
> > >>>
> > >>> For case 1, using cluster_size and using llc_size result in a similar
> > >>> tps of ~108000, and all 24 cpus have 100% cpu utilization.
> > >>>
> > >>> For case 2, using llc_size still shows better performance.
> > >>>
> > >>> tps for each test round (cluster size as factor in wake_wide):
> > >>> 1398.450887 1275.020401 1632.542437 1412.241627 1611.095692 1381.354294
> > >>> 1539.877146
> > >>> avg tps = 1464
> > >>>
> > >>> tps for each test round (llc size as factor in wake_wide):
> > >>> 1718.402983 1443.169823 1502.353823 1607.415861 1597.396924 1745.651814
> > >>> 1876.802168
> > >>> avg tps = 1641 (+12%)
> > >>>
> > >>> so it seems using cluster_size as factor in "slave >= factor &&
> > >>> master >= slave * factor" isn't a good choice for my machine at least.
> > >>
> > >> So SD size = 4 (instead of 24) seems to be too small for `-c 48`.
> > >>
> > >> Just curious, have you seen the benefit of using wake wide on SD size =
> > >> 24 (LLC) compared to not using it at all?
> > >
> > > At least in the benchmark I ran today, I have not seen any benefit from
> > > using llc_size. Always returning 0 in wake_wide() seems to be much better.
> > >
> > > postgres@ubuntu:$ pgbench -i pgbench
> > > postgres@pgbench:$ pgbench -T 120 -c 48 pgbench
> > >
> > > using llc_size, it got to 123 tps
> > > always returning 0 in wake_wide(), it got to 158 tps
> > >
> > > Actually, I really couldn't reproduce the performance improvement
> > > the commit "sched: Implement smarter wake-affine logic" mentioned.
> > > On the other hand, the commit log didn't present the pgbench command
> > > parameters used. I guess the benchmark result will depend heavily on
> > > the command parameters and disk I/O speed.
> >
> > I see. And it was a way smaller machine (12 CPUs) back then.
> >
> > You could run pgbench via mmtests https://github.com/gormanm/mmtests.
> >
> > I.e. the `timed-ro-medium` test.
> >
> > mmtests# ./run-mmtests.sh --config
> > ./configs/config-db-pgbench-timed-ro-medium test_tag
> >
> > /shellpacks/shellpack-bench-pgbench contains all the individual test
> > steps. Something you could use as a template for your pgbench standalone
> > tests as well.
> >
> > I ran this test on an Intel Xeon E5-2690 v2 with 40 CPUs and 64GB of
> > memory on v5.12 vanilla and w/o wakewide.
> > The test uses `scale_factor = 2570` on this machine. I guess this
> > corresponds to ~41GB? At least this was the size of the:
> 
> Thanks, Dietmar. Sorry for the slow response; I was on sick leave for
> the whole of last week.
> 
> I feel it makes much more sense to use mmtests, which sets
> scale_factor according to total memory size and thus accounts for
> the impact of the page cache. It also warms the database up for
> 30 minutes.
> 
> I will get more data and compare three cases:
> 1. use cluster as wake_wide factor
> 2. use llc as wake_wide factor
> 3. always return 0 in wake_wide.
> 
> and post the result afterwards.

I used only one NUMA node with 24 CPUs and 60GB memory
(scale factor: 2392) to run the test. As mentioned
before, each NUMA node shares one LLC, so the waker and
wakee are in the same LLC domain.

Basically, the difference between using cluster size and using LLC
size as the factor in wake_wide() is just noise for 1/48, 8/48,
12/48, 24/48 and 32/48 threads. But for 48/48 threads (system is
busy), using LLC size as the factor shows a 4%+ pgbench improvement.

                 cluster_as_factor     llc_as_factor
Hmean     1     10779.67 (   0.00%)    10869.27 *   0.83%*
Hmean     8     19595.09 (   0.00%)    19580.59 *  -0.07%*
Hmean     12    29553.06 (   0.00%)    29643.56 *   0.31%*
Hmean     24    43368.55 (   0.00%)    43194.47 *  -0.40%*
Hmean     32    40258.08 (   0.00%)    40163.23 *  -0.24%*
Hmean     48    40450.42 (   0.00%)    42249.29 *   4.45%*
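
For context, the stock wake_wide() in v5.12 looks roughly like the
below; sd_llc_size is the factor, and the "cluster_as_factor" column
above corresponds to swapping it for the cluster size:

static int wake_wide(struct task_struct *p)
{
	/* wakee_flips: how often waker/wakee switch to a different partner */
	unsigned int master = current->wakee_flips;
	unsigned int slave = p->wakee_flips;
	int factor = __this_cpu_read(sd_llc_size);

	if (master < slave)
		swap(master, slave);
	/* only go wide once the flip rates outgrow the factor (1:n wakeups) */
	if (slave < factor || master < slave * factor)
		return 0;
	return 1;
}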

I can see a further 14%+ improvement in the 48/48 threads case if I
don't depend on wake_wide() at all, i.e. wake_wide() always returns 0.

                llc_as_factor          don't_use_wake_wide
Hmean     1     10869.27 (   0.00%)    10723.08 *  -1.34%*
Hmean     8     19580.59 (   0.00%)    19469.34 *  -0.57%*
Hmean     12    29643.56 (   0.00%)    29520.16 *  -0.42%*
Hmean     24    43194.47 (   0.00%)    43774.78 *   1.34%*
Hmean     32    40163.23 (   0.00%)    40742.93 *   1.44%*
Hmean     48    42249.29 (   0.00%)    48329.00 *  14.39%*

I'm beginning to believe wake_wide() is useless when the waker and
wakee are already in the same LLC, so I sent another patch to address
this generic issue:
[PATCH] sched: fair: don't depend on wake_wide if waker and wakee are already in same LLC
https://lore.kernel.org/lkml/20210526091057.1800-1-song.bao.hua@hisilicon.com/
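
A minimal sketch of that idea (a hypothetical illustration only, not
necessarily the code in the posted patch; see the link above for the
real thing):

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ static int wake_wide(struct task_struct *p)
 	unsigned int master = current->wakee_flips;
 	unsigned int slave = p->wakee_flips;
 	int factor = __this_cpu_read(sd_llc_size);
 
+	/* sketch: the flip heuristic only matters when going wide would
+	 * cross LLC boundaries; if waker and wakee already share the
+	 * LLC, keep the wakeup affine */
+	if (cpus_share_cache(smp_processor_id(), task_cpu(p)))
+		return 0;
+
 	if (master < slave)
 		swap(master, slave);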

> 
> >
> > #mmtests/work/testdisk/data/pgdata directory when the test started.
> >
> >
> > mmtests/work/log# ../../compare-kernels.sh --baseline base --compare
> > wo_wakewide | grep ^Hmean
> >
> >
> >       #clients  v5.12 vanilla          v5.12 w/o wakewide
> >
> > Hmean     1     10903.88 (   0.00%)    10792.59 *  -1.02%*
> > Hmean     6     28480.60 (   0.00%)    27954.97 *  -1.85%*
> > Hmean     12    49197.55 (   0.00%)    47758.16 *  -2.93%*
> > Hmean     22    72902.37 (   0.00%)    71314.01 *  -2.18%*
> > Hmean     30    75468.16 (   0.00%)    75929.17 *   0.61%*
> > Hmean     48    60155.58 (   0.00%)    60471.91 *   0.53%*
> > Hmean     80    62202.38 (   0.00%)    60814.76 *  -2.23%*
> >
> >
> > So there are some improvements w/ wakewide but nothing of the scale
> > shown in the original wakewide patch.
> >
> > I'm not an expert on how to set up these pgbench tests though. So maybe
> > other pgbench-related mmtests configs or some more fine-grained tuning
> > can produce bigger diffs?
> 
> Thanks
> Barry

Thanks
Barry


