From: Aaron Lu <aaron.lwe@gmail.com>
To: Vineeth Remanan Pillai <vpillai@digitalocean.com>
Cc: "Aubrey Li" <aubrey.intel@gmail.com>,
	"Tim Chen" <tim.c.chen@linux.intel.com>,
	"Julien Desfossez" <jdesfossez@digitalocean.com>,
	"Nishanth Aravamudan" <naravamudan@digitalocean.com>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Ingo Molnar" <mingo@kernel.org>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Paul Turner" <pjt@google.com>,
	"Linus Torvalds" <torvalds@linux-foundation.org>,
	"Linux List Kernel Mailing" <linux-kernel@vger.kernel.org>,
	"Dario Faggioli" <dfaggioli@suse.com>,
	"Frédéric Weisbecker" <fweisbec@gmail.com>,
	"Kees Cook" <keescook@chromium.org>,
	"Greg Kerr" <kerrnel@google.com>, "Phil Auld" <pauld@redhat.com>,
	"Valentin Schneider" <valentin.schneider@arm.com>,
	"Mel Gorman" <mgorman@techsingularity.net>,
	"Pawan Gupta" <pawan.kumar.gupta@linux.intel.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>
Subject: Re: [RFC PATCH v4 00/19] Core scheduling v4
Date: Wed, 26 Feb 2020 11:13:36 +0800	[thread overview]
Message-ID: <20200226031336.GA622976@ziqianlu-desktop.localdomain> (raw)
In-Reply-To: <CANaguZD205ccu1V_2W-QuMRrJA9SjJ5ng1do4NCdLy8NDKKrbA@mail.gmail.com>

On Tue, Feb 25, 2020 at 03:51:37PM -0500, Vineeth Remanan Pillai wrote:
> Hi Aaron,
> We tried reproducing this with a sample script here:
> https://gist.github.com/vineethrp/4356e66694269d1525ff254d7f213aef

Nice script.

> But the set1 cgroup processes always get their share of cpu time in
> our test. Could you please verify if it's the same test that you were
> doing? The only difference is that we run on a bare-metal machine with
> two 16-core/32-thread sockets, using only socket 0. We also tried
> threads instead of processes, but the results are the same.

Sorry for missing one detail: I always start the noise workload first,
and then start the real workload. This is critical for this test,
since only then can the noise workload occupy all CPUs and present a
challenge to the load balancer when it tries to balance the real
workload's tasks. If both workloads are started at the same time, the
initial task placement might mitigate the problem.

BTW, your script has given 12 cores/24 CPUs to the workloads, and cgA
spawned 16 tasks while cgB spawned 32. This is an even more complex
scenario to test, since the real workload's task count already exceeds
the number of available cores. Perhaps just starting 12 tasks for cgA
and 24 tasks for cgB is enough for now. As for the start sequence,
simply sleep 5 seconds after cgB's workload is started, then start cgA.
I have left a comment on the script's gist page.
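
To make that start order concrete, here is a minimal sketch (assuming
cgroup v1 controllers mounted under /sys/fs/cgroup, libcgroup's cgexec
available, and cgA/cgB already created with their cpusets configured;
the paths and names are illustrative, not taken from the gist):

  # Noise workload first: cgB (cpu.shares=2) gets 5 seconds to spread
  # across all 24 CPUs before the real workload shows up.
  echo 2 > /sys/fs/cgroup/cpu/cgB/cpu.shares
  cgexec -g cpu,cpuset:cgB sysbench cpu --threads=24 --time=60 run &

  sleep 5

  # Real workload second: cgA (cpu.shares=10240), one task per core.
  echo 10240 > /sys/fs/cgroup/cpu/cgA/cpu.shares
  cgexec -g cpu,cpuset:cgA sysbench cpu --threads=12 --time=60 run &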

> 
> 
> > On a 2sockets/16cores/32threads VM, I grouped 8 sysbench (cpu mode)
> > threads into one cgroup (cgA) and another 16 sysbench (cpu mode) threads
> > into another cgroup (cgB). cgA's and cgB's cpusets are set to the same
> > socket's 8 cores/16 CPUs, and cgA's cpu.shares is set to 10240 while
> > cgB's cpu.shares is set to 2 (so consider cgB the noise workload and
> > cgA the real workload).
> >
> > I had expected cgA to occupy 8 cpus (with each cpu on a different core)
> 
> The expected behaviour could also be that the 8 processes share 4 cores
> and 8 hw threads, right? This is mostly what we are seeing.
> 
> > most of the time since it has way more weight than cgB, while cgB should
> > occupy almost no CPUs since:
> >  - when cgB's task is in the same CPU queue as cgA's task, then cgB's
> >    task is given very little CPU due to its small weight;
> >  - when cgB's task is in a CPU queue whose sibling's queue has cgA's
> >    task, cgB's task should be forced idle (again, due to its small weight).
> >
> We are seeing that cgA takes half the cores and cgB takes the other
> half. It looks like the scheduler ultimately groups each cgroup's tasks
> onto its own cores.
> 
> 
> >
> > But testing shows cgA occupies only 2 cpus during the entire run while
> > cgB enjoys the remaining 14 cpus. As a comparison, when coresched is off,
> > cgA can occupy 8 cpus during its run.
> >
> Not sure why we are not able to reproduce this. I have a quick patch
> which might fix it. The idea is that we allow migration if p's
> hierarchical load or estimated utilization is higher than that of
> dest_rq->curr. While thinking about this fix, I noticed that we are not
> holding the dest_rq lock in any of the migration patches, so the
> migration patches would probably need a rework. My patch is attached
> below, but it also does not take the dest_rq lock. I have also added a
> case for the dest core being forced idle; I think that would be an
> opportunity to migrate. Ideally we should check whether the forced-idle
> task has the same cookie as p.
> 
> https://gist.github.com/vineethrp/887743608f42a6ce96bf7847b5b119ae
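
For readers without the gist handy, a rough sketch of the check being
described, in the spirit of kernel/sched/fair.c (the helper name is
hypothetical, and the forced-idle field and cookie handling only
approximate the coresched patches; this is not Vineeth's actual patch):

/* Sketch only: allow pulling p to dst_rq when p is "heavier" than
 * whatever runs there, or when the destination core sits forced idle. */
static bool coresched_can_migrate(struct rq *dst_rq, struct task_struct *p)
{
	struct task_struct *curr = dst_rq->curr;

	/* A forced-idle destination core is an opportunity to migrate;
	 * ideally we would also check that p's cookie matches the
	 * forced-idle sibling's cookie before pulling p over. */
	if (dst_rq->core->core_forceidle)
		return true;

	/* Migrate if p beats dst_rq->curr on hierarchical load... */
	if (task_h_load(p) > task_h_load(curr))
		return true;

	/* ...or on estimated utilization. */
	return task_util_est(p) > task_util_est(curr);
}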

Is this on top of Aubrey's coresched_v4-v5.5.2 branch?

