From: Tim Chen <tim.c.chen@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>,
	Barry Song <song.bao.hua@hisilicon.com>
Cc: catalin.marinas@arm.com, will@kernel.org, rjw@rjwysocki.net,
	vincent.guittot@linaro.org, bp@alien8.de, tglx@linutronix.de,
	mingo@redhat.com, lenb@kernel.org, dietmar.eggemann@arm.com,
	rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
	msys.mizuma@gmail.com, valentin.schneider@arm.com,
	gregkh@linuxfoundation.org, jonathan.cameron@huawei.com,
	juri.lelli@redhat.com, mark.rutland@arm.com,
	sudeep.holla@arm.com, aubrey.li@linux.intel.com,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
	x86@kernel.org, xuwei5@huawei.com, prime.zeng@hisilicon.com,
	guodong.xu@linaro.org, yangyicong@huawei.com,
	liguozhu@hisilicon.com, linuxarm@openeuler.org, hpa@zytor.com
Subject: Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86
Date: Wed, 3 Mar 2021 10:34:00 -0800	[thread overview]
Message-ID: <a8474bae-5d9a-8c0b-766a-7188ed71320b@linux.intel.com> (raw)
In-Reply-To: <YD4T0qBBgR6fPbQb@hirez.programming.kicks-ass.net>



On 3/2/21 2:30 AM, Peter Zijlstra wrote:
> On Tue, Mar 02, 2021 at 11:59:40AM +1300, Barry Song wrote:
>> From: Tim Chen <tim.c.chen@linux.intel.com>
>>
>> There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
>> is shared among a cluster of cores instead of being exclusive
>> to one single core.
> 
> Isn't that most atoms one way or another? Tremont seems to have it per 4
> cores, but earlier it was per 2 cores.
> 

Yes, older Atoms have 2 cores sharing L2.  I probably should
rephrase my comments to not leave the impression that sharing
L2 among cores is new for Atoms.

Tremont based Atom CPUs increase the possible load imbalance, with
4 cores per L2 instead of 2.  And with more cores overall on a die, the
chance increases that running tasks get packed onto a few clusters while
other clusters are left empty on lightly/moderately loaded systems.  We
did see this effect on Jacobsville.

So load balancing between the L2 clusters is more
useful on Tremont based Atom CPUs compared to the older Atoms.
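As a toy illustration (not kernel code; the core counts and the two placement policies below are assumptions for the sketch, not the scheduler's actual algorithms), compare filling the lowest-numbered idle cores with a cluster-aware policy that picks a core in the least-loaded L2 cluster first:

```python
# Toy model: 24 cores in 6 clusters of 4 cores sharing an L2 (Tremont-like).
CORES_PER_CLUSTER = 4
NUM_CLUSTERS = 6
NUM_CORES = CORES_PER_CLUSTER * NUM_CLUSTERS

def place_naive(num_tasks):
    """Place each task on the least-loaded core, ties broken by lowest
    core number -- i.e. without any notion of L2 clusters."""
    load = [0] * NUM_CORES
    for _ in range(num_tasks):
        core = min(range(NUM_CORES), key=lambda c: load[c])
        load[core] += 1
    return load

def place_cluster_aware(num_tasks):
    """First pick the least-loaded L2 cluster, then the least-loaded
    core inside it."""
    load = [0] * NUM_CORES
    def cluster_load(cl):
        base = cl * CORES_PER_CLUSTER
        return sum(load[base:base + CORES_PER_CLUSTER])
    for _ in range(num_tasks):
        cl = min(range(NUM_CLUSTERS), key=cluster_load)
        base = cl * CORES_PER_CLUSTER
        core = min(range(base, base + CORES_PER_CLUSTER),
                   key=lambda c: load[c])
        load[core] += 1
    return load

def per_cluster(load):
    """Sum per-core load into per-cluster totals."""
    return [sum(load[i:i + CORES_PER_CLUSTER])
            for i in range(0, NUM_CORES, CORES_PER_CLUSTER)]
```

With 6 runnable tasks, the naive policy packs them onto the first six cores, so two clusters carry everything while four sit empty; the cluster-aware policy puts one task in each cluster, spreading the L2 pressure.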

Tim
