From: sudeep.holla@arm.com (Sudeep Holla)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 0/7] arm64: numa/topology/smp: update the cpumasks for CPU hotplug
Date: Wed, 27 Jun 2018 10:33:05 +0100	[thread overview]
Message-ID: <0fa16210-ef31-07d9-897a-fb91276164b6@arm.com> (raw)
In-Reply-To: <1fff22a2-0421-c102-172a-508d1ca8f2b0@huawei.com>

Hi Hanjun,

On 27/06/18 04:51, Hanjun Guo wrote:
> On 2018/6/26 17:23, Sudeep Holla wrote:
> [...]
>>> Core(s) per socket:    15      //it's 15 now...
>>
>> I know exactly what the problem is, and I think that's exactly the
>> behavior on my x86 box too. Further, you might see another issue:
>> most of these applications, like lstopo and lscpu, parse only CPU0's
>> sysfs entries, so things fall apart if CPU0 is hotplugged out, which
>> is not allowed on most of the x86 boxes I have tried.
>>
>> So, for me, that's an issue with the application, TBH.
>>
>> [..]
>>
>>> Offlined 16 cores on NUMA node 0, everything backs to normal:
>>
>> Expected :)
>>
>>> but when onlined CPU15 after,
>>> Architecture:          aarch64
>>> Byte Order:            Little Endian
>>> CPU(s):                64
>>> On-line CPU(s) list:   15-63
>>> Off-line CPU(s) list:  0-14
>>> Thread(s) per core:    1
>>> Core(s) per socket:    12     // it's 12, hmm...
>>
>> I had to look at the lscpu code, and I think that explains why you
>> get 12 here. We have
>> 	add_summary_n(tb, _("Core(s) per socket:"),
>> 		cores_per_socket ?: desc->ncores / desc->nsockets);
>>
>> Now cores_per_socket is 0 because we don't have the procfs entry, so
>> with ncores=49 and nsockets=4 the fallback division gives you 12.
>> TBH, lscpu should be used only when all CPUs are online; it makes
>> lots of inaccurate assumptions.
>>
>>>
>>> I think maybe the cpumask for the socket is changed in a
>>> wrong way when CPUs are onlined/offlined; I didn't take a deep
>>> look into that, but I hope the test helps.
>>>
>>
>> I am not convinced there's any issue unless you see a discrepancy in
>> the sysfs entries themselves, and not just in applications
>> interpreting the values for you based on some wrong assumptions.
> 
> Thanks for clarifying and letting me know the details. I tested 4.17
> on an x86 machine whose CPU0 can be offlined, with the same lscpu
> version, and I got the same result on x86:
> 

Thanks for taking the trouble to confirm this.

> lscpu
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Byte Order:            Little Endian
> CPU(s):                72
> On-line CPU(s) list:   1-9,11-23,25-45,47-65,67-71
> Off-line CPU(s) list:  0,10,24,46,66                  // CPUs offlined so that no socket keeps 18 contiguous logical CPUs
> Thread(s) per core:    1                              // it was 2 before the CPUs were offlined
> Core(s) per socket:    17                             // not the 18 reported before the CPUs were offlined
> Socket(s):             2
> NUMA node(s):          2
> Vendor ID:             GenuineIntel
> CPU family:            6
> Model:                 85
> Model name:            Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
> Stepping:              4
> CPU MHz:               2999.755
> CPU max MHz:           3700.0000
> CPU min MHz:           1000.0000
> BogoMIPS:              4600.00
> Virtualization:        VT-x
> L1d cache:             32K
> L1i cache:             32K
> L2 cache:              1024K
> L3 cache:              25344K
> NUMA node0 CPU(s):     1-9,11-17,36-45,47-53
> NUMA node1 CPU(s):     18-23,25-35,54-65,67-71
> 
> So with this patch set:
> 
> Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
> 

Thanks for testing.
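Since the conclusion here is that the sysfs entries themselves are what matter, a quick way to dump the kernel's raw view, bypassing lscpu's summarisation, is a loop like the following (a sketch; it assumes the standard Linux topology sysfs layout and simply prints nothing where those files are absent):

```shell
# Print each present CPU's package-sibling list straight from sysfs.
list_package_siblings() {
    for d in /sys/devices/system/cpu/cpu[0-9]*; do
        f="$d/topology/core_siblings_list"
        # Skip CPUs (or systems) that don't expose the topology files.
        [ -r "$f" ] && printf '%s: %s\n' "${d##*/}" "$(cat "$f")"
    done
    return 0
}

list_package_siblings
```

Comparing this output before and after hotplug shows whether the kernel's masks are wrong, independent of how lscpu interprets them.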

-- 
Regards,
Sudeep


Thread overview: 23+ messages
2018-06-18 13:18 [PATCH v2 0/7] arm64: numa/topology/smp: update the cpumasks for CPU hotplug Sudeep Holla
2018-06-18 13:18 ` [PATCH v2 1/7] arm64: topology: refactor reset_cpu_topology to add support for removing topology Sudeep Holla
2018-06-18 13:18 ` [PATCH v2 2/7] arm64: numa: separate out updates to percpu nodeid and NUMA node cpumap Sudeep Holla
2018-06-27  6:54   ` Ganapatrao Kulkarni
2018-07-04 13:52   ` Will Deacon
2018-07-04 13:59     ` Sudeep Holla
2018-06-18 13:18 ` [PATCH v2 3/7] arm64: topology: add support to remove cpu topology sibling masks Sudeep Holla
2018-07-04 13:58   ` Will Deacon
2018-07-04 14:11     ` Sudeep Holla
2018-07-04 14:27       ` Will Deacon
2018-07-04 14:30         ` Sudeep Holla
2018-06-18 13:18 ` [PATCH v2 4/7] arm64: topology: restrict updating siblings_masks to online cpus only Sudeep Holla
2018-06-18 13:18 ` [PATCH v2 5/7] arm64: smp: remove cpu and numa topology information when hotplugging out CPU Sudeep Holla
2018-06-18 13:18 ` [PATCH v2 6/7] arm64: topology: rename llc_siblings to align with other struct members Sudeep Holla
2018-06-18 13:18 ` [PATCH v2 7/7] arm64: topology: re-introduce numa mask check for scheduler MC selection Sudeep Holla
2018-06-26  6:50 ` [PATCH v2 0/7] arm64: numa/topology/smp: update the cpumasks for CPU hotplug Hanjun Guo
2018-06-26  9:23   ` Sudeep Holla
2018-06-27  3:51     ` Hanjun Guo
2018-06-27  9:33       ` Sudeep Holla [this message]
2018-06-27  5:35 ` Ganapatrao Kulkarni
2018-06-27  9:31   ` Sudeep Holla
2018-07-04 14:00 ` Will Deacon
2018-07-04 14:01   ` Sudeep Holla
