From: Dietmar Eggemann <dietmar.eggemann@arm.com>
To: Sudeep Holla <sudeep.holla@arm.com>,
	Vincent Guittot <vincent.guittot@linaro.org>
Cc: linux-kernel@vger.kernel.org, Atish Patra <atishp@atishpatra.org>,
	Atish Patra <atishp@rivosinc.com>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Qing Wang <wangqing@vivo.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-riscv@lists.infradead.org, Rob Herring <robh+dt@kernel.org>
Subject: Re: [PATCH v3 15/16] arch_topology: Set cluster identifier in each core/thread from /cpu-map
Date: Mon, 13 Jun 2022 11:19:36 +0200
Message-ID: <af7d6f49-09c5-6e60-988c-51c3c7c04d96@arm.com>
In-Reply-To: <20220610102753.virkx47uyfsojol6@bogus>

On 10/06/2022 12:27, Sudeep Holla wrote:
> On Fri, Jun 10, 2022 at 12:08:44PM +0200, Vincent Guittot wrote:
>> On Mon, 6 Jun 2022 at 12:22, Sudeep Holla <sudeep.holla@arm.com> wrote:
>>>
> 
> [...]
> 
>>> Why? Are you suggesting that we shouldn't present the hardware cluster
>>> to the topology because of the above reason? If so, sorry, that is not a
>>> valid reason. We could add logic to return NULL or an appropriate value
>>> in cpu_clustergroup_mask if it matches the MC level mask, in case we can't
>>> deal with that in generic scheduler code. But the topology code can't be
>>> compromised for that reason, as it is user visible.
>>
>> I tend to agree with Dietmar. The legacy use of the cluster node in DT
>> refers to the DynamIQ or legacy big.LITTLE cluster, which is also aligned
>> to the LLC and the MC scheduling level. The new cluster level that has
>> been introduced recently does not target this level but some
>> intermediate level, either inside it, as for the Kunpeng 920 or the Armv9
>> complexes, or outside it, as for the Ampere Altra. So I would say that
>> there is one cluster node level in DT that refers to the same MC/LLC
>> level, and only an additional child/parent cluster node should be used
>> to fill the clustergroup_mask.
>>
> 
> Again, I completely disagree. Let us look at the problems separately.
> Tools like lscpu and lstopo expect the hardware topology to describe
> what the hardware looks like, not the scheduler's view of the hardware.
> So the topology masks that get exposed to user-space need fixing
> even today. I have reports from various tooling people about the same.
> E.g. Juno getting exposed as a dual-socket system is utter nonsense.
> 
> Yes, the scheduler uses most of the topology masks as-is, but that is not
> a must. There are the *group_mask functions that can implement what the
> scheduler needs to be fed.
> 
> I am not sure why the 2 issues are getting mixed up, and that is the main
> reason why I jumped into this: to make sure the topology masks are
> not tampered with based on the way they need to be used by the scheduler.

I'm all in favor of not mixing up those 2 issues. But I don't understand
why you have to glue them together.

(1) DT systems broken in userspace (lstopo shows Juno with 2 Packages)

(2) Introduce CONFIG_SCHED_CLUSTER for DT systems


(1) This can be solved with your patch-set without setting the (first-level)
    cpu-map `cluster nodes`. The `socket nodes` taking over the
    functionality of the `cluster nodes` sorts out the `Juno is seen as
    having 2 packages` issue.
    This will make core_sibling no longer suitable for cpu_coregroup_mask().
    But this is OK since the LLC mask from cacheinfo (i.e. llc_sibling)
    takes over here.
    There is no need to involve `cluster nodes` anymore.

(2) This will only make sense for Armv9 L2 complexes if we connect the
    (second-level) cpu-map `cluster nodes` with cluster_id and
    cluster_sibling. Only then would clusters mean the same thing in ACPI
    and DT. I guess this has been mentioned a couple of times already.
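
To make that mapping concrete, this is how I read (1) and (2) in terms of
the existing per-CPU data in arch_topology (field names as in
include/linux/arch_topology.h, llc_id left out since this series drops it;
the annotations are my reading of this thread, not something the patch-set
adds):

struct cpu_topology {
	int thread_id;
	int core_id;
	int cluster_id;            /* (2): from a 2nd-level cpu-map cluster node */
	int package_id;            /* (1): from the new cpu-map socket node */
	cpumask_t thread_sibling;
	cpumask_t core_sibling;    /* (1): no longer backing cpu_coregroup_mask() */
	cpumask_t cluster_sibling; /* (2): what cpu_clustergroup_mask()/CLS consume */
	cpumask_t llc_sibling;     /* (1): from cacheinfo, now backing cpu_coregroup_mask() */
};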

> Both ACPI and DT on a platform must present the exact same hardware
> topology to user-space; there is no room for argument there.
> 
>> IIUC, we don't describe the DynamIQ level in ACPI, which uses cache
>> topology instead to define cpu_coregroup_mask, whereas DT describes the
>> DynamIQ level instead of using cache topology. If you use cache topology
>> now, then you should skip the DynamIQ level.
>>
> 
> Yes, unless someone can work out a binding to represent that and convince
> DT maintainers ;).
> 
>> Finally, even if CLS and MC have the same scheduling behavior for now,
>> they might end up with different scheduling properties, which would
>> mean that replacing the MC level by the CLS one for current SoCs would
>> become wrong.
>>
> 
> Again, as I mentioned to Dietmar, that is something we can and must deal
> with in those *group_mask functions, and not expect the topology masks to
> be altered to meet CLS/MC or whatever the sched domains need. Sorry, that
> is my strong opinion, as the topology is already user-space visible and
> (tooling) people are complaining that DT systems are broken and don't
> match ACPI systems.
> 
> So unless someone gives me non-scheduler, topology-specific reasons to
> change that, sorry, but my opinion on this matter is not going to change ;).
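
Just to illustrate what "dealing with it in those *group_mask functions"
could look like, here is a minimal sketch (my assumption about where such a
check could live, not something from this series) that lets the scheduler
degenerate CLS when the cluster covers the whole LLC/MC span:

const struct cpumask *cpu_clustergroup_mask(int cpu)
{
	/*
	 * If the cluster spans (at least) the whole coregroup/MC span,
	 * report only the CPU itself so that CLS degenerates instead of
	 * duplicating MC.
	 */
	if (cpumask_subset(cpu_coregroup_mask(cpu),
			   &cpu_topology[cpu].cluster_sibling))
		return get_cpu_mask(cpu);

	return &cpu_topology[cpu].cluster_sibling;
}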

`lstopo` is fine with a now-correct /sys/.../topology/package_cpus (or
core_siblings, the old filename). It's not reading
/sys/.../topology/cluster_cpus (yet), so why set it (wrongly) to 0x39 for
CPU0 on Juno when it can stay 0x01?
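
FWIW, what the tooling people consume is just those sysfs mask files. A
throwaway reader like the one below (my own sketch, not taken from lstopo
or lscpu; cluster_cpus may simply be absent on older kernels) shows at a
glance what CPU0 advertises:

#include <stdio.h>

int main(void)
{
	static const char *files[] = {
		"package_cpus", "core_siblings",	/* new and old name, same mask */
		"cluster_cpus",
	};
	char path[128], buf[256];
	unsigned int i;

	for (i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu0/topology/%s", files[i]);
		f = fopen(path, "r");
		if (!f)
			continue;	/* attribute not present on this kernel */
		if (fgets(buf, sizeof(buf), f))
			printf("%-13s %s", files[i], buf);
		fclose(f);
	}
	return 0;
}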

> You will get this view of the topology; find a way to manage with all
> those *group_mask functions. By the way, it is already handled for ACPI
> systems, so if you are not happy with that, then that needs fixing, as
> this change set just aligns the behaviour with similar ACPI systems. So
> the Juno example is incorrect, for the reason that the behaviour of the
> scheduler there is currently different between DT and ACPI.

[...]

