From: Ionela Voinescu <ionela.voinescu@arm.com>
To: Sudeep Holla <sudeep.holla@arm.com>
Cc: Atish Patra <atishp@atishpatra.org>,
	linux-kernel@vger.kernel.org, Atish Patra <atishp@rivosinc.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Qing Wang <wangqing@vivo.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-riscv@lists.infradead.org, Rob Herring <robh+dt@kernel.org>
Subject: Re: [PATCH v2 3/8] arch_topology: Set cluster identifier in each core/thread from /cpu-map
Date: Thu, 19 May 2022 17:55:30 +0100	[thread overview]
Message-ID: <YoZ2gjjS3rbRaJZm@arm.com> (raw)
In-Reply-To: <20220518093325.2070336-4-sudeep.holla@arm.com>

Hi,

As said before, this creates trouble for CONFIG_SCHED_CLUSTER=y.
The output below is obtained from Juno.

When cluster_id is populated, a new CLS level is created by the scheduler
topology code. In this case the clusters described in DT make the cluster
siblings and llc siblings the same, so the MC scheduler domain gets
degenerated and, for Juno, only CLS and DIE are kept.
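
For reference, the CLS level comes from the default scheduler topology
table; very roughly (a sketch from memory of kernel/sched/topology.c, so
names and details may not be exact):

/*
 * Sketch of the default topology table (not verbatim kernel code).
 * With CONFIG_SCHED_CLUSTER=y a CLS level is built from the cluster
 * siblings, between SMT and MC, so populating cluster_id is enough
 * to make it appear.
 */
static struct sched_domain_topology_level default_topology[] = {
#ifdef CONFIG_SCHED_SMT
	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
#endif
#ifdef CONFIG_SCHED_CLUSTER
	{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
#endif
#ifdef CONFIG_SCHED_MC
	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
#endif
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};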

root@debian-arm64-buster:/sys/kernel/debug/sched/domains/cpu1# grep . */*
domain0/busy_factor:16
domain0/cache_nice_tries:1
domain0/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE SD_SHARE_PKG_RESOURCES SD_PREFER_SIBLING
domain0/imbalance_pct:117
domain0/max_interval:4
domain0/max_newidle_lb_cost:14907
domain0/min_interval:2
domain0/name:CLS
domain1/busy_factor:16
domain1/cache_nice_tries:1
domain1/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE SD_ASYM_CPUCAPACITY SD_ASYM_CPUCAPACITY_FULL SD_PREFER_SIBLING
domain1/imbalance_pct:117
domain1/max_interval:12
domain1/max_newidle_lb_cost:11858
domain1/min_interval:6
domain1/name:DIE

Note that we also get a new SD_PREFER_SIBLING flag for the CLS level,
which is not appropriate. We usually remove it from the child of an
SD_ASYM_CPUCAPACITY domain, but we don't currently redo this after some
levels are degenerated. This is a fixable issue.
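
For context, the flag is only dropped when the domains are first built;
a simplified sketch of that existing behaviour (not verbatim kernel code,
placement and names approximate):

	/*
	 * At domain build time, a domain that is asymmetric in CPU
	 * capacity clears SD_PREFER_SIBLING from its child, so tasks
	 * are not packed onto CPUs of a different capacity.
	 */
	if ((sd->flags & SD_ASYM_CPUCAPACITY) && sd->child)
		sd->child->flags &= ~SD_PREFER_SIBLING;

At that point MC is still the child of DIE, so MC is the level that loses
the flag; once MC is degenerated and CLS becomes DIE's child, nothing
re-runs this check, which is why CLS shows up with SD_PREFER_SIBLING in
the output above.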

But looking at the bigger picture, a good question is: what is the best
thing to do when cluster domains and llc domains span the same CPUs?

Possibly it would be best to restrict clusters (which are an almost
arbitrary concept) to always span a strict subset of the CPUs in the llc
domain, when llc siblings can be obtained. If the clusters described in
DT don't respect this condition, cluster_siblings would need to be
cleared (or reduced to the current CPU) so that the CLS domain is not
created at all.
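
A minimal sketch of what that sanitisation could look like (hypothetical
helper, the name and placement are made up, and it assumes llc_sibling
has already been populated from cache information):

/*
 * Hypothetical: fall back to a per-CPU cluster when the DT-provided
 * cluster is not a strict subset of the LLC domain, so that no CLS
 * level equal to (or wider than) MC gets built.
 */
static void sanitise_cluster_sibling(unsigned int cpu)
{
	struct cpu_topology *topo = &cpu_topology[cpu];

	/* No cache information to check against. */
	if (cpumask_empty(&topo->llc_sibling))
		return;

	if (!cpumask_subset(&topo->cluster_sibling, &topo->llc_sibling) ||
	    cpumask_equal(&topo->cluster_sibling, &topo->llc_sibling)) {
		cpumask_clear(&topo->cluster_sibling);
		cpumask_set_cpu(cpu, &topo->cluster_sibling);
	}
}

Something like this could run per CPU once both the cluster and the cache
information have been parsed.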

Additionally, should we use cluster information from DT (cluster_id) to
create an MC level if we don't have llc information, even if
CONFIG_SCHED_CLUSTER=n?

I currently don't have a very clear picture of how cluster domains and
llc domains would "live" together in a variety of topologies. I'll try
other DT topologies to see if there are others that can lead to trouble.

Thanks,
Ionela.

On Wednesday 18 May 2022 at 10:33:20 (+0100), Sudeep Holla wrote:
> Let us set the cluster identifier as parsed from the device tree
> cluster nodes within /cpu-map.
> 
> We don't support nesting of clusters yet as there is no real hardware
> with clusters of clusters.
> 
> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
> ---
>  drivers/base/arch_topology.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> index 7f5aa655c1f4..bdb6f2a17df0 100644
> --- a/drivers/base/arch_topology.c
> +++ b/drivers/base/arch_topology.c
> @@ -491,7 +491,7 @@ static int __init get_cpu_for_node(struct device_node *node)
>  }
>  
>  static int __init parse_core(struct device_node *core, int package_id,
> -			     int core_id)
> +			     int cluster_id, int core_id)
>  {
>  	char name[20];
>  	bool leaf = true;
> @@ -507,6 +507,7 @@ static int __init parse_core(struct device_node *core, int package_id,
>  			cpu = get_cpu_for_node(t);
>  			if (cpu >= 0) {
>  				cpu_topology[cpu].package_id = package_id;
> +				cpu_topology[cpu].cluster_id = cluster_id;
>  				cpu_topology[cpu].core_id = core_id;
>  				cpu_topology[cpu].thread_id = i;
>  			} else if (cpu != -ENODEV) {
> @@ -528,6 +529,7 @@ static int __init parse_core(struct device_node *core, int package_id,
>  		}
>  
>  		cpu_topology[cpu].package_id = package_id;
> +		cpu_topology[cpu].cluster_id = cluster_id;
>  		cpu_topology[cpu].core_id = core_id;
>  	} else if (leaf && cpu != -ENODEV) {
>  		pr_err("%pOF: Can't get CPU for leaf core\n", core);
> @@ -537,7 +539,8 @@ static int __init parse_core(struct device_node *core, int package_id,
>  	return 0;
>  }
>  
> -static int __init parse_cluster(struct device_node *cluster, int depth)
> +static int __init
> +parse_cluster(struct device_node *cluster, int cluster_id, int depth)
>  {
>  	char name[20];
>  	bool leaf = true;
> @@ -557,7 +560,7 @@ static int __init parse_cluster(struct device_node *cluster, int depth)
>  		c = of_get_child_by_name(cluster, name);
>  		if (c) {
>  			leaf = false;
> -			ret = parse_cluster(c, depth + 1);
> +			ret = parse_cluster(c, i, depth + 1);
>  			of_node_put(c);
>  			if (ret != 0)
>  				return ret;
> @@ -581,7 +584,7 @@ static int __init parse_cluster(struct device_node *cluster, int depth)
>  			}
>  
>  			if (leaf) {
> -				ret = parse_core(c, 0, core_id++);
> +				ret = parse_core(c, 0, cluster_id, core_id++);
>  			} else {
>  				pr_err("%pOF: Non-leaf cluster with core %s\n",
>  				       cluster, name);
> @@ -621,7 +624,7 @@ static int __init parse_dt_topology(void)
>  	if (!map)
>  		goto out;
>  
> -	ret = parse_cluster(map, 0);
> +	ret = parse_cluster(map, -1, 0);
>  	if (ret != 0)
>  		goto out_map;
>  
> -- 
> 2.36.1
> 
> 

Thread overview: 75+ messages

2022-05-18  9:33 [PATCH v2 0/8] arch_topology: Updates to add socket support and fix cluster ids Sudeep Holla
2022-05-18  9:33 ` [PATCH v2 1/8] arch_topology: Don't set cluster identifier as physical package identifier Sudeep Holla
2022-05-20 12:31   ` Dietmar Eggemann
2022-05-20 13:13     ` Sudeep Holla
2022-05-18  9:33 ` [PATCH v2 2/8] arch_topology: Set thread sibling cpumask only within the cluster Sudeep Holla
2022-05-20 12:32   ` Dietmar Eggemann
2022-05-20 13:20     ` Sudeep Holla
2022-05-18  9:33 ` [PATCH v2 3/8] arch_topology: Set cluster identifier in each core/thread from /cpu-map Sudeep Holla
2022-05-19 16:55   ` Ionela Voinescu [this message]
2022-05-20 12:33     ` Dietmar Eggemann
2022-05-20 13:54       ` Sudeep Holla
2022-05-20 15:27     ` Sudeep Holla
2022-05-18  9:33 ` [PATCH v2 4/8] arch_topology: Add support for parsing sockets in /cpu-map Sudeep Holla
2022-05-18  9:33 ` [PATCH v2 5/8] arch_topology: Check for non-negative value rather than -1 for IDs validity Sudeep Holla
2022-05-18  9:33 ` [PATCH v2 6/8] arch_topology: Avoid parsing through all the CPUs once a outlier CPU is found Sudeep Holla
2022-05-18  9:33 ` [PATCH v2 7/8] of: base: add support to get the device node for the CPU's last level cache Sudeep Holla
2022-05-18  9:33 ` [PATCH v2 8/8] arch_topology: Add support to build llc_sibling on DT platforms Sudeep Holla
2022-05-19 18:10   ` Rob Herring
2022-05-20 12:59     ` Sudeep Holla
2022-05-20 14:36       ` Rob Herring
2022-05-20 15:06         ` Sudeep Holla
2022-05-20 12:33   ` Dietmar Eggemann
2022-05-20 14:56     ` Sudeep Holla
2022-05-19 16:32 ` [PATCH v2 0/8] arch_topology: Updates to add socket support and fix cluster ids Ionela Voinescu
2022-05-20 15:33   ` Sudeep Holla
