All of lore.kernel.org
* [PATCH 1/1] mm: only display online cpus of the numa node
@ 2017-06-20 12:43 ` Zhen Lei
From: Zhen Lei @ 2017-06-20 12:43 UTC (permalink / raw)
  To: linux-kernel, linux-api, Greg Kroah-Hartman, Michal Hocko, linux-mm
  Cc: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei

When I executed numactl -H (which reads /sys/devices/system/node/nodeX/cpumap
and displays cpumask_of_node for each node), I got different results on
x86 and arm64. For each NUMA node, the former displayed only online CPUs,
while the latter displayed all possible CPUs. Unfortunately, neither the
Linux documentation nor the numactl manual describes this behavior clearly.

I sent a mail to ask for help, and Michal Hocko <mhocko@kernel.org> replied
that he preferred to print online cpus because it doesn't really make much
sense to bind anything on offline nodes.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 drivers/base/node.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 5548f96..d5e7ce7 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -28,12 +28,14 @@ static struct bus_type node_subsys = {
 static ssize_t node_read_cpumap(struct device *dev, bool list, char *buf)
 {
 	struct node *node_dev = to_node(dev);
-	const struct cpumask *mask = cpumask_of_node(node_dev->dev.id);
+	struct cpumask mask;
+
+	cpumask_and(&mask, cpumask_of_node(node_dev->dev.id), cpu_online_mask);

 	/* 2008/04/07: buf currently PAGE_SIZE, need 9 chars per 32 bits. */
 	BUILD_BUG_ON((NR_CPUS/32 * 9) > (PAGE_SIZE-1));

-	return cpumap_print_to_pagebuf(list, buf, mask);
+	return cpumap_print_to_pagebuf(list, buf, &mask);
 }

 static inline ssize_t node_read_cpumask(struct device *dev,
--
2.5.0

* Re: [PATCH 1/1] mm: only display online cpus of the numa node
  2017-06-20 12:43 ` Zhen Lei
@ 2017-08-24  8:32   ` Michal Hocko
From: Michal Hocko @ 2017-08-24  8:32 UTC (permalink / raw)
  To: Zhen Lei
  Cc: linux-kernel, linux-api, Greg Kroah-Hartman, linux-mm, Zefan Li,
	Xinwei Hu, Tianhong Ding, Hanjun Guo, Catalin Marinas,
	Will Deacon

It seems this has slipped through the cracks. Let's CC the arm64 guys.

On Tue 20-06-17 20:43:28, Zhen Lei wrote:
> When I executed numactl -H (which reads /sys/devices/system/node/nodeX/cpumap
> and displays cpumask_of_node for each node), I got different results on
> x86 and arm64. For each NUMA node, the former displayed only online CPUs,
> while the latter displayed all possible CPUs. Unfortunately, neither the
> Linux documentation nor the numactl manual describes this behavior clearly.
> 
> I sent a mail to ask for help, and Michal Hocko <mhocko@kernel.org> replied
> that he preferred to print online cpus because it doesn't really make much
> sense to bind anything on offline nodes.

Yes, printing offline CPUs is just confusing, all the more so when the
behavior is not consistent across architectures. I believe the x86
behavior is the more appropriate one, because it is more logical to dump
the NUMA topology and use it for affinity setting than to add an
additional step to check the cpu state to achieve the same.

It is true that the online/offline state might change at any time, so the
above might be tricky on its own, but we should at least make the
behavior consistent.

> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  drivers/base/node.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/base/node.c b/drivers/base/node.c
> index 5548f96..d5e7ce7 100644
> --- a/drivers/base/node.c
> +++ b/drivers/base/node.c
> @@ -28,12 +28,14 @@ static struct bus_type node_subsys = {
>  static ssize_t node_read_cpumap(struct device *dev, bool list, char *buf)
>  {
>  	struct node *node_dev = to_node(dev);
> -	const struct cpumask *mask = cpumask_of_node(node_dev->dev.id);
> +	struct cpumask mask;
> +
> +	cpumask_and(&mask, cpumask_of_node(node_dev->dev.id), cpu_online_mask);
> 
>  	/* 2008/04/07: buf currently PAGE_SIZE, need 9 chars per 32 bits. */
>  	BUILD_BUG_ON((NR_CPUS/32 * 9) > (PAGE_SIZE-1));
> 
> -	return cpumap_print_to_pagebuf(list, buf, mask);
> +	return cpumap_print_to_pagebuf(list, buf, &mask);
>  }
> 
>  static inline ssize_t node_read_cpumask(struct device *dev,
> --
> 2.5.0
> 
> 

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH 1/1] mm: only display online cpus of the numa node
@ 2017-08-25 17:34     ` Will Deacon
From: Will Deacon @ 2017-08-25 17:34 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Zhen Lei, linux-kernel, linux-api, Greg Kroah-Hartman, linux-mm,
	Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Catalin Marinas

On Thu, Aug 24, 2017 at 10:32:26AM +0200, Michal Hocko wrote:
> It seems this has slipped through the cracks. Let's CC the arm64 guys.
> 
> On Tue 20-06-17 20:43:28, Zhen Lei wrote:
> > When I executed numactl -H (which reads /sys/devices/system/node/nodeX/cpumap
> > and displays cpumask_of_node for each node), I got different results on
> > x86 and arm64. For each NUMA node, the former displayed only online CPUs,
> > while the latter displayed all possible CPUs. Unfortunately, neither the
> > Linux documentation nor the numactl manual describes this behavior clearly.
> > 
> > I sent a mail to ask for help, and Michal Hocko <mhocko@kernel.org> replied
> > that he preferred to print online cpus because it doesn't really make much
> > sense to bind anything on offline nodes.
> 
> Yes, printing offline CPUs is just confusing, all the more so when the
> behavior is not consistent across architectures. I believe the x86
> behavior is the more appropriate one, because it is more logical to dump
> the NUMA topology and use it for affinity setting than to add an
> additional step to check the cpu state to achieve the same.
> 
> It is true that the online/offline state might change at any time, so the
> above might be tricky on its own, but we should at least make the
> behavior consistent.
> 
> > Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
> 
> Acked-by: Michal Hocko <mhocko@suse.com>

The concept looks fine to me, but shouldn't we use cpumask_var_t and
alloc/free_cpumask_var?

Will
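For reference, the cpumask_var_t variant being suggested might look roughly like the sketch below. This is a hedged guess at a possible v2, not code posted in this thread; the GFP_KERNEL flag and the 0 return on allocation failure are assumptions. With CONFIG_CPUMASK_OFFSTACK=y, a struct cpumask can be too large to live on the stack, which alloc_cpumask_var() avoids by allocating from the heap:

```c
static ssize_t node_read_cpumap(struct device *dev, bool list, char *buf)
{
	struct node *node_dev = to_node(dev);
	cpumask_var_t mask;
	ssize_t n;

	/* heap-allocated when CONFIG_CPUMASK_OFFSTACK=y, on-stack otherwise */
	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return 0;

	cpumask_and(mask, cpumask_of_node(node_dev->dev.id), cpu_online_mask);
	n = cpumap_print_to_pagebuf(list, buf, mask);
	free_cpumask_var(mask);

	return n;
}
```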

> >  drivers/base/node.c | 6 ++++--
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/base/node.c b/drivers/base/node.c
> > index 5548f96..d5e7ce7 100644
> > --- a/drivers/base/node.c
> > +++ b/drivers/base/node.c
> > @@ -28,12 +28,14 @@ static struct bus_type node_subsys = {
> >  static ssize_t node_read_cpumap(struct device *dev, bool list, char *buf)
> >  {
> >  	struct node *node_dev = to_node(dev);
> > -	const struct cpumask *mask = cpumask_of_node(node_dev->dev.id);
> > +	struct cpumask mask;
> > +
> > +	cpumask_and(&mask, cpumask_of_node(node_dev->dev.id), cpu_online_mask);
> > 
> >  	/* 2008/04/07: buf currently PAGE_SIZE, need 9 chars per 32 bits. */
> >  	BUILD_BUG_ON((NR_CPUS/32 * 9) > (PAGE_SIZE-1));
> > 
> > -	return cpumap_print_to_pagebuf(list, buf, mask);
> > +	return cpumap_print_to_pagebuf(list, buf, &mask);
> >  }
> > 
> >  static inline ssize_t node_read_cpumask(struct device *dev,
> > --
> > 2.5.0
> > 
> > 
> 
> -- 
> Michal Hocko
> SUSE Labs


* Re: [PATCH 1/1] mm: only display online cpus of the numa node
  2017-08-25 17:34     ` Will Deacon
@ 2017-08-28 13:13       ` Michal Hocko
From: Michal Hocko @ 2017-08-28 13:13 UTC (permalink / raw)
  To: Will Deacon
  Cc: Zhen Lei, linux-kernel, linux-api, Greg Kroah-Hartman, linux-mm,
	Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Catalin Marinas

On Fri 25-08-17 18:34:33, Will Deacon wrote:
> On Thu, Aug 24, 2017 at 10:32:26AM +0200, Michal Hocko wrote:
> > It seems this has slipped through the cracks. Let's CC the arm64 guys.
> > 
> > On Tue 20-06-17 20:43:28, Zhen Lei wrote:
> > > When I executed numactl -H (which reads /sys/devices/system/node/nodeX/cpumap
> > > and displays cpumask_of_node for each node), I got different results on
> > > x86 and arm64. For each NUMA node, the former displayed only online CPUs,
> > > while the latter displayed all possible CPUs. Unfortunately, neither the
> > > Linux documentation nor the numactl manual describes this behavior clearly.
> > > 
> > > I sent a mail to ask for help, and Michal Hocko <mhocko@kernel.org> replied
> > > that he preferred to print online cpus because it doesn't really make much
> > > sense to bind anything on offline nodes.
> > 
> > Yes, printing offline CPUs is just confusing, all the more so when the
> > behavior is not consistent across architectures. I believe the x86
> > behavior is the more appropriate one, because it is more logical to dump
> > the NUMA topology and use it for affinity setting than to add an
> > additional step to check the cpu state to achieve the same.
> > 
> > It is true that the online/offline state might change at any time, so the
> > above might be tricky on its own, but we should at least make the
> > behavior consistent.
> > 
> > > Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
> > 
> > Acked-by: Michal Hocko <mhocko@suse.com>
> 
> The concept looks fine to me, but shouldn't we use cpumask_var_t and
> alloc/free_cpumask_var?

This would be safer, but both callers of node_read_cpumap have shallow
call stacks, so I am not sure stack usage is a limiting factor here.

Zhen Lei, would you care to update that part please?

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH 1/1] mm: only display online cpus of the numa node
  2017-08-28 13:13       ` Michal Hocko
@ 2017-09-29  6:46         ` Leizhen (ThunderTown)
From: Leizhen (ThunderTown) @ 2017-09-29  6:46 UTC (permalink / raw)
  To: Michal Hocko, Will Deacon
  Cc: linux-kernel, linux-api, Greg Kroah-Hartman, linux-mm, Zefan Li,
	Xinwei Hu, Tianhong Ding, Hanjun Guo, Catalin Marinas



On 2017/8/28 21:13, Michal Hocko wrote:
> On Fri 25-08-17 18:34:33, Will Deacon wrote:
>> On Thu, Aug 24, 2017 at 10:32:26AM +0200, Michal Hocko wrote:
>>> It seems this has slipped through the cracks. Let's CC the arm64 guys.
>>>
>>> On Tue 20-06-17 20:43:28, Zhen Lei wrote:
>>>> When I executed numactl -H (which reads /sys/devices/system/node/nodeX/cpumap
>>>> and displays cpumask_of_node for each node), I got different results on
>>>> x86 and arm64. For each NUMA node, the former displayed only online CPUs,
>>>> while the latter displayed all possible CPUs. Unfortunately, neither the
>>>> Linux documentation nor the numactl manual describes this behavior clearly.
>>>>
>>>> I sent a mail to ask for help, and Michal Hocko <mhocko@kernel.org> replied
>>>> that he preferred to print online cpus because it doesn't really make much
>>>> sense to bind anything on offline nodes.
>>>
>>> Yes, printing offline CPUs is just confusing, all the more so when the
>>> behavior is not consistent across architectures. I believe the x86
>>> behavior is the more appropriate one, because it is more logical to dump
>>> the NUMA topology and use it for affinity setting than to add an
>>> additional step to check the cpu state to achieve the same.
>>>
>>> It is true that the online/offline state might change at any time, so the
>>> above might be tricky on its own, but we should at least make the
>>> behavior consistent.
>>>
>>>> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
>>>
>>> Acked-by: Michal Hocko <mhocko@suse.com>
>>
>> The concept looks fine to me, but shouldn't we use cpumask_var_t and
>> alloc/free_cpumask_var?
> 
> This would be safer, but both callers of node_read_cpumap have shallow
> call stacks, so I am not sure stack usage is a limiting factor here.
> 
> Zhen Lei, would you care to update that part please?
> 
Sure, I will send v2 immediately.

I'm so sorry that I missed this email until someone told me about it.

-- 
Thanks!
Best Regards


end of thread, other threads:[~2017-09-29  6:48 UTC | newest]

Thread overview: 5 messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-06-20 12:43 [PATCH 1/1] mm: only display online cpus of the numa node Zhen Lei
2017-08-24  8:32 ` Michal Hocko
2017-08-25 17:34   ` Will Deacon
2017-08-28 13:13     ` Michal Hocko
2017-09-29  6:46       ` Leizhen (ThunderTown)
