* [LTP] [PATCH 2/2] libltpnuma: remove restrictions on numa node-id
@ 2019-05-08  5:23 Li Wang
  2019-05-08 15:04 ` Jan Stancek
  2019-05-14 14:17 ` Cyril Hrubis
  0 siblings, 2 replies; 3+ messages in thread
From: Li Wang @ 2019-05-08  5:23 UTC (permalink / raw)
  To: ltp

Some ppc64le systems have non-contiguous NUMA node ids in their
hardware configuration, so we hit the warnings below while running
the set_mempolicy tests on them. To fix this, drop the upper-bound
restriction on the NUMA node id returned by get_mempolicy().

Error Log
---------
tst_test.c:1096: INFO: Timeout per run is 0h 50m 00s
tst_numa.c:190: INFO: Found 2 NUMA memory nodes
set_mempolicy01.c:63: PASS: set_mempolicy(MPOL_BIND) node 0
tst_numa.c:26: INFO: Node 0 allocated 16 pages
tst_numa.c:26: INFO: Node 8 allocated 0 pages
set_mempolicy01.c:82: PASS: child: Node 0 allocated 16
set_mempolicy01.c:63: PASS: set_mempolicy(MPOL_BIND) node 8
tst_numa.c:92: WARN: get_mempolicy(...) returned invalid node 8
tst_numa.c:92: WARN: get_mempolicy(...) returned invalid node 8
tst_numa.c:92: WARN: get_mempolicy(...) returned invalid node 8
...
tst_numa.c:26: INFO: Node 0 allocated 0 pages
tst_numa.c:26: INFO: Node 8 allocated 0 pages
set_mempolicy01.c:86: FAIL: child: Node 8 allocated 0, expected 16

lscpu
-----
Architecture:        ppc64le
...
CPU(s):              128
Core(s) per socket:  16
Socket(s):           2
NUMA node(s):        2
Model name:          POWER9, altivec supported
...
NUMA node0 CPU(s):   0-63
NUMA node8 CPU(s):   64-127

Signed-off-by: Li Wang <liwang@redhat.com>
---
 libs/libltpnuma/tst_numa.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/libs/libltpnuma/tst_numa.c b/libs/libltpnuma/tst_numa.c
index 0ba6daf39..56c8640ff 100644
--- a/libs/libltpnuma/tst_numa.c
+++ b/libs/libltpnuma/tst_numa.c
@@ -88,8 +88,9 @@ void tst_nodemap_count_pages(struct tst_nodemap *nodes,
 		if (ret < 0)
 			tst_brk(TBROK | TERRNO, "get_mempolicy() failed");
 
-		if (node < 0 || (unsigned int)node >= nodes->cnt) {
-			tst_res(TWARN, "get_mempolicy(...) returned invalid node %i\n", node);
+		if (node < 0) {
+			tst_res(TWARN,
+				"get_mempolicy(...) returned invalid node %i\n", node);
 			continue;
 		}
 
-- 
2.20.1



* [LTP] [PATCH 2/2] libltpnuma: remove restrictions on numa node-id
  2019-05-08  5:23 [LTP] [PATCH 2/2] libltpnuma: remove restrictions on numa node-id Li Wang
@ 2019-05-08 15:04 ` Jan Stancek
  2019-05-14 14:17 ` Cyril Hrubis
  1 sibling, 0 replies; 3+ messages in thread
From: Jan Stancek @ 2019-05-08 15:04 UTC (permalink / raw)
  To: ltp



----- Original Message -----
> Some ppc64le systems have non-contiguous NUMA node ids in their
> hardware configuration, so we hit the warnings below while running
> the set_mempolicy tests on them. To fix this, drop the upper-bound
> restriction on the NUMA node id returned by get_mempolicy().
> 
> Error Log
> ---------
> tst_test.c:1096: INFO: Timeout per run is 0h 50m 00s
> tst_numa.c:190: INFO: Found 2 NUMA memory nodes
> set_mempolicy01.c:63: PASS: set_mempolicy(MPOL_BIND) node 0
> tst_numa.c:26: INFO: Node 0 allocated 16 pages
> tst_numa.c:26: INFO: Node 8 allocated 0 pages
> set_mempolicy01.c:82: PASS: child: Node 0 allocated 16
> set_mempolicy01.c:63: PASS: set_mempolicy(MPOL_BIND) node 8
> tst_numa.c:92: WARN: get_mempolicy(...) returned invalid node 8
> tst_numa.c:92: WARN: get_mempolicy(...) returned invalid node 8
> tst_numa.c:92: WARN: get_mempolicy(...) returned invalid node 8
> ...
> tst_numa.c:26: INFO: Node 0 allocated 0 pages
> tst_numa.c:26: INFO: Node 8 allocated 0 pages
> set_mempolicy01.c:86: FAIL: child: Node 8 allocated 0, expected 16
> 
> lscpu
> -----
> Architecture:        ppc64le
> ...
> CPU(s):              128
> Core(s) per socket:  16
> Socket(s):           2
> NUMA node(s):        2
> Model name:          POWER9, altivec supported
> ...
> NUMA node0 CPU(s):   0-63
> NUMA node8 CPU(s):   64-127
> 
> Signed-off-by: Li Wang <liwang@redhat.com>
> ---
>  libs/libltpnuma/tst_numa.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/libs/libltpnuma/tst_numa.c b/libs/libltpnuma/tst_numa.c
> index 0ba6daf39..56c8640ff 100644
> --- a/libs/libltpnuma/tst_numa.c
> +++ b/libs/libltpnuma/tst_numa.c
> @@ -88,8 +88,9 @@ void tst_nodemap_count_pages(struct tst_nodemap *nodes,
>  		if (ret < 0)
>  			tst_brk(TBROK | TERRNO, "get_mempolicy() failed");
>  
> -		if (node < 0 || (unsigned int)node >= nodes->cnt) {
> -			tst_res(TWARN, "get_mempolicy(...) returned invalid node %i\n", node);
> +		if (node < 0) {
> +			tst_res(TWARN,
> +				"get_mempolicy(...) returned invalid node %i\n", node);
>  			continue;
>  		}
>  
> --
> 2.20.1

2/2 looks good to me:

Acked-by: Jan Stancek <jstancek@redhat.com>


* [LTP] [PATCH 2/2] libltpnuma: remove restrictions on numa node-id
  2019-05-08  5:23 [LTP] [PATCH 2/2] libltpnuma: remove restrictions on numa node-id Li Wang
  2019-05-08 15:04 ` Jan Stancek
@ 2019-05-14 14:17 ` Cyril Hrubis
  1 sibling, 0 replies; 3+ messages in thread
From: Cyril Hrubis @ 2019-05-14 14:17 UTC (permalink / raw)
  To: ltp

Hi!
This is obviously OK, applied, thanks for the fix.

I guess that later on we may replace the check by storing the maximal
NUMA node id in the NUMA mapping structure and checking that the id is
less than or equal to the max node.

-- 
Cyril Hrubis
chrubis@suse.cz
