From: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
To: acme@kernel.org, jolsa@kernel.org, disgoel@linux.vnet.ibm.com
Cc: mpe@ellerman.id.au, linux-perf-users@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, maddy@linux.vnet.ibm.com,
	rnsastry@linux.ibm.com, kjain@linux.ibm.com,
	linux-kernel@vger.kernel.org, srikar@linux.vnet.ibm.com,
	irogers@google.com
Subject: [PATCH V3 0/2] Fix perf bench numa to work with machines having #CPUs > 1K
Date: Tue, 12 Apr 2022 22:10:57 +0530	[thread overview]
Message-ID: <20220412164059.42654-1-atrajeev@linux.vnet.ibm.com> (raw)

The numa benchmark in "perf bench" fails on systems configured
with more than 1024 CPUs. The benchmark uses "sched_getaffinity"
and "sched_setaffinity" to work with CPU affinity.

Example snippet from numa benchmark:
<<>>
perf: bench/numa.c:302: bind_to_node: Assertion `!(ret)' failed.
Aborted (core dumped)
<<>>

The bind_to_node function uses "sched_getaffinity" to save the cpumask.
This fails with EINVAL because the default cpu_set_t mask size in glibc
is 1024 bits.

To overcome this 1024-CPU mask size limitation of cpu_set_t,
change the mask size using the CPU_*_S macros, i.e. use CPU_ALLOC to
allocate the cpumask, CPU_ALLOC_SIZE for its size, and CPU_SET_S to
set a bit in the mask.

Fix all the relevant places in the code to use a mask size large
enough to represent the number of possible CPUs in the system.
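
For illustration, a minimal standalone sketch of the dynamically sized
mask usage (this is not the exact code from the patch; here
sysconf(_SC_NPROCESSORS_CONF) stands in for the possible-CPU count):
<<>>
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Stand-in for the number of possible CPUs on the system. */
	long ncpus = sysconf(_SC_NPROCESSORS_CONF);

	/* Allocate a mask sized for ncpus instead of relying on the
	 * fixed 1024-bit cpu_set_t. */
	cpu_set_t *mask = CPU_ALLOC(ncpus);
	size_t size = CPU_ALLOC_SIZE(ncpus);

	if (!mask)
		return 1;

	CPU_ZERO_S(size, mask);

	/* With the dynamically sized mask, sched_getaffinity() no
	 * longer fails with EINVAL when the system has > 1024 CPUs. */
	if (sched_getaffinity(0, size, mask)) {
		perror("sched_getaffinity");
		CPU_FREE(mask);
		return 1;
	}

	printf("CPUs in affinity mask: %d\n", CPU_COUNT_S(size, mask));
	CPU_FREE(mask);
	return 0;
}
<<>>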

This patchset also fixes the parse_setup_cpu_list function in the
numa bench to check whether the input CPU is online before binding a
task to that CPU. This addresses failures where the CPU number is
within the maximum CPU count but the CPU itself is offline; in that
case sched_setaffinity fails when the cpumask has that CPU's bit set.
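
A rough sketch of such an online check (the actual patch uses perf's
sysfs helpers such as sysfs__read_str; the sysfs path handling and the
fallback for CPUs without an "online" file, e.g. CPU 0, are assumptions
of this sketch):
<<>>
#include <stdio.h>

/* Return 1 if the CPU is online, 0 otherwise. */
static int cpu_is_online(int cpu)
{
	char path[128];
	char buf[8] = { 0 };
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/online", cpu);

	f = fopen(path, "r");
	if (!f)
		return 1;	/* no "online" file: assume online (e.g. CPU 0) */

	if (!fgets(buf, sizeof(buf), f)) {
		fclose(f);
		return 0;
	}
	fclose(f);

	return buf[0] == '1';
}
<<>>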

Patch 1 fixes parse_setup_cpu_list to check whether the CPU used to bind
a task is online. Patch 2 fixes bench numa to work with machines
having #CPUs > 1K.

Athira Rajeev (2):
  tools/perf: Fix perf bench numa testcase to check if CPU used to bind
    task is online
  perf bench: Fix numa bench to fix usage of affinity for machines with
    #CPUs > 1K

Changelog:
v2 -> v3
Link to the v2 version:
https://lore.kernel.org/all/20220406175113.87881-1-atrajeev@linux.vnet.ibm.com/
 - From the v2 version, patches 1 and 2 are now part of upstream.
 - This v3 version separates out the former patches 3 and 4 and addresses
   review comments from Arnaldo, which include using sysfs__read_str for
   reading the sysfs file and fixing the compilation issues observed on
   Debian.

 tools/perf/bench/numa.c  | 136 +++++++++++++++++++++++++++++----------
 tools/perf/util/header.c |  51 +++++++++++++++
 tools/perf/util/header.h |   1 +
 3 files changed, 153 insertions(+), 35 deletions(-)

-- 
2.35.1

