linux-kernel.vger.kernel.org archive mirror
From: Yury Norov <yury.norov@gmail.com>
To: Valentin Schneider <vschneid@redhat.com>
Cc: linux-kernel@vger.kernel.org,
	"David S. Miller" <davem@davemloft.net>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Barry Song <baohua@kernel.org>, Ben Segall <bsegall@google.com>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Gal Pressman <gal@nvidia.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Heiko Carstens <hca@linux.ibm.com>,
	Ingo Molnar <mingo@redhat.com>, Jakub Kicinski <kuba@kernel.org>,
	Jason Gunthorpe <jgg@nvidia.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Leon Romanovsky <leonro@nvidia.com>, Mel Gorman <mgorman@suse.de>,
	Peter Zijlstra <peterz@infradead.org>,
	Rasmus Villemoes <linux@rasmusvillemoes.dk>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Tariq Toukan <tariqt@nvidia.com>,
	Tariq Toukan <ttoukan.linux@gmail.com>,
	Tony Luck <tony.luck@intel.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	linux-crypto@vger.kernel.org, netdev@vger.kernel.org,
	linux-rdma@vger.kernel.org
Subject: Re: [PATCH v2 0/4] cpumask: improve on cpumask_local_spread() locality
Date: Tue, 15 Nov 2022 10:32:31 -0800	[thread overview]
Message-ID: <Y3PXw8Hqn+RCMg2J@yury-laptop> (raw)
In-Reply-To: <xhsmh7czwyvtj.mognet@vschneid.remote.csb>

On Tue, Nov 15, 2022 at 05:24:56PM +0000, Valentin Schneider wrote:
> Hi,
> 
> On 12/11/22 11:09, Yury Norov wrote:
> > cpumask_local_spread() currently checks the local node for the presence
> > of the i'th CPU, and if it finds nothing, makes a flat search among all
> > non-local CPUs. We can do better by checking CPUs per NUMA hop.
> >
> > This series is inspired by Tariq Toukan and Valentin Schneider's "net/mlx5e:
> > Improve remote NUMA preferences used for the IRQ affinity hints"
> >
> > https://patchwork.kernel.org/project/netdevbpf/patch/20220728191203.4055-3-tariqt@nvidia.com/
> >
> > According to their measurements, for mlx5e:
> >
> >         Bottleneck in RX side is released, reached linerate (~1.8x speedup).
> >         ~30% less cpu util on TX.
> >
> > This patch makes cpumask_local_spread() traverse CPUs based on NUMA
> > distance in the same way, and I expect a comparable improvement for its
> > users, as in the mlx5e case.
> >
> > I tested new behavior on my VM with the following NUMA configuration:
> >
> > root@debian:~# numactl -H
> > available: 4 nodes (0-3)
> > node 0 cpus: 0 1 2 3
> > node 0 size: 3869 MB
> > node 0 free: 3740 MB
> > node 1 cpus: 4 5
> > node 1 size: 1969 MB
> > node 1 free: 1937 MB
> > node 2 cpus: 6 7
> > node 2 size: 1967 MB
> > node 2 free: 1873 MB
> > node 3 cpus: 8 9 10 11 12 13 14 15
> > node 3 size: 7842 MB
> > node 3 free: 7723 MB
> > node distances:
> > node   0   1   2   3
> >   0:  10  50  30  70
> >   1:  50  10  70  30
> >   2:  30  70  10  50
> >   3:  70  30  50  10
> >
> > And the cpumask_local_spread() for each node and offset traversing looks
> > like this:
> >
> > node 0:   0   1   2   3   6   7   4   5   8   9  10  11  12  13  14  15
> > node 1:   4   5   8   9  10  11  12  13  14  15   0   1   2   3   6   7
> > node 2:   6   7   0   1   2   3   8   9  10  11  12  13  14  15   4   5
> > node 3:   8   9  10  11  12  13  14  15   4   5   6   7   0   1   2   3
> >
> 
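(Illustration only, not part of the series: the per-node orders above can
be dumped with a plain loop over cpumask_local_spread(). The helper name
and printk-based output below are mine, and the sketch assumes all CPUs
are online, as in the VM above.)

#include <linux/cpumask.h>
#include <linux/nodemask.h>
#include <linux/printk.h>

/* Dump the spread order for every node, as in the table quoted above. */
static void dump_local_spread(void)
{
	unsigned int i;
	int node;

	for_each_node(node) {
		pr_info("node %d:", node);
		for (i = 0; i < num_online_cpus(); i++)
			pr_cont(" %3u", cpumask_local_spread(i, node));
		pr_cont("\n");
	}
}
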
> Is this meant as a replacement for [1]?

No. Your series adds an iterator, and in my experience code that uses
iterators of that sort is almost always better and easier to understand
than cpumask_nth()- or cpumask_next()-like users.

The only advantage of my series is that it keeps the existing codebase
untouched.
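
To make the comparison concrete, here is a sketch of the two styles
(illustration only: for_each_numa_hop_mask() and for_each_cpu_andnot()
below follow my reading of your series [1] and the related cpumask
patches, so their exact names and semantics are assumptions, not the
final API):

#include <linux/cpumask.h>
#include <linux/rcupdate.h>
#include <linux/topology.h>

/* Existing style: callers stay untouched, one lookup per vector. */
static void spread_vectors_flat(unsigned int nvecs, int node, int *vec_to_cpu)
{
	unsigned int i;

	for (i = 0; i < nvecs; i++)
		vec_to_cpu[i] = cpumask_local_spread(i, node);
}

/*
 * Iterator style: walk CPUs hop by hop in a single pass, masking out
 * CPUs already seen at closer hops. RCU protection around the hop masks
 * is an assumption about the proposed API.
 */
static void spread_vectors_hops(unsigned int nvecs, int node, int *vec_to_cpu)
{
	const struct cpumask *mask, *prev = cpu_none_mask;
	unsigned int i = 0, cpu;

	rcu_read_lock();
	for_each_numa_hop_mask(mask, node) {
		for_each_cpu_andnot(cpu, mask, prev) {
			if (i >= nvecs)
				goto done;
			vec_to_cpu[i++] = cpu;
		}
		prev = mask;
	}
done:
	rcu_read_unlock();
}

The first keeps every existing caller as-is; the second reads closer to
the intent (spread by increasing NUMA distance) and does the whole spread
in one pass.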
 
> I like that this is changing an existing interface so that all current
> users directly benefit from the change. Now, about half of the users of
> cpumask_local_spread() use it in a loop with incremental @i parameter,
> which makes the repeated bsearch a bit of a shame, but then I'm tempted to
> say the first point makes it worth it.
> 
> [1]: https://lore.kernel.org/all/20221028164959.1367250-1-vschneid@redhat.com/

For the very common case of calling cpumask_local_spread() sequentially
for CPUs 0 through nr_cpu_ids (the first style in the sketch above), the
complexity of my approach is O(n * log n), while yours is amortized O(n),
which is better. Not a big deal _now_, as you mentioned in the other
email, but we never know how things will evolve, right?

So I would take both, and perhaps mention in the comment to
cpumask_local_spread() that there is a better alternative for callers
that invoke the function for all CPUs incrementally.
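
Something along these lines, for example (strawman wording, not taken
from either series):

/*
 * Possible addition to the cpumask_local_spread() comment:
 *
 * Note: callers that need the i'th CPU for every i in 0..N-1 may prefer
 * the NUMA-hop iterators, which walk the CPUs in the same NUMA-distance
 * order in a single pass instead of repeating the search on every call.
 */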

Thanks,
Yury


Thread overview: 16+ messages
2022-11-12 19:09 [PATCH v2 0/4] cpumask: improve on cpumask_local_spread() locality Yury Norov
2022-11-12 19:09 ` [PATCH v2 1/4] lib/find: introduce find_nth_and_andnot_bit Yury Norov
2022-11-12 19:09 ` [PATCH v2 2/4] cpumask: introduce cpumask_nth_and_andnot Yury Norov
2022-11-12 19:09 ` [PATCH v2 3/4] sched: add sched_numa_find_nth_cpu() Yury Norov
2022-11-14 14:32   ` Andy Shevchenko
2022-11-14 15:02     ` Andy Shevchenko
2022-12-08  2:55     ` Yury Norov
2022-11-15 17:25   ` Valentin Schneider
2022-11-12 19:09 ` [PATCH v2 4/4] cpumask: improve on cpumask_local_spread() locality Yury Norov
2022-11-15 17:24 ` [PATCH v2 0/4] " Valentin Schneider
2022-11-15 18:32   ` Yury Norov [this message]
2022-11-17 12:23     ` Valentin Schneider
2022-11-28  6:39       ` Tariq Toukan
2022-11-30  1:47         ` Yury Norov
2022-12-07 12:53           ` Tariq Toukan
2022-12-07 20:45             ` Yury Norov
