From: Pawel Chmielewski <pawel.chmielewski@intel.com>
To: pawel.chmielewski@intel.com
Cc: Jonathan.Cameron@huawei.com, andriy.shevchenko@linux.intel.com,
baohua@kernel.org, bristot@redhat.com, bsegall@google.com,
davem@davemloft.net, dietmar.eggemann@arm.com, gal@nvidia.com,
gregkh@linuxfoundation.org, hca@linux.ibm.com,
jacob.e.keller@intel.com, jesse.brandeburg@intel.com,
jgg@nvidia.com, juri.lelli@redhat.com, kuba@kernel.org,
leonro@nvidia.com, linux-crypto@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
linux@rasmusvillemoes.dk, mgorman@suse.de, mingo@redhat.com,
netdev@vger.kernel.org, peter@n8pjl.ca, peterz@infradead.org,
rostedt@goodmis.org, saeedm@nvidia.com, tariqt@nvidia.com,
tony.luck@intel.com, torvalds@linux-foundation.org,
ttoukan.linux@gmail.com, vincent.guittot@linaro.org,
vschneid@redhat.com, yury.norov@gmail.com
Subject: [PATCH v2 1/1] ice: Change assigning method of the CPU affinity masks
Date: Thu, 16 Feb 2023 15:54:55 +0100 [thread overview]
Message-ID: <20230216145455.661709-1-pawel.chmielewski@intel.com> (raw)
In-Reply-To: <20230208153905.109912-1-pawel.chmielewski@intel.com>
With the introduction of sched_numa_hop_mask() and for_each_numa_hop_mask(),
the affinity masks for queue vectors can be conveniently set by preferring the
CPUs that are closest to the NUMA node of the parent PCI device.
Signed-off-by: Pawel Chmielewski <pawel.chmielewski@intel.com>
---
Changes since v1:
* Removed obsolete comment
* Inverted condition for loop escape
* Increment v_idx only when the CPU is online
---
drivers/net/ethernet/intel/ice/ice_base.c | 24 +++++++++++++++++++----
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 9e36f01dfa4f..27b00d224c5d 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -121,9 +121,6 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
 	if (vsi->type == ICE_VSI_VF)
 		goto out;
 
-	/* only set affinity_mask if the CPU is online */
-	if (cpu_online(v_idx))
-		cpumask_set_cpu(v_idx, &q_vector->affinity_mask);
 
 	/* This will not be called in the driver load path because the netdev
 	 * will not be created yet. All other cases with register the NAPI
@@ -659,8 +656,10 @@ int ice_vsi_wait_one_rx_ring(struct ice_vsi *vsi, bool ena, u16 rxq_idx)
  */
 int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
 {
+	cpumask_t *aff_mask, *last_aff_mask = cpu_none_mask;
 	struct device *dev = ice_pf_to_dev(vsi->back);
-	u16 v_idx;
+	int numa_node = dev->numa_node;
+	u16 v_idx, cpu = 0;
 	int err;
 
 	if (vsi->q_vectors[0]) {
@@ -674,6 +673,23 @@ int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
 		goto err_out;
 	}
 
+	v_idx = 0;
+
+	for_each_numa_hop_mask(aff_mask, numa_node) {
+		for_each_cpu_andnot(cpu, aff_mask, last_aff_mask) {
+			if (v_idx >= vsi->num_q_vectors)
+				goto out;
+
+			if (cpu_online(cpu)) {
+				cpumask_set_cpu(cpu, &vsi->q_vectors[v_idx]->affinity_mask);
+				v_idx++;
+			}
+		}
+
+		last_aff_mask = aff_mask;
+	}
+
+out:
 	return 0;
 
 err_out:
--
2.37.3
Thread overview: 36+ messages
2023-01-21 4:24 [PATCH RESEND 0/9] sched: cpumask: improve on cpumask_local_spread() locality Yury Norov
2023-01-21 4:24 ` [PATCH 1/9] lib/find: introduce find_nth_and_andnot_bit Yury Norov
2023-01-21 4:24 ` [PATCH 2/9] cpumask: introduce cpumask_nth_and_andnot Yury Norov
2023-01-21 4:24 ` [PATCH 3/9] sched: add sched_numa_find_nth_cpu() Yury Norov
2023-02-03 0:58 ` Chen Yu
2023-02-07 5:09 ` Jakub Kicinski
2023-02-07 10:29 ` Valentin Schneider
2023-02-17 1:39 ` Yury Norov
2023-02-17 11:11 ` Andy Shevchenko
2023-02-20 19:46 ` Jakub Kicinski
2023-01-21 4:24 ` [PATCH 4/9] cpumask: improve on cpumask_local_spread() locality Yury Norov
2023-01-21 4:24 ` [PATCH 5/9] lib/cpumask: reorganize cpumask_local_spread() logic Yury Norov
2023-01-21 4:24 ` [PATCH 6/9] sched/topology: Introduce sched_numa_hop_mask() Yury Norov
2023-01-21 4:24 ` [PATCH 7/9] sched/topology: Introduce for_each_numa_hop_mask() Yury Norov
2023-01-21 4:24 ` [PATCH 8/9] net/mlx5e: Improve remote NUMA preferences used for the IRQ affinity hints Yury Norov
2023-01-21 4:24 ` [PATCH 9/9] lib/cpumask: update comment for cpumask_local_spread() Yury Norov
2023-01-22 12:57 ` [PATCH RESEND 0/9] sched: cpumask: improve on cpumask_local_spread() locality Tariq Toukan
2023-01-23 9:57 ` Valentin Schneider
2023-01-29 8:07 ` Tariq Toukan
2023-01-30 20:22 ` Jakub Kicinski
2023-02-02 17:33 ` Jakub Kicinski
2023-02-02 17:37 ` Yury Norov
2023-02-08 2:25 ` Jakub Kicinski
2023-02-08 4:20 ` patchwork-bot+netdevbpf
2023-02-08 15:39 ` [PATCH 1/1] ice: Change assigning method of the CPU affinity masks Pawel Chmielewski
[not found] ` <CAH-L+nO+KyzPSX_F0fh+9i=0rW1hoBPFTGbXc1EX+4MGYOR1kA@mail.gmail.com>
2023-02-08 16:08 ` Andy Shevchenko
2023-02-08 16:39 ` Yury Norov
2023-02-08 16:58 ` Andy Shevchenko
2023-02-08 19:11 ` kernel test robot
2023-02-09 2:41 ` Philip Li
2023-02-08 19:22 ` kernel test robot
2023-02-08 23:21 ` Jakub Kicinski
2023-02-09 5:14 ` kernel test robot
2023-02-16 14:54 ` Pawel Chmielewski [this message]
2023-02-16 15:14 ` [PATCH v2 " Andy Shevchenko
2023-02-16 15:16 ` Andy Shevchenko