From: Nitesh Narayan Lal <nitesh@redhat.com>
To: linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
frederic@kernel.org, mtosatti@redhat.com, juri.lelli@redhat.com,
abelits@marvell.com, bhelgaas@google.com,
linux-pci@vger.kernel.org, rostedt@goodmis.org, mingo@kernel.org,
peterz@infradead.org, tglx@linutronix.de
Subject: [Patch v1 3/3] net: restrict queuing of receive packets to housekeeping CPUs
Date: Wed, 10 Jun 2020 12:12:26 -0400 [thread overview]
Message-ID: <20200610161226.424337-4-nitesh@redhat.com> (raw)
In-Reply-To: <20200610161226.424337-1-nitesh@redhat.com>
From: Alex Belits <abelits@marvell.com>
With the existing implementation of store_rps_map(), packets are
queued in the receive path on the backlog queues of other CPUs
regardless of whether those CPUs are isolated. This can add latency
overhead to any RT workload running on an isolated CPU.

This patch ensures that store_rps_map() only uses housekeeping CPUs
for storing the rps_map, returning -EINVAL if the requested mask
contains no housekeeping CPU.
Signed-off-by: Alex Belits <abelits@marvell.com>
Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
---
net/core/net-sysfs.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index e353b822bb15..16e433287191 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -11,6 +11,7 @@
#include <linux/if_arp.h>
#include <linux/slab.h>
#include <linux/sched/signal.h>
+#include <linux/sched/isolation.h>
#include <linux/nsproxy.h>
#include <net/sock.h>
#include <net/net_namespace.h>
@@ -741,7 +742,7 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue,
{
struct rps_map *old_map, *map;
cpumask_var_t mask;
- int err, cpu, i;
+ int err, cpu, i, hk_flags;
static DEFINE_MUTEX(rps_map_mutex);
if (!capable(CAP_NET_ADMIN))
@@ -756,6 +757,13 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue,
return err;
}
+ hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
+ cpumask_and(mask, mask, housekeeping_cpumask(hk_flags));
+ if (cpumask_weight(mask) == 0) {
+ free_cpumask_var(mask);
+ return -EINVAL;
+ }
+
map = kzalloc(max_t(unsigned int,
RPS_MAP_SIZE(cpumask_weight(mask)), L1_CACHE_BYTES),
GFP_KERNEL);
--
2.18.4
2020-06-10 16:12 [PATCH v1 0/3] Preventing job distribution to isolated CPUs Nitesh Narayan Lal
2020-06-10 16:12 ` [Patch v1 1/3] lib: restricting cpumask_local_spread to only houskeeping CPUs Nitesh Narayan Lal
2020-06-10 16:12 ` [Patch v1 2/3] PCI: prevent work_on_cpu's probe to execute on isolated CPUs Nitesh Narayan Lal
2020-06-16 20:05 ` Bjorn Helgaas
2020-06-16 22:03 ` Nitesh Narayan Lal
2020-06-16 23:22 ` Frederic Weisbecker
2020-06-10 16:12 ` Nitesh Narayan Lal [this message]
2020-06-16 17:26 ` [PATCH v1 0/3] Preventing job distribution to " Marcelo Tosatti