From: Nitesh Narayan Lal <nitesh@redhat.com>
To: linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
frederic@kernel.org, mtosatti@redhat.com, juri.lelli@redhat.com,
abelits@marvell.com, bhelgaas@google.com,
linux-pci@vger.kernel.org, rostedt@goodmis.org, mingo@kernel.org,
peterz@infradead.org, tglx@linutronix.de, davem@davemloft.net,
akpm@linux-foundation.org, sfr@canb.auug.org.au,
stephen@networkplumber.org, rppt@linux.vnet.ibm.com,
jinyuqi@huawei.com, zhangshaokun@hisilicon.com
Subject: [PATCH v4 0/3] Preventing job distribution to isolated CPUs
Date: Thu, 25 Jun 2020 18:34:40 -0400
Message-ID: <20200625223443.2684-1-nitesh@redhat.com>
This patch set originated from one of the patches posted earlier as
part of the "Task_isolation" mode [1] patch series by Alex Belits
<abelits@marvell.com>. Compared to what Alex posted earlier, I am
proposing only a couple of changes in this patch set.
Context
=======
At a broad level, all three patches in this patch set make the
respective driver/library respect isolated CPUs by not pinning any
jobs on them. Not doing so could hurt latency in RT use-cases.
Patches
=======
* Patch1:
The first patch makes cpumask_local_spread() aware of the
isolated CPUs. It ensures that the CPUs returned by this API
include only housekeeping CPUs.
* Patch2:
This patch ensures that a probe function called via
work_on_cpu() doesn't run any task on an isolated CPU.
* Patch3:
This patch makes store_rps_map() aware of the isolated
CPUs so that RPS doesn't queue any jobs on an isolated CPU.
Proposed Changes
================
To fix the above-mentioned issues, Alex used housekeeping_cpumask().
The only changes that I am proposing here are:
- Removing the dependency on CONFIG_TASK_ISOLATION that Alex proposed,
as it should be safe to rely on housekeeping_cpumask() even when
there are no isolated CPUs, in which case we fall back to using all
available CPUs in any of the above scenarios.
- Using both HK_FLAG_DOMAIN and HK_FLAG_WQ in Patches 2 & 3, because
we want the above fixes not only with isolcpus but
also with something like systemd's CPU affinity.
Testing
=======
* Patch 1:
The fix for cpumask_local_spread() is tested by creating VFs, loading
the iavf module, and adding a tracepoint to confirm that only
housekeeping CPUs are picked when an appropriate profile is set up,
and all CPUs when no CPU isolation is configured.
* Patch 2:
To test the PCI fix, I hotplugged a virtio-net-pci from qemu console
and forced its addition to a specific node to trigger the code path that
includes the proposed fix and verified that only housekeeping CPUs
are included via tracepoint.
* Patch 3:
To test the fix in store_rps_map(), I tried configuring an isolated
CPU by writing to /sys/class/net/en*/queues/rx*/rps_cpus, which
resulted in a 'write error: Invalid argument'. For the case where a
non-isolated CPU is written to rps_cpus, the operation succeeded
without any error.
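The rps_cpus file takes a hex CPU bitmask, so a quick way to reproduce
the test is to build the mask by hand (CPU numbers and the interface
name "eth0" below are hypothetical examples):

```shell
# Build a hex bitmask covering CPUs 0 and 1 (assumed housekeeping CPUs).
mask=$(( (1 << 0) | (1 << 1) ))
printf '%x\n' "$mask"    # -> 3

# With this fix applied, a mask of housekeeping CPUs is accepted,
# while a mask containing only an isolated CPU (e.g. 1 << 2 = 4)
# fails with 'write error: Invalid argument':
#   echo 3 > /sys/class/net/eth0/queues/rx-0/rps_cpus   # accepted
#   echo 4 > /sys/class/net/eth0/queues/rx-0/rps_cpus   # rejected
```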
Changes from v3[2]:
==================
- In patch 1, replaced HK_FLAG_WQ with HK_FLAG_MANAGED_IRQ based on the
suggestion from Frederic Weisbecker.
Changes from v2[3]:
==================
Both the following suggestions are from Peter Zijlstra.
- Patch1: Removed the extra while loop from cpumask_local_spread and fixed
the code styling issues.
- Patch3: Changed to use cpumask_empty() for verifying that the requested
CPUs are available in the housekeeping CPUs.
Changes from v1[4]:
==================
- Included the suggestions made by Bjorn Helgaas in the commit message.
- Included the 'Reviewed-by' and 'Acked-by' received for Patch-2.
[1] https://patchwork.ozlabs.org/project/netdev/patch/51102eebe62336c6a4e584c7a503553b9f90e01c.camel@marvell.com/
[2] https://patchwork.ozlabs.org/project/linux-pci/cover/20200623192331.215557-1-nitesh@redhat.com/
[3] https://patchwork.ozlabs.org/project/linux-pci/cover/20200622234510.240834-1-nitesh@redhat.com/
[4] https://patchwork.ozlabs.org/project/linux-pci/cover/20200610161226.424337-1-nitesh@redhat.com/
Alex Belits (3):
lib: Restrict cpumask_local_spread to housekeeping CPUs
PCI: Restrict probe functions to housekeeping CPUs
net: Restrict receive packets queuing to housekeeping CPUs
drivers/pci/pci-driver.c | 5 ++++-
lib/cpumask.c | 16 +++++++++++-----
net/core/net-sysfs.c | 10 +++++++++-
3 files changed, 24 insertions(+), 7 deletions(-)
--