From: Frederic Weisbecker <frederic@kernel.org>
To: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Nitesh Narayan Lal <nitesh@redhat.com>,
linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
linux-pci@vger.kernel.org, mtosatti@redhat.com,
sassmann@redhat.com, jeffrey.t.kirsher@intel.com,
jacob.e.keller@intel.com, jlelli@redhat.com, hch@infradead.org,
bhelgaas@google.com, mike.marciniszyn@intel.com,
dennis.dalessandro@intel.com, thomas.lendacky@amd.com,
jerinj@marvell.com, mathias.nyman@intel.com, jiri@nvidia.com
Subject: Re: [RFC][Patch v1 2/3] i40e: limit msix vectors based on housekeeping CPUs
Date: Tue, 22 Sep 2020 00:58:35 +0200 [thread overview]
Message-ID: <20200921225834.GA30521@lenoir> (raw)
In-Reply-To: <20200917112359.00006e10@intel.com>
On Thu, Sep 17, 2020 at 11:23:59AM -0700, Jesse Brandeburg wrote:
> Nitesh Narayan Lal wrote:
>
> > In a realtime environment, it is essential to isolate unwanted IRQs from
> > isolated CPUs to prevent latency overheads. Creating MSI-X vectors based
> > only on the online CPUs could lead to a potential issue on an RT setup
> > that has several isolated CPUs but very few housekeeping CPUs. This is
> > because in these kinds of setups an attempt to move the IRQs from the
> > isolated CPUs to the limited housekeeping CPUs might fail due to the
> > per-CPU vector limit. This could eventually result in latency spikes
> > because of the IRQ threads that we fail to move off the isolated CPUs.
> >
> > This patch limits the number of MSI-X vectors i40e adds based on the
> > available housekeeping CPUs, by using num_housekeeping_cpus().
> >
> > Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
>
> The driver changes are straightforward, but this isn't the only driver
> with this issue, right? I'm sure ixgbe and ice both have this problem
> too, you should fix them as well, at a minimum, and probably other
> vendors drivers:
>
> $ rg -c --stats num_online_cpus drivers/net/ethernet
> ...
> 50 files contained matches
Ouch, I was indeed surprised that these MSI vector allocations were done
at the driver level and not at some $SUBSYSTEM level.
The logic is already there in the driver, so I wouldn't oppose this very patch,
but would a shared infrastructure make sense for this? Something that would
also handle hotplug operations?
Does it possibly go even beyond networking drivers?
Thanks.
>
> for this patch i40e
> Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Thread overview: 30+ messages
2020-09-09 15:08 [RFC] [PATCH v1 0/3] isolation: limit msix vectors based on housekeeping CPUs Nitesh Narayan Lal
2020-09-09 15:08 ` [RFC][Patch v1 1/3] sched/isolation: API to get num of housekeeping CPUs Nitesh Narayan Lal
2020-09-17 18:18 ` Jesse Brandeburg
2020-09-17 18:43 ` Nitesh Narayan Lal
2020-09-17 20:11 ` Bjorn Helgaas
2020-09-17 21:48 ` Jacob Keller
2020-09-17 22:09 ` Nitesh Narayan Lal
2020-09-21 23:40 ` Frederic Weisbecker
2020-09-22 3:16 ` Nitesh Narayan Lal
2020-09-22 10:08 ` Frederic Weisbecker
2020-09-22 13:50 ` Nitesh Narayan Lal
2020-09-22 20:58 ` Frederic Weisbecker
2020-09-22 21:15 ` Nitesh Narayan Lal
2020-09-22 21:26 ` Andrew Lunn
2020-09-22 22:20 ` Nitesh Narayan Lal
2020-09-09 15:08 ` [RFC][Patch v1 2/3] i40e: limit msix vectors based on housekeeping CPUs Nitesh Narayan Lal
2020-09-11 15:23 ` Marcelo Tosatti
2020-09-17 18:23 ` Jesse Brandeburg
2020-09-17 18:31 ` Nitesh Narayan Lal
2020-09-21 22:58 ` Frederic Weisbecker [this message]
2020-09-22 3:08 ` Nitesh Narayan Lal
2020-09-22 9:54 ` Frederic Weisbecker
2020-09-22 13:34 ` Nitesh Narayan Lal
2020-09-22 20:44 ` Frederic Weisbecker
2020-09-22 21:05 ` Nitesh Narayan Lal
2020-09-09 15:08 ` [RFC][Patch v1 3/3] PCI: Limit pci_alloc_irq_vectors as per " Nitesh Narayan Lal
2020-09-10 19:22 ` Marcelo Tosatti
2020-09-10 19:31 ` Nitesh Narayan Lal
2020-09-22 13:54 ` Nitesh Narayan Lal
2020-09-22 21:08 ` Frederic Weisbecker