From: Jacob Keller <jacob.e.keller@intel.com>
To: Thomas Gleixner <tglx@linutronix.de>, Marcelo Tosatti <mtosatti@redhat.com>
Cc: Nitesh Narayan Lal <nitesh@redhat.com>, Peter Zijlstra <peterz@infradead.org>,
	helgaas@kernel.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	linux-pci@vger.kernel.org, intel-wired-lan@lists.osuosl.org, frederic@kernel.org,
	sassmann@redhat.com, jesse.brandeburg@intel.com, lihong.yang@intel.com,
	jeffrey.t.kirsher@intel.com, jlelli@redhat.com, hch@infradead.org,
	bhelgaas@google.com, mike.marciniszyn@intel.com, dennis.dalessandro@intel.com,
	thomas.lendacky@amd.com, jiri@nvidia.com, mingo@redhat.com, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, lgoncalv@redhat.com
Subject: Re: [PATCH v4 4/4] PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs
Date: Mon, 26 Oct 2020 12:21:45 -0700
Message-ID: <86f8f667-bda6-59c4-91b7-6ba2ef55e3db@intel.com>
In-Reply-To: <875z6w4xt4.fsf@nanos.tec.linutronix.de>

On 10/26/2020 12:00 PM, Thomas Gleixner wrote:
> On Mon, Oct 26 2020 at 14:30, Marcelo Tosatti wrote:
>> On Fri, Oct 23, 2020 at 11:00:52PM +0200, Thomas Gleixner wrote:
>>> So without information from the driver which tells what the best number
>>> of interrupts is with a reduced number of CPUs, this cutoff will cause
>>> more problems than it solves. Regressions guaranteed.
>>
>> One might want to move from one interrupt per isolated app core
>> to zero, or vice versa. It seems that "best number of interrupts
>> is with reduced number of CPUs" information, is therefore in userspace,
>> not in driver...
>
> How does userspace know about the driver internals? Number of management
> interrupts, optimal number of interrupts per queue?
>

I guess this is the problem solved in part by the queue management work
that would make queues a thing that userspace is aware of.

Are there drivers which use more than one interrupt per queue? I know
drivers have multiple management interrupts, and I guess some drivers use
a single combined interrupt per Tx/Rx queue pair. It's also plausible to
have multiple queues served by one interrupt. I'm not sure how a single
queue with multiple interrupts would work, though.

>>> Managed interrupts base their interrupt allocation and spreading on
>>> information which is handed in by the individual driver and not on crude
>>> assumptions. They are not imposing restrictions on the use case.
>>>
>>> It's perfectly fine for isolated work to save a data set to disk after
>>> computation has finished and that just works with the per-cpu I/O queue
>>> which is otherwise completely silent.
>>
>> Userspace could only change the mask of interrupts which are not
>> triggered by requests from the local CPU (admin, error, mgmt, etc),
>> to avoid the vector exhaustion problem.
>>
>> However, there is no explicit way for userspace to know that, as far as
>> i know.
>>
>>  130:  34845      0      0      0      0      0      0      0  IR-PCI-MSI 33554433-edge  nvme0q1
>>  131:      0  27062      0      0      0      0      0      0  IR-PCI-MSI 33554434-edge  nvme0q2
>>  132:      0      0  24393      0      0      0      0      0  IR-PCI-MSI 33554435-edge  nvme0q3
>>  133:      0      0      0  24313      0      0      0      0  IR-PCI-MSI 33554436-edge  nvme0q4
>>  134:      0      0      0      0  20608      0      0      0  IR-PCI-MSI 33554437-edge  nvme0q5
>>  135:      0      0      0      0      0  22163      0      0  IR-PCI-MSI 33554438-edge  nvme0q6
>>  136:      0      0      0      0      0      0  23020      0  IR-PCI-MSI 33554439-edge  nvme0q7
>>  137:      0      0      0      0      0      0      0  24285  IR-PCI-MSI 33554440-edge  nvme0q8
>>
>> Can that be retrieved from PCI-MSI information, or drivers
>> have to inform this?
>
> The driver should use a different name for the admin queues.
>
> Thanks,
>
> tglx
>
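
For readers following the thread: the change under discussion in [PATCH v4
4/4] amounts to capping the upper bound a driver passes to
pci_alloc_irq_vectors() at the number of housekeeping (non-isolated) CPUs,
which is exactly the blanket cutoff Thomas objects to above. The sketch
below illustrates that idea only; hk_limit_max_vecs() and
example_request_vectors() are made-up names and the exact clamping policy
is an assumption, not the actual patch, although housekeeping_cpumask()
and HK_FLAG_MANAGED_IRQ are real kernel interfaces from this era.

/*
 * Illustrative sketch: cap the requested MSI/MSI-X vector count at the
 * number of housekeeping CPUs so isolated CPUs are not targeted by
 * device interrupts. Not the actual patch under review.
 */
#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/sched/isolation.h>

static unsigned int hk_limit_max_vecs(unsigned int min_vecs,
				      unsigned int max_vecs)
{
	unsigned int hk_cpus;

	/* CPUs not isolated from managed-IRQ handling. */
	hk_cpus = cpumask_weight(housekeeping_cpumask(HK_FLAG_MANAGED_IRQ));

	/* Only trim the upper bound; never go below the driver's minimum. */
	return clamp_t(unsigned int, hk_cpus, min_vecs, max_vecs);
}

/* A driver would then request vectors along these lines: */
static int example_request_vectors(struct pci_dev *pdev,
				   unsigned int min_vecs,
				   unsigned int max_vecs)
{
	return pci_alloc_irq_vectors(pdev, min_vecs,
				     hk_limit_max_vecs(min_vecs, max_vecs),
				     PCI_IRQ_MSIX | PCI_IRQ_MSI);
}

With such a clamp, a driver that would normally spread one queue interrupt
per online CPU only requests enough vectors to cover the housekeeping
CPUs; whether that is the right number for every driver is precisely what
the thread is debating.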