From: Marcelo Tosatti <mtosatti@redhat.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Nitesh Narayan Lal <nitesh@redhat.com>, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-pci@vger.kernel.org, intel-wired-lan@lists.osuosl.org, frederic@kernel.org, sassmann@redhat.com, jesse.brandeburg@intel.com, lihong.yang@intel.com, helgaas@kernel.org, jeffrey.t.kirsher@intel.com, jacob.e.keller@intel.com, jlelli@redhat.com, hch@infradead.org, bhelgaas@google.com, mike.marciniszyn@intel.com, dennis.dalessandro@intel.com, thomas.lendacky@amd.com, jiri@nvidia.com, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, lgoncalv@redhat.com, Jakub Kicinski <kuba@kernel.org>, Dave Miller <davem@davemloft.net>
Subject: Re: [PATCH v4 4/4] PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs
Date: Thu, 22 Oct 2020 09:28:49 -0300
Message-ID: <20201022122849.GA148426@fuller.cnet>
In-Reply-To: <877drj72cz.fsf@nanos.tec.linutronix.de>

On Wed, Oct 21, 2020 at 10:25:48PM +0200, Thomas Gleixner wrote:
> On Tue, Oct 20 2020 at 20:07, Thomas Gleixner wrote:
> > On Tue, Oct 20 2020 at 12:18, Nitesh Narayan Lal wrote:
> >> However, IMHO we would still need a logic to prevent the devices from
> >> creating excess vectors.
> >
> > Managed interrupts are preventing exactly that by pinning the interrupts
> > and queues to one or a set of CPUs, which prevents vector exhaustion on
> > CPU hotplug.
> >
> > Non-managed, yes that is and always was a problem. One of the reasons
> > why managed interrupts exist.
>
> But why is this only a problem for isolation? The very same problem
> exists vs. CPU hotplug and therefore hibernation.
>
> On x86 we have at max. 204 vectors available for device interrupts per
> CPU. So assumed the only device interrupt in use is networking then any
> machine which has more than 204 network interrupts (queues, aux ...)
> active will prevent the machine from hibernation.
>
> Aside of that it's silly to have multiple queues targeted at a single
> CPU in case of hotplug. And that's not a theoretical problem. Some
> power management schemes shut down sockets when the utilization of a
> system is low enough, e.g. outside of working hours.

Exactly. It seems the proper way to handle this is to disable individual
vectors rather than moving them. And that is needed for dynamic
isolate / unisolate anyway...

> The whole point of multi-queue is to have locality so that traffic from
> a CPU goes through the CPU local queue. What's the point of having two
> or more queues on a CPU in case of hotplug?
>
> The right answer to this is to utilize managed interrupts and have
> according logic in your network driver to handle CPU hotplug. When a CPU
> goes down, then the queue which is associated to that CPU is quiesced
> and the interrupt core shuts down the relevant interrupt instead of
> moving it to an online CPU (which causes the whole vector exhaustion
> problem on x86). When the CPU comes online again, then the interrupt is
> reenabled in the core and the driver reactivates the queue.

Aha... But it would be necessary to do that from userspace (for runtime
isolate / unisolate).