From: Sumit Saxena <sumit.saxena@broadcom.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: tglx@linutronix.de, hch@lst.de, linux-kernel@vger.kernel.org,
	Kashyap Desai <kashyap.desai@broadcom.com>,
	Shivasharan Srikanteshwara 
	<shivasharan.srikanteshwara@broadcom.com>
Subject: RE: Affinity managed interrupts vs non-managed interrupts
Date: Wed, 29 Aug 2018 16:16:23 +0530	[thread overview]
Message-ID: <300d6fef733ca76ced581f8c6304bac6@mail.gmail.com> (raw)
In-Reply-To: <20180829084618.GA24765@ming.t460p>

> -----Original Message-----
> From: Ming Lei [mailto:ming.lei@redhat.com]
> Sent: Wednesday, August 29, 2018 2:16 PM
> To: Sumit Saxena <sumit.saxena@broadcom.com>
> Cc: tglx@linutronix.de; hch@lst.de; linux-kernel@vger.kernel.org
> Subject: Re: Affinity managed interrupts vs non-managed interrupts
>
> Hello Sumit,
Hi Ming,
Thanks for the response.
>
> On Tue, Aug 28, 2018 at 12:04:52PM +0530, Sumit Saxena wrote:
> >  Affinity managed interrupts vs non-managed interrupts
> >
> > Hi Thomas,
> >
> > We are working on a next generation MegaRAID product where the
> > requirement is to allocate an additional 16 MSI-x vectors on top of
> > the MSI-x vectors the megaraid_sas driver usually allocates. The
> > MegaRAID adapter supports 128 MSI-x vectors.
> >
> > To explain the requirement and solution, consider a 2 socket system
> > (each socket having 36 logical CPUs). The current driver will
> > allocate 72 MSI-x vectors in total by calling
> > pci_alloc_irq_vectors() with the PCI_IRQ_AFFINITY flag. All 72
> > MSI-x vectors will have affinity spread across the NUMA nodes and
> > the interrupts are affinity managed.
> >
> > If the driver instead calls pci_alloc_irq_vectors_affinity() with
> > pre_vectors = 16, it can allocate 16 + 72 MSI-x vectors.
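
To make this concrete, the allocation we have in mind looks roughly
like the following (untested sketch; the helper name and the exact
min/max bounds are illustrative only):

  #include <linux/interrupt.h>
  #include <linux/pci.h>

  /*
   * Reserve 16 non-managed pre_vectors for the extra reply queues and
   * let the kernel spread the remaining vectors across the online
   * CPUs as affinity managed interrupts.
   */
  static int megasas_alloc_msix(struct pci_dev *pdev, unsigned int ncpus)
  {
          struct irq_affinity desc = {
                  .pre_vectors = 16,      /* excluded from spreading */
          };

          /* 16 extra queues + one managed vector per online CPU */
          return pci_alloc_irq_vectors_affinity(pdev, 16 + 1, 16 + ncpus,
                          PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &desc);
  }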
>
> Could you explain a bit what the specific use case for the extra 16
> vectors is?
We are trying to avoid the penalty of one interrupt per IO completion,
so we decided to coalesce interrupts on these extra 16 reply queues.
For the regular 72 reply queues we will not coalesce interrupts,
because under a low IO workload coalescing adds completion latency as
there are fewer IO completions per coalescing window.
In the IO submission path, the driver will decide which set of reply
queues to use (either the extra 16 coalesced reply queues or the
regular 72 reply queues) based on the IO workload.
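
As a sketch of that submission-path decision (illustrative only: the
helper name, the busy cutoff, and the queue numbering are assumptions,
and the real heuristic is still being tuned):

  #include <linux/smp.h>
  #include <linux/types.h>

  /*
   * Steer IO to the coalesced queues when the device is busy,
   * otherwise use the per-CPU managed queues so that low-IOPS
   * completions are not delayed by coalescing.
   */
  static u16 megasas_select_reply_queue(unsigned int outstanding_ios)
  {
          const unsigned int busy_cutoff = 8;   /* assumed placeholder */

          if (outstanding_ios > busy_cutoff)
                  /* coalesced queues sit at MSI-x indexes 0..15 */
                  return outstanding_ios % 16;

          /* regular managed queues start after the 16 pre_vectors */
          return 16 + raw_smp_processor_id();
  }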
>
> >
> > All pre_vectors (16) will be mapped to all available online CPUs,
> > but the effective affinity of each vector is to CPU 0. Our
> > requirement is to have the 16 pre_vectors reply queues mapped to
> > the local NUMA node, with the effective CPUs spread within the
> > local node's cpu mask. Without changing kernel code, we can
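
For example, the driver itself could at least place the 16 pre_vectors
by affinity hint (rough, untested sketch; that the pre_vectors occupy
MSI-x indexes 0..15 is our assumption):

  #include <linux/interrupt.h>
  #include <linux/pci.h>

  /*
   * Spread the 16 non-managed pre_vectors across the CPUs of the
   * adapter's local NUMA node.  irq_set_affinity_hint() also applies
   * the mask as the initial affinity, and irqbalance honors the hint
   * afterwards.
   */
  static void megasas_hint_pre_vectors(struct pci_dev *pdev)
  {
          int node = dev_to_node(&pdev->dev);
          int i, cpu;

          for (i = 0; i < 16; i++) {
                  cpu = cpumask_local_spread(i, node);
                  irq_set_affinity_hint(pci_irq_vector(pdev, i),
                                        cpumask_of(cpu));
          }
  }

Since each hint is a single CPU, the effective affinity follows it,
instead of all vectors collapsing onto CPU 0.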
>
> If all CPUs in one NUMA node are offline, can this use case work as
> expected?
> Seems we have to understand what the use case is and how it works.

Yes. If all CPUs of a NUMA node are offlined, the IRQ-CPU affinity
will be broken and irqbalance takes care of migrating the affected
IRQs to online CPUs of a different NUMA node.
When the offlined CPUs are onlined again, irqbalance restores the
affinity.
>
>
> Thanks,
> Ming


Thread overview: 28+ messages
     [not found] <eccc46e12890a1d033d9003837012502@mail.gmail.com>
2018-08-29  8:46 ` Affinity managed interrupts vs non-managed interrupts Ming Lei
2018-08-29 10:46   ` Sumit Saxena [this message]
2018-08-30 17:15     ` Kashyap Desai
2018-08-31  6:54     ` Ming Lei
2018-08-31  7:50       ` Kashyap Desai
2018-08-31 20:24         ` Thomas Gleixner
2018-08-31 21:49           ` Kashyap Desai
2018-08-31 22:48             ` Thomas Gleixner
2018-08-31 23:37               ` Kashyap Desai
2018-09-02 12:02                 ` Thomas Gleixner
2018-09-03  5:34                   ` Kashyap Desai
2018-09-03 16:28                     ` Thomas Gleixner
2018-09-04 10:29                       ` Kashyap Desai
2018-09-05  5:46                         ` Dou Liyang
2018-09-05  9:45                           ` Kashyap Desai
2018-09-05 10:38                             ` Thomas Gleixner
2018-09-06 10:14                               ` Dou Liyang
2018-09-06 11:46                                 ` Thomas Gleixner
2018-09-11  9:13                                   ` Christoph Hellwig
2018-09-11  9:38                                     ` Dou Liyang
2018-09-11  9:22               ` Christoph Hellwig
2018-09-03  2:13         ` Ming Lei
2018-09-03  6:10           ` Kashyap Desai
2018-09-03  9:21             ` Ming Lei
2018-09-03  9:50               ` Kashyap Desai
2018-09-11  9:21     ` Christoph Hellwig
2018-09-11  9:54       ` Kashyap Desai
2018-08-28  6:47 Sumit Saxena
