From: "Guilherme G. Piccoli" <gpiccoli@linux.vnet.ibm.com>
To: Christoph Hellwig <hch@lst.de>
Cc: linux-block@vger.kernel.org, linux-pci@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	axboe@fb.com, tglx@linutronix.de, bart.vanassche@sandisk.com
Subject: Re: [PATCH 06/13] irq: add a helper spread an affinity mask for MSI/MSI-X vectors
Date: Wed, 15 Jun 2016 10:09:33 -0300	[thread overview]
Message-ID: <5761538D.6060303@linux.vnet.ibm.com> (raw)
In-Reply-To: <20160615101045.GB16425@lst.de>

Thanks for the responses Bart and Christoph.


On 06/15/2016 07:10 AM, Christoph Hellwig wrote:
> On Tue, Jun 14, 2016 at 06:54:22PM -0300, Guilherme G. Piccoli wrote:
>> On 06/14/2016 04:58 PM, Christoph Hellwig wrote:
>>> This is lifted from the blk-mq code and adopted to use the affinity mask
>>> concept just intruced in the irq handling code.
>>
>> Very nice patch Christoph, thanks. There's a little typo above, on
>> "intruced".
>
> fixed.
>
>> Another little typo above in "assining".
>
> fixed as well.
>
>> I take this opportunity to ask you something, since I'm working on
>> related code in a specific driver
>
> Which driver?  One of the points here is to get this sort of code out
> of drivers and into common code..

A network driver, i40e. I'd be glad to implement (or see) common code 
that exposes the topology information I need, but I was implementing it 
in i40e more as a test case/toy example heheh...


>> - sorry in advance if my question is
>> silly or if I misunderstood your code.
>>
>> The function irq_create_affinity_mask() below deals with the case in which
>> we have nr_vecs < num_online_cpus(); in this case, wouldn't it be a good
>> idea to try to distribute the vecs among cores?
>>
>> Example: if we have 128 online CPUs, 8 per core (meaning 16 cores) and 64
>> vecs, I guess it would be ideal to distribute 4 vecs _per core_, leaving 4
>> CPUs in each core without vecs.
>
> There have been some reports about the blk-mq IRQ distribution being
> suboptimal, but no one sent patches so far.  This patch just moves the
> existing algorithm into the core code to be better bisectable.
>
> I think an algorithm that takes cores into account instead of just SMT
> sibling would be very useful.  So if you have a case where this helps
> for you an incremental patch (or even one against the current blk-mq
> code for now) would be appreciated.

...but now I'll focus on the common/general case! Thanks for the 
suggestion Christoph. I guess it would be even better to have a generic 
function that retrieves an optimal mask, something like 
topology_get_optimal_mask(n, *cpumask), in which we compute the best 
distribution of n CPUs among all cores and return such a mask - the 
interesting case being when n < num_online_cpus(). This function could 
then be used inside your irq_create_affinity_mask() and maybe in other 
places where it's needed.

I was planning to use topology_core_id() to retrieve the core of a 
given CPU; if anybody has a better idea, I'd be glad to hear it.

Cheers,


Guilherme




