From: Bart Van Assche <bart.vanassche@sandisk.com>
To: Keith Busch <keith.busch@intel.com>
Cc: Christoph Hellwig <hch@lst.de>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"axboe@fb.com" <axboe@fb.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 02/13] irq: Introduce IRQD_AFFINITY_MANAGED flag
Date: Wed, 15 Jun 2016 21:36:54 +0200
Message-ID: <86aa652b-48d0-a7bb-683e-bf43939aa811@sandisk.com>
In-Reply-To: <20160615160316.GB1919@localhost.localdomain>

On 06/15/2016 06:03 PM, Keith Busch wrote:
> On Wed, Jun 15, 2016 at 05:28:54PM +0200, Bart Van Assche wrote:
>> On 06/15/2016 05:14 PM, Keith Busch wrote:
>>> I think the idea is to have the irq_affinity mask match the CPU mapping
>>> on the submission side context associated with that particular vector. If
>>> two identical adapters generate the same submission CPU mapping, I don't
>>> think we can do better than matching irq_affinity masks.
>>
>> Has this been verified by measurements? Sorry but I'm not convinced that
>> using the same mapping for multiple identical adapters instead of spreading
>> interrupts will result in better performance.
>
> The interrupts automatically spread based on which CPU submitted the
> work. If you want to spread interrupts across more CPUs, then you can
> spread submissions to the CPUs you want to service the interrupts.
>
> Completing work on the same CPU that submitted it is quickest with
> its cache-hot access. I have equipment available to demo this. What
> affinity_mask policy would you like to see compared with the proposal?

Hello Keith,

Sorry that I had not yet made this clear, but my concern is about a 
system equipped with two or more adapters and with more CPU cores than 
the number of MSI-X interrupts per adapter. Consider e.g. a system with 
two adapters (A and B), 8 interrupts per adapter (A0..A7 and B0..B7), 32 
CPU cores and two NUMA nodes. Assuming that hyperthreading is disabled, 
will the patches from this patch series generate the following interrupt 
assignment?

0: A0 B0
1: A1 B1
2: A2 B2
3: A3 B3
4: A4 B4
5: A5 B5
6: A6 B6
7: A7 B7
8: (none)
...
31: (none)

The mapping I would like to see is as follows (assuming CPU cores 0..15 
correspond to NUMA node 0 and CPU cores 16..31 correspond to NUMA node 1):

0: A0
1: B0
2: (none)
3: (none)
4: A1
5: B1
6: (none)
7: (none)
8: A2
9: B2
10: (none)
11: (none)
12: A3
13: B3
14: (none)
15: (none)
...
31: (none)

Do you agree that, ignoring other interrupt assignments, the latter 
interrupt assignment scheme would result in higher throughput and lower 
interrupt processing latency?
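
For illustration, here is a minimal user-space sketch (hypothetical; 
not code from this patch series) that computes the placement I have in 
mind. The constants match the example above: two adapters with eight 
MSI-X vectors each on 32 cores, cores 0..15 on NUMA node 0 and cores 
16..31 on node 1:

/*
 * Hypothetical illustration only -- not code from the patch series.
 * Interleave the vectors of several identical adapters over the cores
 * instead of stacking them on the same cores.
 */
#include <stdio.h>

#define NR_CPUS     32
#define NR_ADAPTERS  2	/* adapters A and B */
#define NR_VECTORS   8	/* MSI-X vectors per adapter */

int main(void)
{
	/* space the per-vector groups evenly over all cores */
	int spacing = NR_CPUS / NR_VECTORS;	/* 4 cores per group */
	int adapter, vector;

	for (vector = 0; vector < NR_VECTORS; vector++)
		for (adapter = 0; adapter < NR_ADAPTERS; adapter++)
			printf("%2d: %c%d\n",
			       vector * spacing + adapter,
			       'A' + adapter, vector);
	return 0;
}

With this spacing each vector gets a dedicated core, and vectors 0..3 
of both adapters land on NUMA node 0 while vectors 4..7 land on node 1, 
instead of all sixteen vectors piling up on cores 0..7.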

Thanks,

Bart.

Thread overview: 55+ messages
2016-06-14 19:58 automatic interrupt affinity for MSI/MSI-X capable devices V2 Christoph Hellwig
2016-06-14 19:58 ` [PATCH 01/13] irq/msi: Remove unused MSI_FLAG_IDENTITY_MAP Christoph Hellwig
2016-06-16  9:05   ` Bart Van Assche
2016-06-14 19:58 ` [PATCH 02/13] irq: Introduce IRQD_AFFINITY_MANAGED flag Christoph Hellwig
2016-06-15  8:44   ` Bart Van Assche
2016-06-15 10:23     ` Christoph Hellwig
2016-06-15 10:42       ` Bart Van Assche
2016-06-15 15:14         ` Keith Busch
2016-06-15 15:28           ` Bart Van Assche
2016-06-15 16:03             ` Keith Busch
2016-06-15 19:36               ` Bart Van Assche [this message]
2016-06-15 20:06                 ` Keith Busch
2016-06-15 20:12                   ` Keith Busch
2016-06-15 20:50                     ` Bart Van Assche
2016-06-16 15:19                       ` Keith Busch
2016-06-22 11:56                         ` Alexander Gordeev
2016-06-16 15:20                 ` Christoph Hellwig
2016-06-16 15:39                   ` Bart Van Assche
2016-06-20 12:22                     ` Christoph Hellwig
2016-06-20 13:21                       ` Bart Van Assche
2016-06-21 14:31                         ` Christoph Hellwig
2016-06-16  9:08   ` Bart Van Assche
2016-06-14 19:58 ` [PATCH 03/13] irq: Add affinity hint to irq allocation Christoph Hellwig
2016-06-14 19:58 ` [PATCH 04/13] irq: Use affinity hint in irqdesc allocation Christoph Hellwig
2016-06-14 19:58 ` [PATCH 05/13] irq/msi: Make use of affinity aware allocations Christoph Hellwig
2016-06-14 19:58 ` [PATCH 06/13] irq: add a helper spread an affinity mask for MSI/MSI-X vectors Christoph Hellwig
2016-06-14 21:54   ` Guilherme G. Piccoli
2016-06-15  8:35     ` Bart Van Assche
2016-06-15 10:10     ` Christoph Hellwig
2016-06-15 13:09       ` Guilherme G. Piccoli
2016-06-16 15:16         ` Christoph Hellwig
2016-06-25 20:05   ` Alexander Gordeev
2016-06-30 17:48     ` Christoph Hellwig
2016-07-01  7:25       ` Alexander Gordeev
2016-06-14 19:59 ` [PATCH 07/13] pci: Provide sensible irq vector alloc/free routines Christoph Hellwig
2016-06-23 11:16   ` Alexander Gordeev
2016-06-30 16:54     ` Christoph Hellwig
2016-06-30 17:28       ` Alexander Gordeev
2016-06-30 17:35         ` Christoph Hellwig
2016-06-14 19:59 ` [PATCH 08/13] pci: spread interrupt vectors in pci_alloc_irq_vectors Christoph Hellwig
2016-06-25 20:22   ` Alexander Gordeev
2016-06-14 19:59 ` [PATCH 09/13] blk-mq: don't redistribute hardware queues on a CPU hotplug event Christoph Hellwig
2016-06-14 19:59 ` [PATCH 10/13] blk-mq: only allocate a single mq_map per tag_set Christoph Hellwig
2016-06-14 19:59 ` [PATCH 11/13] blk-mq: allow the driver to pass in an affinity mask Christoph Hellwig
2016-07-04  8:15   ` Alexander Gordeev
2016-07-04  8:38     ` Christoph Hellwig
2016-07-04  9:35       ` Alexander Gordeev
2016-07-10  3:41         ` Christoph Hellwig
2016-07-12  6:42           ` Alexander Gordeev
2016-06-14 19:59 ` [PATCH 12/13] nvme: switch to use pci_alloc_irq_vectors Christoph Hellwig
2016-06-14 19:59 ` [PATCH 13/13] nvme: remove the post_scan callout Christoph Hellwig
2016-06-16  9:45 ` automatic interrupt affinity for MSI/MSI-X capable devices V2 Bart Van Assche
2016-06-16 15:22   ` Christoph Hellwig
2016-06-26 19:40 ` Alexander Gordeev
2016-07-04  8:39 automatic interrupt affinity for MSI/MSI-X capable devices V3 Christoph Hellwig
2016-07-04  8:39 ` [PATCH 02/13] irq: Introduce IRQD_AFFINITY_MANAGED flag Christoph Hellwig
