From: "jianchao.wang" <jianchao.w.wang@oracle.com>
To: Keith Busch <keith.busch@intel.com>
Cc: axboe@fb.com, linux-kernel@vger.kernel.org, hch@lst.de,
	linux-nvme@lists.infradead.org, sagi@grimberg.me
Subject: Re: [PATCH] nvme-pci: assign separate irq vectors for adminq and ioq0
Date: Wed, 28 Feb 2018 23:46:20 +0800	[thread overview]
Message-ID: <cc46058f-bb7a-e108-c6e9-621daddae577@oracle.com> (raw)
In-Reply-To: <8066e06c-90f4-c21b-e36f-89f6e8ca28c5@oracle.com>



On 02/28/2018 11:42 PM, jianchao.wang wrote:
> Hi Keith
> 
> Thanks for your kind response and guidance
> 
> On 02/28/2018 11:27 PM, Keith Busch wrote:
>> On Wed, Feb 28, 2018 at 10:53:31AM +0800, jianchao.wang wrote:
>>> On 02/27/2018 11:13 PM, Keith Busch wrote:
>>>> On Tue, Feb 27, 2018 at 04:46:17PM +0800, Jianchao Wang wrote:
>>>>> Currently, adminq and ioq0 share the same irq vector. This is
>>>>> unfair for both adminq and ioq0.
>>>>>  - For adminq, its completion irq has to be bound to cpu0.
>>>>>  - For ioq0, when the irq fires for io completion, the adminq irq
>>>>>    action also has to be checked.
>>>>
>>>> This change log could use some improvements. Why is it bad if the admin
>>>> interrupt's affinity is with cpu0?
>>>
>>> The adminq interrupt should be able to fire on any CPU.
>>> Do we have any reason to bind it to cpu0?
>>
>> Your patch will have the admin vector CPU affinity mask set to
>> 0xff..ff. The first set bit for an online CPU is the one the IRQ handler
>> will run on, so the admin queue will still only run on CPU 0.
> 
> hmmm...yes.
> When I tested with only one irq vector, I got the following result:
>  124:          0          0     253541          0          0          0          0          0  IR-PCI-MSI 1048576-edge      nvme0q0, nvme0q1
> 

However, irqbalance may migrate the adminq irq away from cpu0.
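
To make the intent concrete, the rough idea (only a sketch, not the actual
patch; pdev and nr_io_queues are assumed to come from the probe path) is to
reserve one non-managed pre_vector for the adminq, so the io queues get their
own spread, managed vectors while the adminq vector keeps a default affinity
that irqbalance is still free to move:

	struct irq_affinity affd = {
		.pre_vectors = 1,	/* vector 0: adminq only, not affinity-managed */
	};
	int nr_vecs;

	/* one vector per io queue plus the dedicated adminq vector */
	nr_vecs = pci_alloc_irq_vectors_affinity(pdev, 2, nr_io_queues + 1,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
	if (nr_vecs < 0)
		return nr_vecs;

With that, ioq0 no longer has to share vector 0 with the adminq.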

>>  
>>>> Are you able to measure _any_ performance difference on IO queue 1 vs IO
>>>> queue 2 that you can attribute to IO queue 1's sharing vector 0?
>>>
>>> Actually, I didn't get any performance improvement on my own NVMe card.
>>> But it may be needed on some enterprise cards, especially when the media is persistent memory.
>>> nvme_irq will be invoked twice when the ioq0 irq fires, which introduces another unnecessary
>>> access to the DMA-mapped cq entry.
>>
>> A CPU reading its own memory isn't a DMA. It's just a cheap memory read.
> 
> Oh sorry, my bad. I meant it is an access to a DMA-mapped address, which is uncached.
> nvme_irq
>   -> nvme_process_cq
>     -> nvme_read_cqe
>       -> nvme_cqe_valid
> 
> static inline bool nvme_cqe_valid(struct nvme_queue *nvmeq, u16 head,
> 		u16 phase)
> {
> 	return (le16_to_cpu(nvmeq->cqes[head].status) & 1) == phase;
> }
> 
> Sincerely
> Jianchao
> 
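For completeness, this is roughly why the extra cq read happens with the
shared vector (a simplified sketch; the real code registers the handler
through queue_request_irq/pci_request_irq with a formatted name, and adminq
and ioq0 below stand for the two struct nvme_queue pointers): both queues put
nvme_irq on vector 0 with IRQF_SHARED, matching the single nvme0q0, nvme0q1
line in /proc/interrupts above, so every ioq0 completion also runs the adminq
handler and polls its cq entry through nvme_cqe_valid():

	/* both queues end up on the same (shared) vector 0 */
	request_irq(pci_irq_vector(pdev, 0), nvme_irq, IRQF_SHARED,
		    "nvme0q0", adminq);
	request_irq(pci_irq_vector(pdev, 0), nvme_irq, IRQF_SHARED,
		    "nvme0q1", ioq0);

	/*
	 * An ioq0 completion interrupt walks the shared irqaction chain,
	 * so nvme_irq(adminq) runs as well and nvme_cqe_valid() reads
	 * adminq->cqes[head].status from the DMA-coherent buffer even
	 * though nothing completed there.
	 */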


Thread overview: 7+ messages
2018-02-27  8:46 [PATCH] nvme-pci: assign separate irq vectors for adminq and ioq0 Jianchao Wang
2018-02-27 15:13 ` Keith Busch
2018-02-28  2:53   ` jianchao.wang
2018-02-28 15:27     ` Keith Busch
2018-02-28 15:42       ` jianchao.wang
2018-02-28 15:46         ` jianchao.wang [this message]
2018-02-28 15:53           ` Keith Busch
