From: "jianchao.wang" <jianchao.w.wang@oracle.com>
To: Sagi Grimberg <sagi@grimberg.me>, Christoph Hellwig <hch@lst.de>
Cc: keith.busch@intel.com, axboe@fb.com,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0
Date: Thu, 1 Mar 2018 18:05:53 +0800	[thread overview]
Message-ID: <66e4ad3e-4019-13ec-94c0-e168cc1d95b4@oracle.com> (raw)
In-Reply-To: <ddfc2ab4-1a51-c488-25e8-bc434edf8aba@grimberg.me>

Hi Sagi,

Thanks for your kind response.

On 03/01/2018 05:28 PM, Sagi Grimberg wrote:
> 
>> Note that we originally allocated irqs this way, and Keith changed
>> it a while ago for good reasons.  So I'd really like to see good
>> reasons for moving away from this, and some heuristics to figure
>> out which way to use.  E.g. if the device supports more irqs than
>> I/O queues your scheme might always be fine.
> 
> I still don't understand what this buys us in practice. Seems redundant
> to allocate another vector without any (even marginal) difference.
> 

Because the adminq and ioq0 share the same irq vector, every ioq0 completion interrupt has to invoke nvme_irq twice: once for ioq0 itself and once for the adminq's irq action, even when the adminq is idle.
We are trying to save every CPU cycle on the nvme host path, so why waste nvme_irq cycles here?
If the device provides enough vectors, we could give the adminq its own irq vector and avoid this.
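
To make the cost concrete, below is a minimal, purely illustrative sketch of the two layouts. It is not the actual nvme-pci code: the example_setup_irqs() helper and the "nvme-adminq"/"nvme-ioq0" names are made up for illustration; only nvme_irq(), request_irq(), pci_irq_vector() and IRQF_SHARED correspond to real kernel symbols.

/*
 * Illustrative sketch only -- not the real nvme-pci code.  It shows how
 * the admin queue and ioq0 land on the same shared irq action list when
 * they both use MSI-X vector 0, and how giving the adminq its own vector
 * avoids the extra handler invocation.
 */
#include <linux/interrupt.h>
#include <linux/pci.h>

/* stand-in for the real completion handler; 'data' is the queue */
static irqreturn_t nvme_irq(int irq, void *data)
{
	/* process completions for the queue passed in 'data' */
	return IRQ_HANDLED;
}

/* hypothetical helper, for illustration only */
static int example_setup_irqs(struct pci_dev *pdev, void *adminq, void *ioq0)
{
	int ret;

	/*
	 * Shared layout: both queues register on vector 0 with IRQF_SHARED.
	 * The genirq core keeps one action list per irq, so every interrupt
	 * on this vector calls nvme_irq() twice -- once with adminq, once
	 * with ioq0 -- even when the admin queue is idle.
	 */
	ret = request_irq(pci_irq_vector(pdev, 0), nvme_irq, IRQF_SHARED,
			  "nvme-adminq", adminq);
	if (ret)
		return ret;
	ret = request_irq(pci_irq_vector(pdev, 0), nvme_irq, IRQF_SHARED,
			  "nvme-ioq0", ioq0);
	if (ret)
		return ret;

	/*
	 * Separate layout (what this patch aims at when enough vectors are
	 * available): adminq keeps vector 0, ioq0 moves to vector 1, so an
	 * ioq0 interrupt only ever runs its own handler:
	 *
	 *   request_irq(pci_irq_vector(pdev, 0), nvme_irq, 0, "nvme-adminq", adminq);
	 *   request_irq(pci_irq_vector(pdev, 1), nvme_irq, 0, "nvme-ioq0", ioq0);
	 */
	return 0;
}

With the shared layout the genirq core walks both actions on every vector-0 interrupt; with the separate layout each queue only pays for its own handler.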

Sincerely
Jianchao 

Thread overview: 41+ messages
2018-02-28 15:48 [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0 Jianchao Wang
2018-02-28 15:59 ` Andy Shevchenko
2018-03-02  3:06   ` jianchao.wang
2018-03-12 18:59     ` Keith Busch
2018-03-13  1:47       ` jianchao.wang
2018-02-28 16:47 ` Christoph Hellwig
2018-03-01  9:28   ` Sagi Grimberg
2018-03-01 10:05     ` jianchao.wang [this message]
2018-03-01 15:15       ` Keith Busch
2018-03-02  3:11         ` jianchao.wang
2018-03-01 15:03   ` Ming Lei
2018-03-01 16:10     ` Keith Busch
2018-03-08  7:42       ` Christoph Hellwig
2018-03-09 17:24         ` Keith Busch
2018-03-12  9:09           ` Ming Lei
2018-10-08  5:05             ` nvme-pci: number of queues off by one Prasun Ratn
2018-10-08  5:59               ` Dongli Zhang
2018-10-08  6:58                 ` Dongli Zhang
2018-10-08 14:54                   ` Keith Busch
2018-10-08 10:19                 ` Ming Lei
2018-03-02  3:18   ` [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0 jianchao.wang
