From: Christoph Hellwig <hch@infradead.org>
To: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Keith Busch <kbusch@kernel.org>,
	Andrey Nikitin <nikitina@amazon.com>,
	linux-nvme@lists.infradead.org, davebuch@amazon.com
Subject: Re: [RFC PATCH 0/3] nvme sq associations
Date: Sat, 25 Sep 2021 09:36:13 +0100	[thread overview]
Message-ID: <YU7ffdxOYeUTY/0P@infradead.org> (raw)
In-Reply-To: <de8f41b03ff2c82f7013ef3b9e7dc3b044c2b69f.camel@kernel.crashing.org>

On Sat, Sep 25, 2021 at 06:31:58PM +1000, Benjamin Herrenschmidt wrote:
> On Sat, 2021-09-25 at 12:02 +0900, Keith Busch wrote:
> > 
> > Different submission queue groups per NVM Set sounds right for this
> > feature, but I'm not sure it makes sense for these to have their own
> > completion queues: completions from different sets would try to
> > schedule on the same CPU. I think it should be more efficient to
> > break the 1:1
> > SQ:CQ pairing, and instead have all the SQs with the same CPU
> > affinity share a single CQ so that completions from different
> > namespaces could be handled in a single interrupt.
> 
> Can this be an incremental improvement ?

Honestly I'd rather not merge this whole patchset at all.  It is a
completely fringe feature for a totally misdesigned part of the NVMe
spec.  Until actual controllers in the hands of prosumers support
anything like that I'm very reluctant to bloat the driver fast path
for it.
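
For illustration, the shared-CQ layout Keith describes above -- several
submission queues with the same CPU affinity all completing into one
per-CPU completion queue -- might look roughly like the sketch below.
All names are hypothetical and chosen for this example only; this is
not code from the in-tree nvme driver.

#define MAX_SQS_PER_CQ	8	/* assumed cap on SQs sharing one CQ */

struct example_cq;

struct example_sq {
	unsigned int qid;		/* submission queue id */
	unsigned int set_id;		/* NVM Set this SQ is associated with */
	struct example_cq *cq;		/* completion queue shared with peer SQs */
};

struct example_cq {
	unsigned int qid;		/* completion queue id */
	int cpu;			/* CPU whose interrupt services this CQ */
	unsigned int nr_sqs;		/* SQs currently attached */
	struct example_sq *sqs[MAX_SQS_PER_CQ];
};

/*
 * Attach an SQ to the CQ serving its CPU, so that completions from
 * different NVM Sets are handled by a single per-CPU interrupt.
 */
static int example_attach_sq(struct example_cq *cq, struct example_sq *sq)
{
	if (cq->nr_sqs >= MAX_SQS_PER_CQ)
		return -1;
	sq->cq = cq;
	cq->sqs[cq->nr_sqs++] = sq;
	return 0;
}

The point of sharing the CQ is that completion handling stays on one
CPU regardless of which NVM Set the I/O targeted, rather than needing a
separate CQ and interrupt per SQ group.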

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


Thread overview: 11+ messages
2021-09-24 21:08 [RFC PATCH 0/3] nvme sq associations Andrey Nikitin
2021-09-24 21:08 ` [RFC PATCH 1/3] nvme: split admin queue in pci Andrey Nikitin
2021-09-24 21:08 ` [RFC PATCH 2/3] nvme: add NVM set structures Andrey Nikitin
2021-09-24 21:08 ` [RFC PATCH 3/3] nvme: implement SQ associations Andrey Nikitin
2021-09-25  3:02 ` [RFC PATCH 0/3] nvme sq associations Keith Busch
2021-09-25  8:31   ` Benjamin Herrenschmidt
2021-09-25  8:36     ` Christoph Hellwig [this message]
2021-09-29  6:07 ` Chaitanya Kulkarni
2021-09-29 13:17   ` Sagi Grimberg
2021-09-29  0:48 Nikitin, Andrey
2021-09-29  1:35 ` Keith Busch

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=YU7ffdxOYeUTY/0P@infradead.org \
    --to=hch@infradead.org \
    --cc=benh@kernel.crashing.org \
    --cc=davebuch@amazon.com \
    --cc=kbusch@kernel.org \
    --cc=linux-nvme@lists.infradead.org \
    --cc=nikitina@amazon.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.