From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe <axboe@fb.com>
Cc: Sagi Grimberg <sagi@grimberg.me>, Long Li <longli@microsoft.com>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
Nadolski Edmund <edmund.nadolski@intel.com>,
Keith Busch <kbusch@kernel.org>,
Thomas Gleixner <tglx@linutronix.de>,
Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH V3 0/2] nvme-pci: check CQ after batch submission for Microsoft device
Date: Sat, 23 Nov 2019 05:49:54 +0800 [thread overview]
Message-ID: <20191122214954.GB8700@ming.t460p> (raw)
In-Reply-To: <b5148303-f05d-71c8-787a-597958c1909c@fb.com>
On Fri, Nov 22, 2019 at 02:04:52PM +0000, Jens Axboe wrote:
> On 11/22/19 3:25 AM, Ming Lei wrote:
> >> as that will still overload the one cpu that the interrupt handler was
> >> assigned to. A dumb fix would be a cpu mask for the threaded interrupt
> >
> > Actually one CPU is fast enough to handle several drives' interrupt handling.
> > Also there is a per-queue depth limit, and the interrupt-flood issue seen in
> > networking can't be as serious on storage.
>
> This is true today, but it won't be true in the future. Let's aim for a
> solution that's a little more future-proof than just "enough today", if
> we're going to make changes in this area.
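For concreteness, the "cpu mask for the threaded interrupt" idea quoted
above could look roughly like the untested sketch below; the my_* names
are hypothetical, not from any posted patch:

#include <linux/interrupt.h>
#include <linux/cpumask.h>

static irqreturn_t my_hard_handler(int irq, void *data)
{
	/* quick check in hard-irq context, defer the heavy CQ reaping */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t my_thread_fn(int irq, void *data)
{
	/* reap completions in a schedulable, migratable context */
	return IRQ_HANDLED;
}

static int my_setup_queue_irq(int irq, void *data,
			      const struct cpumask *mask)
{
	int ret;

	ret = request_threaded_irq(irq, my_hard_handler, my_thread_fn,
				   IRQF_ONESHOT, "my-queue", data);
	if (ret)
		return ret;

	/* the "cpu mask" part: hint which CPUs may run the handler */
	irq_set_affinity_hint(irq, mask);
	return 0;
}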
That would be a new feature of future hardware, we don't know any of its
performance details, and it is hard to prepare for it now. Such hardware,
or such a use case, may never come:
- A storage device has a bounded queue depth, which limits the maximum
  number of in-flight requests to be handled in each queue's interrupt
  handler (see the first sketch after this list).
- Even supposing such fast hardware arrives, it isn't reasonable for it
  to rely on an N:1 CPU-to-queue mapping with a large N.
- Also, the IRQ matrix already balances the interrupt-handling load;
  that is, most of the time one CPU is responsible for handling just one
  hw queue's interrupt. Even in Azure's case, where 8 CPUs are mapped to
  one hw queue, only a few CPUs are responsible for at most 2 hw queues
  each (see the second sketch below).
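Here is a rough, simplified sketch of the first point, that the CQ ring
itself bounds per-interrupt work; the structure and field names below are
stand-ins for the real nvme-pci ones, not the actual driver code:

struct my_cq {
	u16		 head;
	u16		 q_depth;
	u8		 phase;
	u16		*status;	/* status words of the q_depth CQ entries */
};

static int my_reap_cq(struct my_cq *cq)
{
	int reaped = 0;

	/* an entry is new when its phase bit matches the expected phase */
	while ((cq->status[cq->head] & 1) == cq->phase) {
		/* complete the request for this CQ entry here */
		if (++cq->head == cq->q_depth) {
			cq->head = 0;
			cq->phase ^= 1;
		}
		reaped++;
	}

	/*
	 * At most q_depth commands can be outstanding per queue, so one
	 * interrupt only ever finds a bounded number of completions,
	 * which is why an interrupt flood is less of a concern here than
	 * with a NIC RX ring that the wire can refill indefinitely.
	 */
	return reaped;
}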
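And for the third point, the balanced spreading comes from managed IRQ
affinity. A minimal sketch of how a driver asks for it follows; this
mirrors the pci_alloc_irq_vectors_affinity() call nvme-pci makes, with
the driver-specific details and error handling trimmed:

#include <linux/pci.h>
#include <linux/interrupt.h>

static int my_alloc_queue_irqs(struct pci_dev *pdev,
			       unsigned int nr_io_queues)
{
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* e.g. one vector kept for the admin queue */
	};

	/*
	 * The core spreads the remaining vectors over all possible CPUs,
	 * so each CPU normally services one hw queue; only when CPUs
	 * outnumber vectors (as in the 8:1 Azure case) do several CPUs
	 * share one queue's interrupt.
	 */
	return pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
					      PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY,
					      &affd);
}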
So could we focus on the present and fix the regression first?
Thanks,
Ming
Thread overview: 16+ messages
2019-11-14 2:59 [PATCH V3 0/2] nvme-pci: check CQ after batch submission for Microsoft device Ming Lei
2019-11-14 2:59 ` [PATCH V3 1/2] nvme-pci: move sq/cq_poll lock initialization into nvme_init_queue Ming Lei
2019-11-14 2:59 ` [PATCH V3 2/2] nvme-pci: check CQ after batch submission for Microsoft device Ming Lei
2019-11-14 4:56 ` Keith Busch
2019-11-14 8:56 ` Ming Lei
2019-11-21 3:11 ` [PATCH V3 0/2] " Ming Lei
2019-11-21 6:14 ` Christoph Hellwig
2019-11-21 7:46 ` Ming Lei
2019-11-21 15:45 ` Keith Busch
2019-11-22 9:44 ` Ming Lei
2019-11-22 9:57 ` Christoph Hellwig
2019-11-22 10:25 ` Ming Lei
2019-11-22 14:04 ` Jens Axboe
2019-11-22 21:49 ` Ming Lei [this message]
2019-11-22 21:58 ` Jens Axboe
2019-11-22 22:30 ` Ming Lei