From: Ming Lei <ming.lei@redhat.com>
To: Keith Busch <kbusch@kernel.org>
Cc: "linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
Damien Le Moal <Damien.LeMoal@wdc.com>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
Tim Walker <tim.t.walker@seagate.com>,
linux-scsi <linux-scsi@vger.kernel.org>
Subject: Re: [LSF/MM/BPF TOPIC] NVMe HDD
Date: Fri, 14 Feb 2020 08:40:56 +0800
Message-ID: <20200214004056.GC4907@ming.t460p>
In-Reply-To: <20200213163038.GB7634@redsun51.ssa.fujisawa.hgst.com>
On Fri, Feb 14, 2020 at 01:30:38AM +0900, Keith Busch wrote:
> On Thu, Feb 13, 2020 at 04:34:13PM +0800, Ming Lei wrote:
> > On Thu, Feb 13, 2020 at 08:24:36AM +0000, Damien Le Moal wrote:
> > > Got it. And since a full queue will mean no more tags, submission will
> > > block in get_request() and there will be no chance for the elevator to
> > > merge anything (aside from opportunistic merging in plugs), right?
> > > So I guess NVMe HDDs will need some tuning in this area.
> >
> > The scheduler queue depth is usually twice the hw queue depth, so there
> > are usually enough requests for merging.
> >
> > For NVMe there is no per-namespace queue depth (the equivalent of SCSI's
> > per-device queue depth), and meanwhile the hw queue depth is big enough
> > that there is no chance to trigger merging.
>
> Most NVMe devices contain a single namespace anyway, so the shared tag
> queue depth is effectively the ns queue depth, and an NVMe HDD should
> advertise queue count and depth capabilities orders of magnitude lower
than what we're used to with NVMe SSDs. That should get merging and
> BLK_STS_DEV_RESOURCE handling to occur as desired, right?
Right.
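
To make the merging point concrete, this is roughly how the block layer
sizes the scheduler tags relative to the hw queue depth (paraphrasing
blk_mq_init_sched() from kernels around this time; exact constants may
differ):

	/*
	 * Scheduler tag depth defaults to twice the hw queue depth, capped
	 * at 2 * BLKDEV_MAX_RQ (2 * 128), so requests can sit in the
	 * elevator long enough to be merged before dispatch.
	 */
	q->nr_requests = 2 * min_t(unsigned int, q->tag_set->queue_depth,
				   BLKDEV_MAX_RQ);

When the advertised depth is in the SSD range, the hw queue almost never
fills, requests are dispatched immediately and there is little left to
merge; with an HDD-scale depth the hw queue fills quickly, requests back
up in the scheduler and merging happens as desired.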
The advertised queue depth might serve two purposes:

1) reflect the namespace's actual queueing capability, so that block layer
   merging is possible

2) avoid timeouts caused by too many in-flight IOs
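
As a rough sketch of how such a low advertised depth would flow into the
driver (hypothetical helper and numbers, not actual nvme driver code): the
depth reported by the device simply becomes the shared tag set depth, which
bounds the number of in-flight commands and, through the scheduler sizing
shown above, leaves room for merging:

	#include <linux/blk-mq.h>

	/*
	 * Hypothetical sketch: an NVMe HDD advertising a small queue depth
	 * ends up with a correspondingly small shared tag set.  Once those
	 * tags are used up, submitters wait and new bios get merged in the
	 * scheduler instead of being dispatched immediately, and the bounded
	 * number of in-flight commands keeps per-command latency, and hence
	 * timeouts, under control.
	 */
	static int hdd_setup_tag_set(struct blk_mq_tag_set *set,
				     const struct blk_mq_ops *ops,
				     unsigned int advertised_depth)
	{
		memset(set, 0, sizeof(*set));
		set->ops = ops;
		set->nr_hw_queues = 1;			/* few queues help an HDD */
		set->queue_depth = advertised_depth;	/* e.g. 32 instead of ~1023 */
		set->numa_node = NUMA_NO_NODE;
		return blk_mq_alloc_tag_set(set);
	}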
Thanks,
Ming