From: Damien Le Moal <Damien.LeMoal@wdc.com>
To: Ming Lei <ming.lei@redhat.com>, Keith Busch <kbusch@kernel.org>
Cc: Tim Walker <tim.t.walker@seagate.com>,
Hannes Reinecke <hare@suse.de>,
"Martin K. Petersen" <martin.petersen@oracle.com>,
"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
linux-scsi <linux-scsi@vger.kernel.org>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: [LSF/MM/BPF TOPIC] NVMe HDD
Date: Wed, 19 Feb 2020 01:53:53 +0000
Message-ID: <BYAPR04MB58165C6B400AE30986F988D5E7100@BYAPR04MB5816.namprd04.prod.outlook.com>
In-Reply-To: <20200219013137.GA31488@ming.t460p>

On 2020/02/19 10:32, Ming Lei wrote:
> On Wed, Feb 19, 2020 at 02:41:14AM +0900, Keith Busch wrote:
>> On Tue, Feb 18, 2020 at 10:54:54AM -0500, Tim Walker wrote:
>>> With regard to our discussion on queue depths, it's common knowledge
>>> that an HDD chooses commands from its internal command queue to
>>> optimize performance. The HDD looks at things like the current
>>> actuator position, current media rotational position, power
>>> constraints, command age, etc to choose the best next command to
>>> service. A large number of commands in the queue gives the HDD a
>>> better selection of commands from which to choose to maximize
>>> throughput/IOPS/etc but at the expense of the added latency due to
>>> commands sitting in the queue.
>>>
>>> NVMe doesn't allow us to pull commands randomly from the SQ, so the
>>> HDD should attempt to fill its internal queue from the various SQs,
>>> according to the SQ servicing policy, so it can have a large number of
>>> commands to choose from for its internal command processing
>>> optimization.
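
To make that selection process concrete, here is a toy user-space sketch of
rotational position ordering: greedily pick the queued command with the
lowest estimated seek-plus-rotation cost, discounted by age to avoid
starvation. The structures, the cost model, and the weights are all invented
for illustration; real firmware is far more elaborate.

/* Hypothetical rotational position ordering (RPO) sketch: from the
 * commands currently queued, pick the one with the lowest estimated
 * access cost from the current head position. */
#include <stdio.h>
#include <stdlib.h>

struct cmd {
	unsigned int cylinder;	/* target track */
	unsigned int angle;	/* target rotational position, 0..359 */
	unsigned int age_ms;	/* time spent queued, for starvation control */
};

/* Estimated service cost: seek distance plus rotational wait,
 * discounted by age so old commands are not starved forever. */
static long cost(const struct cmd *c, unsigned int cur_cyl,
		 unsigned int cur_angle)
{
	long seek = labs((long)c->cylinder - (long)cur_cyl);
	long rot = (long)((c->angle + 360 - cur_angle) % 360);

	return seek * 10 + rot - (long)c->age_ms;
}

/* Return the index of the best next command among n queued commands.
 * A deeper queue gives this loop more candidates, hence a cheaper best
 * pick on average -- exactly the throughput/latency trade-off above. */
static int pick_next(const struct cmd *q, int n, unsigned int cur_cyl,
		     unsigned int cur_angle)
{
	int i, best = 0;

	for (i = 1; i < n; i++)
		if (cost(&q[i], cur_cyl, cur_angle) <
		    cost(&q[best], cur_cyl, cur_angle))
			best = i;
	return best;
}

int main(void)
{
	struct cmd q[] = {
		{ 100, 90, 0 }, { 105, 270, 40 }, { 4000, 10, 5 },
	};
	int n = pick_next(q, 3, 102, 45);

	printf("next: cylinder %u, angle %u\n", q[n].cylinder, q[n].angle);
	return 0;
}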
>>
>> You don't need multiple queues for that. While the device has to fetch
>> commands from a host's submission queue in FIFO order, it may reorder
>> their execution and completion however it wants, which you can do with
>> a single queue.
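
The point is worth illustrating: completions carry a command identifier
(CID) and may arrive in any order, so the drive can execute a FIFO-fetched
batch in whatever order suits the media. Here is a sketch of the host side
of that contract, with a deliberately simplified CQE layout and made-up
request names:

/* Completions are matched back to outstanding requests by CID, so a
 * single SQ does not constrain the device's execution order. */
#include <stdint.h>
#include <stdio.h>

struct cqe {			/* simplified completion queue entry */
	uint16_t sq_head;	/* how far the device has consumed the SQ */
	uint16_t cid;		/* which command completed */
	uint16_t status;
};

#define QD 8
static const char *inflight[QD];	/* request names indexed by CID */

static void complete_cmd(const struct cqe *e)
{
	printf("completed cid %u (%s), sq head now %u\n",
	       e->cid, inflight[e->cid], e->sq_head);
	inflight[e->cid] = NULL;
}

int main(void)
{
	int i;

	/* Host submitted cid 0, 1, 2 in FIFO order... */
	inflight[0] = "read LBA 1000";
	inflight[1] = "read LBA 8";
	inflight[2] = "read LBA 1001";

	/* ...but the drive completed them in a seek-friendly order. */
	struct cqe done[] = { { 3, 1, 0 }, { 3, 0, 0 }, { 3, 2, 0 } };

	for (i = 0; i < 3; i++)
		complete_cmd(&done[i]);
	return 0;
}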
>>
>>> It seems to me that the host would want to limit the total number of
>>> outstanding commands to an NVMe HDD
>>
>> The host shouldn't have to decide on limits. NVMe lets the device report
>> its queue count and depth. It should be the device's responsibility to
>
> Will an NVMe HDD support multiple namespaces? If yes, this queue depth
> isn't enough, given that all namespaces share the single host queue depth.
>
>> report appropriate values that maximize IOPS within your latency limits,
>> and the host will react accordingly.
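
For reference, the per-queue depth limit is advertised in the controller's
CAP register (MQES, bits 15:0, zero-based, so the usable depth is MQES + 1).
Below is a user-space sketch that reads it straight from BAR0; the PCI
address is a placeholder, and in practice the nvme driver reads this for
you at probe time.

/* Read MQES from the NVMe CAP register (offset 0x0 of BAR0).
 * Requires root; the device path below is an example only. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0",
		      O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	void *bar = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
	if (bar == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	uint64_t cap = *(volatile uint64_t *)bar;	/* CAP register */

	/* MQES is zero-based: max entries per queue is MQES + 1 */
	printf("max queue entries: %llu\n",
	       (unsigned long long)((cap & 0xffff) + 1));
	munmap(bar, 4096);
	close(fd);
	return 0;
}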
>
> Suppose an NVMe HDD supports only a single namespace and a single queue:
> if the device reports just one host queue depth, block-layer IO sort/merge
> can only be done well when the device provides saturation feedback.
>
> So it looks like either a per-namespace queue depth or a per-namespace
> device saturation feedback mechanism is needed; otherwise the NVMe HDD
> may have to do internal IO sort/merge.
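
To make the feedback idea concrete, here is a toy model of per-namespace
throttling in the spirit of SCSI QUEUE FULL handling: back off the
effective depth multiplicatively when the device signals saturation, and
creep back up on clean completions. The structure and the policy are
invented for illustration, not an existing kernel interface.

/* Hypothetical per-NS saturation feedback: additive increase,
 * multiplicative decrease of the effective queue depth. */
#include <stdio.h>

struct ns_throttle {
	unsigned int max_depth;	/* advertised hardware limit */
	unsigned int eff_depth;	/* current effective limit */
	unsigned int inflight;
};

static int can_dispatch(const struct ns_throttle *t)
{
	return t->inflight < t->eff_depth;
}

/* Device signalled "saturated": back off multiplicatively. */
static void on_saturated(struct ns_throttle *t)
{
	if (t->eff_depth > 1)
		t->eff_depth /= 2;
}

/* A completion without pushback: creep back toward the limit. */
static void on_complete(struct ns_throttle *t)
{
	t->inflight--;
	if (t->eff_depth < t->max_depth)
		t->eff_depth++;
}

int main(void)
{
	struct ns_throttle t = { .max_depth = 64, .eff_depth = 64,
				 .inflight = 64 };

	on_saturated(&t);	/* pushback while saturated at QD 64 */
	printf("effective depth after pushback: %u\n", t.eff_depth);
	on_complete(&t);
	printf("can dispatch more: %s\n", can_dispatch(&t) ? "yes" : "no");
	return 0;
}

With something like this, everything above the effective depth stays queued
host-side, where the block layer can keep sorting and merging it.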
SAS and SATA HDDs already do a lot of internal IO reordering and merging
today. That is partly why, even with the "none" scheduler selected, you can
see IOPS increase with the queue depth used.
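
This is easy to reproduce with fio on a SATA or SAS disk: select the none
scheduler and sweep --iodepth, and random-read IOPS typically keep climbing
well past a queue depth of 1. The device path below is a placeholder.

  $ echo none > /sys/block/sda/queue/scheduler
  $ fio --name=qd-sweep --filename=/dev/sda --direct=1 --rw=randread \
        --bs=4k --ioengine=libaio --iodepth=32 --runtime=30 --time_based

(Repeat with --iodepth=1, 2, 4, ... and compare the reported IOPS.)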
But yes, I think you do have a point about the saturation feedback. It may
be necessary for better host-side scheduling.
--
Damien Le Moal
Western Digital Research