From: Ming Lei <ming.lei@redhat.com>
To: Tim Walker <tim.t.walker@seagate.com>
Cc: linux-block@vger.kernel.org,
	linux-scsi <linux-scsi@vger.kernel.org>,
	linux-nvme@lists.infradead.org
Subject: Re: [LSF/MM/BPF TOPIC] NVMe HDD
Date: Tue, 11 Feb 2020 20:28:21 +0800	[thread overview]
Message-ID: <20200211122821.GA29811@ming.t460p> (raw)
In-Reply-To: <CANo=J14resJ4U1nufoiDq+ULd0k-orRCsYah8Dve-y8uCjA62Q@mail.gmail.com>

On Mon, Feb 10, 2020 at 02:20:10PM -0500, Tim Walker wrote:
> Background:
> 
> NVMe specification has hardened over the decade and now NVMe devices
> are well integrated into our customers’ systems. As we look forward,
> moving HDDs to the NVMe command set eliminates the SAS IOC and driver
> stack, consolidating on a single access method for rotational and
> static storage technologies. PCIe-NVMe offers near-SATA interface
> costs, features and performance suitable for high-cap HDDs, and
> optimal interoperability for storage automation, tiering, and
> management. We will share some early conceptual results and proposed
> salient design goals and challenges surrounding an NVMe HDD.

HDD performance is very sensitive to IO order. Could you provide some
background info about NVMe HDD? Such as:

- number of hw queues
- hw queue depth
- will NVMe sort/merge IO among all SQs or not?
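For the first two, the blk-mq sysfs tree should already answer the
question once a prototype device enumerates, since every blk-mq disk
exposes its hctx directories with a nr_tags attribute. A quick
userspace check could look like the below (untested sketch; "nvme0n1"
is just an example name):

	/*
	 * Walk /sys/block/<dev>/mq/<hctx>/nr_tags to count hardware
	 * queues and report each queue's tag depth. Untested sketch.
	 */
	#include <dirent.h>
	#include <stdio.h>

	int main(int argc, char **argv)
	{
		const char *dev = argc > 1 ? argv[1] : "nvme0n1";
		char path[192], tagf[512];
		struct dirent *de;
		int nr_hw = 0;
		DIR *d;

		snprintf(path, sizeof(path), "/sys/block/%s/mq", dev);
		d = opendir(path);
		if (!d) {
			perror(path);
			return 1;
		}

		while ((de = readdir(d))) {
			unsigned int depth;
			FILE *f;

			if (de->d_name[0] == '.')
				continue;
			snprintf(tagf, sizeof(tagf), "%s/%s/nr_tags",
				 path, de->d_name);
			f = fopen(tagf, "r");
			if (!f)
				continue;	/* not a hctx directory */
			if (fscanf(f, "%u", &depth) == 1)
				printf("hctx %s: depth %u\n",
				       de->d_name, depth);
			fclose(f);
			nr_hw++;
		}
		closedir(d);
		printf("%s: %d hw queue(s)\n", dev, nr_hw);
		return 0;
	}

The third question can't be answered from the host side; it depends on
how the device firmware schedules commands across SQs.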

> 
> 
> Discussion Proposal:
> 
> We’d like to share our views and solicit input on:
> 
> -What Linux storage stack assumptions do we need to be aware of as we
> develop these devices with drastically different performance
> characteristics than traditional NAND? For example, what scheduler or
> device driver level changes will be needed to integrate NVMe HDDs?

IO merge is often important for HDD. IO merge is usually triggered when
.queue_rq() returns BLK_STS_RESOURCE, and so far this condition is never
triggered for NVMe SSD.
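To make the point concrete, the merge path I am referring to looks
roughly like this in a blk-mq driver (sketch only; struct my_dev and
the my_dev_* helpers are made up for illustration):

	#include <linux/blk-mq.h>

	struct my_dev;	/* hypothetical driver-private device state */
	static bool my_dev_get_slot(struct my_dev *dev);	/* hypothetical */
	static void my_dev_submit(struct my_dev *dev,
				  struct request *rq);		/* hypothetical */

	static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
					const struct blk_mq_queue_data *bd)
	{
		struct my_dev *dev = hctx->queue->queuedata;
		struct request *rq = bd->rq;

		if (!my_dev_get_slot(dev)) {
			/*
			 * Out of device resources: blk-mq holds the
			 * request back and re-dispatches it later, and
			 * that window is where the IO scheduler gets
			 * to merge adjacent IO.
			 */
			return BLK_STS_RESOURCE;
		}

		blk_mq_start_request(rq);
		my_dev_submit(dev, rq);
		return BLK_STS_OK;
	}

NVMe SSD practically never takes the BLK_STS_RESOURCE branch because
its SQs are deep enough to accept whatever is dispatched, so that merge
window never opens.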

Also, blk-mq kills BDI queue congestion and ioc batching, which causes a
writeback performance regression [1][2].

What I am wondering is whether we need to switch to an independent IO
path for handling SSD and HDD IO, given the two media are so different
from a performance viewpoint.

[1] https://lore.kernel.org/linux-scsi/Pine.LNX.4.44L0.1909181213141.1507-100000@iolanthe.rowland.org/
[2] https://lore.kernel.org/linux-scsi/20191226083706.GA17974@ming.t460p/


Thanks, 
Ming


Thread overview:
2020-02-10 19:20 [LSF/MM/BPF TOPIC] NVMe HDD Tim Walker
2020-02-10 20:43 ` Keith Busch
2020-02-10 22:25   ` Finn Thain
2020-02-11 12:28 ` Ming Lei [this message]
2020-02-11 19:01   ` Tim Walker
2020-02-12  1:47     ` Damien Le Moal
2020-02-12 22:03       ` Ming Lei
2020-02-13  2:40         ` Damien Le Moal
2020-02-13  7:53           ` Ming Lei
2020-02-13  8:24             ` Damien Le Moal
2020-02-13  8:34               ` Ming Lei
2020-02-13 16:30                 ` Keith Busch
2020-02-14  0:40                   ` Ming Lei
2020-02-13  3:02       ` Martin K. Petersen
2020-02-13  3:12         ` Tim Walker
2020-02-13  4:17           ` Martin K. Petersen
2020-02-14  7:32             ` Hannes Reinecke
2020-02-14 14:40               ` Keith Busch
2020-02-14 16:04                 ` Hannes Reinecke
2020-02-14 17:05                   ` Keith Busch
2020-02-18 15:54                     ` Tim Walker
2020-02-18 17:41                       ` Keith Busch
2020-02-18 17:52                         ` James Smart
2020-02-19  1:31                         ` Ming Lei
2020-02-19  1:53                           ` Damien Le Moal
2020-02-19  2:15                             ` Ming Lei
2020-02-19  2:32                               ` Damien Le Moal
2020-02-19  2:56                                 ` Tim Walker
2020-02-19 16:28                                   ` Tim Walker
2020-02-19 20:50                                     ` Keith Busch
2020-02-14  0:35         ` Ming Lei
2020-02-12 21:52     ` Ming Lei
