From: Keith Busch <kbusch@kernel.org>
To: Hannes Reinecke <hare@suse.de>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>,
	Tim Walker <tim.t.walker@seagate.com>,
	Damien Le Moal <Damien.LeMoal@wdc.com>,
	Ming Lei <ming.lei@redhat.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	linux-scsi <linux-scsi@vger.kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: [LSF/MM/BPF TOPIC] NVMe HDD
Date: Sat, 15 Feb 2020 02:05:14 +0900
Message-ID: <20200214170514.GA10757@redsun51.ssa.fujisawa.hgst.com>
In-Reply-To: <d043a58d-6584-1792-4433-ac2cc39526ca@suse.de>

On Fri, Feb 14, 2020 at 05:04:25PM +0100, Hannes Reinecke wrote:
> On 2/14/20 3:40 PM, Keith Busch wrote:
> > On Fri, Feb 14, 2020 at 08:32:57AM +0100, Hannes Reinecke wrote:
> > > On 2/13/20 5:17 AM, Martin K. Petersen wrote:
> > > > People often artificially lower the queue depth to avoid timeouts. The
> > > > default timeout is 30 seconds from the time an I/O is queued. However,
> > > > many enterprise applications set the timeout to 3-5 seconds, which means
> > > > that with deep queues you'll quickly start seeing timeouts if a drive is
> > > > temporarily having issues keeping up (media errors, excessive spare
> > > > track seeks, etc.).
> > > > 
> > > > Well-behaved devices will return QF/TSF if they have transient resource
> > > > starvation or exceed internal QoS limits. QF will cause the SCSI stack
> > > > to reduce the number of I/Os in flight. This allows the drive to recover
> > > > from its congested state and reduces the potential of application and
> > > > filesystem timeouts.
> > > > 
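(For reference, the host-side reaction Martin describes could look roughly
like the sketch below. scsi_track_queue_full() is the real midlayer helper;
example_handle_queue_full() is a hypothetical LLDD completion hook, and the
ramp-down policy shown is just one possible choice.)

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>

/*
 * Hypothetical LLDD completion hook, invoked when the target
 * returns QUEUE FULL / TASK SET FULL status for a command.
 */
static void example_handle_queue_full(struct scsi_cmnd *cmd)
{
	struct scsi_device *sdev = cmd->device;

	/*
	 * Let the midlayer track successive queue-full events and,
	 * if the condition persists, ramp the device queue depth
	 * down so fewer commands are kept in flight.
	 */
	scsi_track_queue_full(sdev, sdev->queue_depth - 1);
}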
> > > This may even be a chance to revisit QoS / queue busy handling.
> > > NVMe has this SQ head pointer mechanism which was supposed to handle
> > > this kind of situation, but to my knowledge no one has implemented it.
> > > Might be worthwhile revisiting it; I guess NVMe HDDs would profit from
> > > that.
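(Concretely: every NVMe completion queue entry reports the controller's
current SQ head, so a host could in principle derive SQ occupancy from it.
A minimal sketch of the arithmetic; nvme_sq_space() is a made-up helper name,
and one slot is reserved so that head == tail unambiguously means "empty".)

#include <linux/types.h>

/*
 * Free slots in a submission queue of size qsize, given the
 * controller-reported head from the most recent CQE and the
 * host's current tail. Standard ring-buffer arithmetic.
 */
static inline u16 nvme_sq_space(u16 sq_head, u16 sq_tail, u16 qsize)
{
	return (sq_head + qsize - sq_tail - 1) % qsize;
}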
> > 
> > We don't need that because we don't allocate enough tags to potentially
> > wrap the tail past the head. If you can allocate a tag, the queue is not
> > full. And conversely, no tag == queue full.
> > 
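(In other words, the guarantee comes from sizing, not from tracking the head.
A simplified sketch of the idea; the real sizing logic in the PCIe driver has
more to it, so treat this as illustrative only.)

#include <linux/blk-mq.h>

/*
 * Illustrative only: size the blk-mq tag set one below the
 * hardware SQ depth. With at most (q_depth - 1) commands ever
 * outstanding, the SQ tail can never wrap onto the head, so
 * "got a tag" is exactly equivalent to "SQ slot available".
 */
static void example_size_tagset(struct blk_mq_tag_set *set, u32 q_depth)
{
	set->queue_depth = q_depth - 1;
}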
> It's not a problem on our side.
> It's a problem on the target/controller side.
> The target/controller might have a need to throttle I/O (due to QoS settings
> or competing resources from other hosts), but currently has no means of
> signalling that to the host.
> Which, incidentally, is the underlying reason for the DNR handling
> discussion we had; NetApp tried to model QoS by sending "Namespace not
> ready" without the DNR bit set, which of course is a totally different
> use-case from the typical 'Namespace not ready' response we get (with the DNR
> bit set) when a namespace was unmapped.
> 
> And that is where SQ head pointer updates come in; it would allow the
> controller to signal back to the host that it should hold off sending I/O
> for a bit.
> So this could be used for NVMe HDDs, too, which might also have a
> need to signal back to the host that I/Os should be throttled...

Okay, I see. I think this needs a new NVMe AER notice as Martin
suggested. The desired host behavior is similar to what we do with a
"firmware activation notice", where we temporarily quiesce new requests
and reset I/O timeouts for previously dispatched requests. Perhaps tie
this to the CSTS.PP register as well.
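To be concrete (nothing like this exists in the spec today, so the "throttle"
notice and the throttle_work field below are purely hypothetical), the handler
could be patterned on the existing firmware-activation path:

#include <linux/kernel.h>
#include <linux/workqueue.h>

#include "nvme.h"	/* struct nvme_ctrl, from drivers/nvme/host/ */

/*
 * Hypothetical work item for a (not yet existing) "throttle"
 * AER notice: quiesce new submissions, let previously dispatched
 * commands run with fresh timeouts, and resume once the
 * controller deasserts the condition (e.g. via CSTS.PP).
 */
static void nvme_throttle_work(struct work_struct *work)
{
	struct nvme_ctrl *ctrl =
		container_of(work, struct nvme_ctrl, throttle_work);

	nvme_stop_queues(ctrl);		/* quiesce new requests */
	/* ... poll CSTS.PP or a similar flag until it clears ... */
	nvme_start_queues(ctrl);	/* resume dispatch */
}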
