From: James Bottomley <jejb@linux.ibm.com>
To: John Garry <john.garry@huawei.com>,
	Ming Lei <ming.lei@redhat.com>,
	Bart Van Assche <bvanassche@acm.org>,
	Hannes Reinecke <hare@suse.com>,
	Kashyap Desai <kashyap.desai@broadcom.com>,
	"Martin K . Petersen" <martin.petersen@oracle.com>,
	Christoph Hellwig <hch@lst.de>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	Sathya Prakash <sathya.prakash@broadcom.com>,
	Sreekanth Reddy <sreekanth.reddy@broadcom.com>,
	Suganath Prabu Subramani  <suganath-prabu.subramani@broadcom.com>,
	PDL-MPT-FUSIONLINUX <MPT-FusionLinux.pdl@broadcom.com>
Cc: chenxiang <chenxiang66@hisilicon.com>
Subject: Re: About scsi device queue depth
Date: Mon, 11 Jan 2021 22:35:40 -0800	[thread overview]
Message-ID: <62b562eae9830830d87ea9f92dcc0018a1935583.camel@linux.ibm.com> (raw)
In-Reply-To: <b51fc658-b28a-d627-a2a3-b2835132ab13@huawei.com>

On Mon, 2021-01-11 at 17:11 +0000, John Garry wrote:
> On 11/01/2021 16:40, James Bottomley wrote:
> > > So the initial sdev queue depth comes from cmd_per_lun by
> > > default, or from the driver setting it manually via
> > > scsi_change_queue_depth(). It seems to me that some drivers are
> > > not setting this optimally, as above.
> > > 
> > > Thoughts on guidance for setting sdev queue depth? Could blk-mq
> > > have changed this behavior?
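
For reference, this is roughly what that looks like from the driver
side.  The sketch below is illustrative only: the fields are the
standard scsi_host_template ones, but the driver name, the depth of
64 and the slave_configure body are made up.

#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

/* Illustrative slave_configure: override the cmd_per_lun default
 * with a per-device depth chosen by the driver. */
static int example_slave_configure(struct scsi_device *sdev)
{
	/* 64 is a placeholder; this is the knob under discussion */
	scsi_change_queue_depth(sdev, 64);
	return 0;
}

static struct scsi_host_template example_sht = {
	.name			= "example",
	.can_queue		= 1024,
	.cmd_per_lun		= 32,	/* initial sdev depth if the driver
					 * never overrides it */
	.slave_configure	= example_slave_configure,
	.change_queue_depth	= scsi_change_queue_depth, /* sysfs knob */
	.this_id		= -1,
};
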
> 
> Hi James,
> 
> > In general, for spinning rust, you want the minimum queue depth
> > that keeps the device active, because merging is a very important
> > performance enhancement and, once the drive is fully occupied,
> > simply sending more tags won't improve latency.  We used to
> > recommend a depth of about 4 for this reason.  A co-operative
> > device can help you find the optimal depth by returning QUEUE_FULL
> > when it's fully occupied, so we have a mechanism to track the
> > queue full returns and change the depth interactively.
> > 
> > For high-IOPS devices, these considerations went out of the
> > window, and it's generally assumed (without much supporting
> > evidence) that the more tags, the better.
> 
> For this case, it seems the opposite - less is more. And I seem to
> be hitting closer to the sweet spot there, with more merges.

I think cheaper SSDs have a write latency problem due to erase block
issues.  I suspect all SSDs have a channel problem in that there's a
certain number of parallel channels, and once you go over that number
they can't actually work on any more operations even if they can
queue them.  For cheaper SSDs (as in fewer channels and less spare
erased block capacity) there will be a benefit to reducing the depth
to some multiple of the channel count (I'd guess 2-4 as the
multiplier).  Once an SSD becomes write throttled, there may be less
benefit to us queueing in the block layer (merging produces bigger
requests with lower overhead, but the erase block consumption remains
the same).

For the record, the internet thinks that cheap SSDs have 2-4 channels,
so that would argue for a tag depth somewhere from 4 to 16.
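
Spelling the arithmetic out: 2 channels x 2 = 4 up to 4 channels x 4
= 16 outstanding commands.  If you wanted to encode that heuristic in
a driver it would look something like the sketch below; note there is
no real interface for discovering the channel count, so 'channels'
here is a made-up parameter (module option, guess, whatever).

#include <linux/minmax.h>
#include <scsi/scsi_device.h>

/* Hypothetical heuristic: depth = channels * multiplier, clamped to
 * the 4..16 range argued above. */
static int example_pick_ssd_depth(struct scsi_device *sdev, int channels)
{
	int depth = clamp(channels * 4, 4, 16);

	return scsi_change_queue_depth(sdev, depth);
}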

> > SSDs have a peculiar lifetime problem in that when they get
> > erase block starved they start behaving more like spinning rust in
> > that they reach a processing limit but only for writes, so lowering
> > the write queue depth (which we don't even have a knob for) might
> > be a good solution.  Trying to track the erase block problem has
> > been a constant bugbear.
> 
> I am only doing a read performance test here, and the disks are
> SAS 3.0 SSDs (HUSMM1640ASS204), so not exactly slow.

Possibly ... the stats on most manufacturer SSDs don't give you
information about the channels or spare erase blocks.

> > I'm assuming you're using spinning rust in the above, so it sounds
> > like the firmware in the card might be eating the queue full
> > returns.  I could see this happening in RAID mode, but it shouldn't
> > happen in JBOD mode.
> 
> Not sure on that, but I didn't check too much. I did try to increase
> fio queue depth and sdev queue depth to be very large to clobber the
> disks, but still nothing.

If it's an SSD, it's likely not returning the QUEUE_FULL you'd need
for the mid-layer to throttle automatically.
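
For completeness, the tracking mechanism mentioned further up depends
on the LLDD noticing the TASK SET FULL status and feeding it to
scsi_track_queue_full(), roughly as below.  The function and status
code are the real mid-layer ones; the handler name and the
outstanding-command count are stand-ins for whatever the driver
already keeps.

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_proto.h>

/* Sketch of a completion path reacting to QUEUE_FULL.  If the device
 * never returns TASK SET FULL (as appears to be the case here), this
 * never fires and the mid-layer never ramps the depth down. */
static void example_handle_status(struct scsi_cmnd *cmd, u8 status,
				  int outstanding_cmds)
{
	if (status == SAM_STAT_TASK_SET_FULL)
		/* drop the depth to what the device actually accepted */
		scsi_track_queue_full(cmd->device, outstanding_cmds - 1);
}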

James



Thread overview: 22+ messages
2021-01-11 16:21 About scsi device queue depth John Garry
2021-01-11 16:40 ` James Bottomley
2021-01-11 17:11   ` John Garry
2021-01-12  6:35     ` James Bottomley [this message]
2021-01-12 10:27       ` John Garry
2021-01-12 16:40         ` Bryan Gurney
2021-01-12 16:47         ` James Bottomley
2021-01-12 17:20           ` Bryan Gurney
2021-01-11 17:31   ` Douglas Gilbert
2021-01-13  6:07   ` Martin K. Petersen
2021-01-13  6:36     ` Damien Le Moal
2021-01-12  1:42 ` Ming Lei
2021-01-12  8:56   ` John Garry
2021-01-12  9:06     ` Ming Lei
2021-01-12  9:23       ` John Garry
2021-01-12 11:44         ` Kashyap Desai
2021-01-13 12:17           ` John Garry
2021-01-13 13:34             ` Kashyap Desai
2021-01-13 15:39               ` John Garry
2021-01-12 17:44       ` John Garry
2021-01-12  7:23 ` Hannes Reinecke
2021-01-12  9:15   ` John Garry
