linux-scsi.vger.kernel.org archive mirror
From: John Garry <john.garry@huawei.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Bart Van Assche <bvanassche@acm.org>,
	Hannes Reinecke <hare@suse.com>,
	Kashyap Desai <kashyap.desai@broadcom.com>,
	"Martin K . Petersen" <martin.petersen@oracle.com>,
	"James E.J. Bottomley" <jejb@linux.ibm.com>,
	Christoph Hellwig <hch@lst.de>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	Sathya Prakash <sathya.prakash@broadcom.com>,
	Sreekanth Reddy <sreekanth.reddy@broadcom.com>,
	Suganath Prabu Subramani <suganath-prabu.subramani@broadcom.com>,
	PDL-MPT-FUSIONLINUX <MPT-FusionLinux.pdl@broadcom.com>,
	chenxiang <chenxiang66@hisilicon.com>
Subject: Re: About scsi device queue depth
Date: Tue, 12 Jan 2021 08:56:45 +0000	[thread overview]
Message-ID: <4b50f067-a368-2197-c331-a8c981f5cd02@huawei.com> (raw)
In-Reply-To: <20210112014203.GA60605@T590>

Hi Ming,

>>
>> I was looking at some IOMMU issue on an LSI RAID 3008 card, and noticed
>> that performance there is not what I get on other SAS HBAs - it's lower.
>>
>> After some debugging and fiddling with the sdev queue depth in the
>> mpt3sas driver, I am finding that performance changes appreciably with
>> sdev queue depth:
>>
>> sdev qdepth \ fio numjobs	1	10	20
>> 16				1590	1654	1660
>> 32				1545	1646	1654
>> 64				1436	1085	1070
>> 254 (default)			1436	1070	1050
> 
> What do the performance numbers mean - IOPS or something else? And what
> is the fio test - random or sequential IO?

So those figures are read IOPS in thousands; 1590, above, means 1.59M 
read IOPS. Here's the fio script:

[global]
rw=read
direct=1
ioengine=libaio
iodepth=40
numjobs=20
bs=4k
;size=10240000m
;zero_buffers=1
group_reporting=1
;ioscheduler=noop
;cpumask=0xffe
;cpus_allowed=1-47
;gtod_reduce=1
;iodepth_batch=2
;iodepth_batch_complete=2
runtime=60
;thread
loops=10000
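
For context on the load: numjobs=20 with iodepth=40 means up to 800 IOs 
outstanding, so assuming fio spreads the jobs evenly over the 12 SSDs 
that is roughly 800 / 12, or about 66 per device. A sdev queue depth of 
16 or 32 therefore throttles each device well below what fio is 
offering, while 64 or 254 barely throttles at all.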

>>
>> fio queue depth is 40, and I'm using 12x SAS SSDs.
>>
>> I saw a comparable disparity in results for fio queue depth = 128 and
>> numjobs = 1:
>>
>> sdev qdepth \ fio numjobs	1
>> 16				1640
>> 32				1618
>> 64				1577
>> 254 (default)			1437
>>
>> IO sched = none.
>>
>> That driver also sets queue depth tracking (track_queue_depth = 1), but
>> it never seems to kick in.
>>
>> So it seems to me that the block layer is merging more bios per request,
>> as the average sg count per request goes up from 1 to 6 or more. As I
>> see it, when the queue depth is lowered, the only thing that really
>> changes is that we fail more often to get the budget in
>> scsi_mq_get_budget()->scsi_dev_queue_ready().
> 
> Right, the behavior basically doesn't change compared with the legacy
> block IO path. And that is why sdev->queue_depth is somewhat important
> for HDDs.

OK
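
To make that concrete: the gating is essentially this check (a 
simplified sketch of scsi_dev_queue_ready() in drivers/scsi/scsi_lib.c 
- the real function also handles the device_blocked unblock path):

static bool scsi_dev_queue_ready(struct request_queue *q,
				 struct scsi_device *sdev)
{
	/* count this request as in flight on the sdev */
	int busy = atomic_inc_return(&sdev->device_busy) - 1;

	/* no budget once we reach the per-device queue depth */
	if (busy >= sdev->queue_depth) {
		atomic_dec(&sdev->device_busy);
		return false;
	}
	return true;
}

So once device_busy reaches sdev->queue_depth, dispatch gets no budget 
and requests sit longer in the submission path, where bios get more 
chance to merge - which would explain the higher sg count per request 
at the lower depths.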

> 
>>
>> So the initial sdev queue depth comes from cmd_per_lun by default, or
>> from the driver setting it explicitly via scsi_change_queue_depth(). It
>> seems to me that some drivers are not setting this optimally, as above.
>>
>> Any thoughts on guidance for setting sdev queue depth? Could blk-mq
>> have changed this behavior?
> 
> So far, the sdev queue depth is provided by the SCSI layer, and blk-mq
> can queue a request only once budget is obtained via .get_budget().
> 

Well, based on my testing, the default sdev queue depth seems too large 
for that LLDD ...
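
If that conclusion holds, the fix belongs in the LLDD. As a sketch only 
- my_slave_configure and MY_SDEV_QDEPTH are made-up names, and 32 just 
comes from my numbers above - a driver can cap the per-LUN depth in its 
->slave_configure() hook via the midlayer helper:

#include <scsi/scsi_device.h>

/* Hypothetical LLDD hook: cap the per-LUN queue depth at probe time.
 * scsi_change_queue_depth() is the midlayer API for this; the value
 * 32 is only what the testing above suggests for this HBA. */
#define MY_SDEV_QDEPTH	32

static int my_slave_configure(struct scsi_device *sdev)
{
	scsi_change_queue_depth(sdev, MY_SDEV_QDEPTH);
	return 0;
}

For experimenting without touching the driver, writing to the sdev's 
queue_depth sysfs attribute ends up in the same 
scsi_change_queue_depth() path, at least for drivers like mpt3sas that 
implement ->change_queue_depth().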

Thanks,
John



Thread overview: 22+ messages
2021-01-11 16:21 About scsi device queue depth John Garry
2021-01-11 16:40 ` James Bottomley
2021-01-11 17:11   ` John Garry
2021-01-12  6:35     ` James Bottomley
2021-01-12 10:27       ` John Garry
2021-01-12 16:40         ` Bryan Gurney
2021-01-12 16:47         ` James Bottomley
2021-01-12 17:20           ` Bryan Gurney
2021-01-11 17:31   ` Douglas Gilbert
2021-01-13  6:07   ` Martin K. Petersen
2021-01-13  6:36     ` Damien Le Moal
2021-01-12  1:42 ` Ming Lei
2021-01-12  8:56   ` John Garry [this message]
2021-01-12  9:06     ` Ming Lei
2021-01-12  9:23       ` John Garry
2021-01-12 11:44         ` Kashyap Desai
2021-01-13 12:17           ` John Garry
2021-01-13 13:34             ` Kashyap Desai
2021-01-13 15:39               ` John Garry
2021-01-12 17:44       ` John Garry
2021-01-12  7:23 ` Hannes Reinecke
2021-01-12  9:15   ` John Garry
