From: Hannes Reinecke <hare@suse.de>
To: John Garry <john.garry@huawei.com>,
	Ming Lei <ming.lei@redhat.com>,
	Bart Van Assche <bvanassche@acm.org>,
	Hannes Reinecke <hare@suse.com>,
	Kashyap Desai <kashyap.desai@broadcom.com>,
	"Martin K . Petersen" <martin.petersen@oracle.com>,
	"James E.J. Bottomley" <jejb@linux.ibm.com>,
	Christoph Hellwig <hch@lst.de>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	Sathya Prakash <sathya.prakash@broadcom.com>,
	Sreekanth Reddy <sreekanth.reddy@broadcom.com>,
	Suganath Prabu Subramani  <suganath-prabu.subramani@broadcom.com>,
	PDL-MPT-FUSIONLINUX <MPT-FusionLinux.pdl@broadcom.com>
Cc: chenxiang <chenxiang66@hisilicon.com>
Subject: Re: About scsi device queue depth
Date: Tue, 12 Jan 2021 08:23:36 +0100	[thread overview]
Message-ID: <2b9a90c4-17e6-4935-bf3f-4bef54de27cc@suse.de> (raw)
In-Reply-To: <9ff894da-cf2c-9094-2690-1973cc57835a@huawei.com>

On 1/11/21 5:21 PM, John Garry wrote:
> Hi,
> 
> I was looking into an IOMMU issue on an LSI RAID 3008 card and noticed 
> that performance there is lower than what I get on other SAS HBAs.
> 
> After some debugging and fiddling with the sdev queue depth in the 
> mpt3sas driver, I am finding that performance changes appreciably with 
> the sdev queue depth:
> 
> sdev qdepth \ fio numjobs        1      10      20
> 16                            1590    1654    1660
> 32                            1545    1646    1654
> 64                            1436    1085    1070
> 254 (default)                 1436    1070    1050
> 
> fio queue depth is 40, and I'm using 12x SAS SSDs.
> 
> I got a comparable disparity in results for fio queue depth = 128 and 
> numjobs = 1:
> 
> sdev qdepth \ fio numjobs        1
> 16                            1640
> 32                            1618
> 64                            1577
> 254 (default)                 1437
> 
> IO sched = none.
> 
> That driver also sets queue depth tracking = 1, but it never seems to 
> kick in.
> 
> So it seems to me that the block layer is merging more bios per 
> request, as the average sg count per request goes up from 1 to 6 or 
> more. As I see it, when the queue depth is lowered, the only thing that 
> really changes is that we fail more often to get the budget in 
> scsi_mq_get_budget() -> scsi_dev_queue_ready() (sketched below the quote).
> 
> So the initial sdev queue depth comes from cmd_per_lun by default, or 
> from the driver setting it explicitly via scsi_change_queue_depth() 
> (see the second sketch below the quote). It seems to me that some 
> drivers are not setting this optimally, as above.
> 
> Any thoughts on guidance for setting the sdev queue depth? Could blk-mq 
> have changed this behavior?
> 
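
To make the budget point above concrete, here is a deliberately simplified 
sketch of what scsi_mq_get_budget() -> scsi_dev_queue_ready() boils down 
to. This is not the kernel source; fake_sdev and fake_dev_queue_ready are 
made-up names, and it only shows the shape of the logic as I understand it:

        /*
         * Simplified sketch only -- not the actual kernel code.
         */
        #include <stdatomic.h>
        #include <stdbool.h>

        struct fake_sdev {                      /* stand-in for struct scsi_device */
                atomic_int device_busy;         /* commands currently owned by the LLD */
                int queue_depth;                /* the tunable being discussed */
        };

        /* Grant budget for one more command, or refuse and back out. */
        static bool fake_dev_queue_ready(struct fake_sdev *sdev)
        {
                int busy = atomic_fetch_add(&sdev->device_busy, 1);

                if (busy >= sdev->queue_depth) {
                        /* No budget: the request stays queued in blk-mq ... */
                        atomic_fetch_sub(&sdev->device_busy, 1);
                        return false;
                }
                return true;    /* ... otherwise it is dispatched right away. */
        }

Every time this refuses, the not-yet-dispatched requests sit on the blk-mq 
queues a little longer, so later bios still have something to merge into. 
At a depth of 254 the check practically never refuses, requests are 
dispatched immediately, and each one carries fewer merged bios - which 
matches the sg counts you measured.
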
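As for where the depth comes from, a hypothetical driver fragment (not 
mpt3sas; my_slave_configure, MYDRV_SDEV_QUEUE_DEPTH and the numbers are 
invented, while .cmd_per_lun, .slave_configure and 
scsi_change_queue_depth() are the real mid-layer hooks in question):

        /* Hypothetical driver fragment, for illustration only. */
        #include <scsi/scsi_device.h>
        #include <scsi/scsi_host.h>

        #define MYDRV_SDEV_QUEUE_DEPTH  32      /* assumed per-LUN depth */

        static int my_slave_configure(struct scsi_device *sdev)
        {
                /* Override the cmd_per_lun default with an explicit per-LUN depth. */
                scsi_change_queue_depth(sdev, MYDRV_SDEV_QUEUE_DEPTH);
                return 0;
        }

        static struct scsi_host_template mydrv_sht = {
                .name                   = "mydrv",
                .cmd_per_lun            = 16,   /* sdev depth used if nothing overrides it */
                .slave_configure        = my_slave_configure,
                .change_queue_depth     = scsi_change_queue_depth, /* honour sysfs writes */
                /* .queuecommand etc. omitted */
        };

Whatever depth is left in place after this is exactly the limit the budget 
check above enforces.
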
First of all: are these 'real' SAS SSDs?
The peak at 32 seems very ATA-ish, and I wouldn't put it past the LSI 
folks to optimize for that case :-)
Can you get a more detailed picture by changing the queue depth in a more 
fine-grained way, say with a sweep like the one sketched below?
(It will get you nicer graphs, to boot. :-)
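
Purely illustrative - /dev/sdb, the depth list and the fio job file 
randread.fio are assumptions on my part, not anything from this thread:

        /* Sweep sdev queue depth via sysfs and run one fio pass per value. */
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
                static const int depths[] = { 8, 16, 24, 32, 48, 64, 96, 128, 192, 254 };
                char cmd[128];
                unsigned int i;

                for (i = 0; i < sizeof(depths) / sizeof(depths[0]); i++) {
                        FILE *f = fopen("/sys/block/sdb/device/queue_depth", "w");

                        if (!f) {
                                perror("queue_depth");
                                return EXIT_FAILURE;
                        }
                        fprintf(f, "%d", depths[i]);
                        fclose(f);

                        /* One fio run per depth; plot IOPS against depth afterwards. */
                        snprintf(cmd, sizeof(cmd),
                                 "fio --output=qd%03d.log randread.fio", depths[i]);
                        if (system(cmd) != 0)
                                fprintf(stderr, "fio failed at depth %d\n", depths[i]);
                }
                return 0;
        }

Writing the sysfs queue_depth attribute only sticks when the driver wires 
up ->change_queue_depth, which mpt3sas does as far as I can see.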

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

Thread overview: 22+ messages
2021-01-11 16:21 About scsi device queue depth John Garry
2021-01-11 16:40 ` James Bottomley
2021-01-11 17:11   ` John Garry
2021-01-12  6:35     ` James Bottomley
2021-01-12 10:27       ` John Garry
2021-01-12 16:40         ` Bryan Gurney
2021-01-12 16:47         ` James Bottomley
2021-01-12 17:20           ` Bryan Gurney
2021-01-11 17:31   ` Douglas Gilbert
2021-01-13  6:07   ` Martin K. Petersen
2021-01-13  6:36     ` Damien Le Moal
2021-01-12  1:42 ` Ming Lei
2021-01-12  8:56   ` John Garry
2021-01-12  9:06     ` Ming Lei
2021-01-12  9:23       ` John Garry
2021-01-12 11:44         ` Kashyap Desai
2021-01-13 12:17           ` John Garry
2021-01-13 13:34             ` Kashyap Desai
2021-01-13 15:39               ` John Garry
2021-01-12 17:44       ` John Garry
2021-01-12  7:23 ` Hannes Reinecke [this message]
2021-01-12  9:15   ` John Garry
