From: John Garry <john.garry@huawei.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>, <linux-block@vger.kernel.org>,
"Bart Van Assche" <bvanassche@acm.org>,
Hannes Reinecke <hare@suse.com>, "Christoph Hellwig" <hch@lst.de>,
Thomas Gleixner <tglx@linutronix.de>,
Keith Busch <keith.busch@intel.com>
Subject: Re: [PATCH V4 0/5] blk-mq: improvement on handling IO during CPU hotplug
Date: Tue, 22 Oct 2019 12:19:17 +0100 [thread overview]
Message-ID: <4a42a062-8bca-9e03-c158-1c149986d383@huawei.com> (raw)
In-Reply-To: <20191022001613.GA32193@ming.t460p>
On 22/10/2019 01:16, Ming Lei wrote:
> On Mon, Oct 21, 2019 at 03:02:56PM +0100, John Garry wrote:
>> On 21/10/2019 13:53, Ming Lei wrote:
>>> On Mon, Oct 21, 2019 at 12:49:53PM +0100, John Garry wrote:
>>>>>>>
>>>>>>
>>>>>> Yes, we share tags among all queues, but we generate the tag - known as IPTT
>>>>>> - in the LLDD now, as we can no longer use the request tag (as it is not
>>>>>> unique per all queues):
>>>>>>
>>>>>> https://github.com/hisilicon/kernel-dev/commit/087b95af374be6965583c1673032fb33bc8127e8#diff-f5d8fff19bc539a7387af5230d4e5771R188
>>>>>>
>>>>>> As I said, the branch is messy and I did have to fix 087b95af374.
>>>>>
>>>>> Firstly, this way may waste lots of memory, especially when the queue
>>>>> depth is big; for example, hisilicon V3's queue depth is 4096.
>>>>>
>>>>> Secondly, you have to deal with queue busy efficiently and correctly;
>>>>> for example, your real hw tags (IPTT) can be used up easily, and how
>>>>> will you handle the requests already dispatched?
>>>>
>>>> I have not seen a scenario of exhausted IPTT. And the IPTT count is the
>>>> same as SCSI host.can_queue, so the SCSI midlayer should ensure that
>>>> this does not occur.
>>>
>>
>> Hi Ming,
Hi Ming,
>>
>>> That check isn't correct, and each hw queue should have allowed
>>> .can_queue in-flight requests.
>>
>> There always seems to be some confusion or disagreement on this topic.
>>
>> I work according to the comment in scsi_host.h:
>>
>> "Note: it is assumed that each hardware queue has a queue depth of
>> can_queue. In other words, the total queue depth per host
>> is nr_hw_queues * can_queue."
>>
>> So I set Scsi_host.can_queue = HISI_SAS_MAX_COMMANDS (=4096)
>
> I believe all current drivers set .can_queue as a single hw queue's depth.
> If you set .can_queue to HISI_SAS_MAX_COMMANDS, which is the HBA's queue
> depth, the hisilicon sas driver will allow HISI_SAS_MAX_COMMANDS *
> nr_hw_queues in-flight requests.
Yeah, but the SCSI host should still limit the max number of IOs over all
queues to .can_queue:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/scsi/scsi_mid_low_api.txt#n1083
>
>>
>>>
>>>>
>>>>>
>>>>> Finally, you have to evaluate the performance effect; this is highly
>>>>> related to how you deal with running out of IPTT.
>>>>
>>>> Some figures from our previous testing:
>>>>
>>>> Managed interrupt without exposing multiple queues: 3M IOPS
>>>> Managed interrupt with exposing multiple queues: 2.6M IOPS
>>>
>>> Then you see the performance regression.
>>
>> Let's discuss this when I send the patches, so we don't get sidetracked on
>> this blk-mq improvement topic.
>
> OK, what I meant is to use the correct driver to test the patches;
> otherwise it might be hard to investigate.
Of course. I'm working on this now, and it looks like it will turn out
complicated... you'll see.
BTW, I reran the test and never saw the new WARN trigger (while SCSI
timeouts did occur).
Thanks again,
John
>
>
> Thanks,
> Ming
>
> .
>
Thread overview: 32+ messages
2019-10-14 1:50 [PATCH V4 0/5] blk-mq: improvement on handling IO during CPU hotplug Ming Lei
2019-10-14 1:50 ` [PATCH V4 1/5] blk-mq: add new state of BLK_MQ_S_INTERNAL_STOPPED Ming Lei
2019-10-14 1:50 ` [PATCH V4 2/5] blk-mq: prepare for draining IO when hctx's all CPUs are offline Ming Lei
2019-10-14 1:50 ` [PATCH V4 3/5] blk-mq: stop to handle IO and drain IO before hctx becomes dead Ming Lei
2019-11-28 9:29 ` John Garry
2019-10-14 1:50 ` [PATCH V4 4/5] blk-mq: re-submit IO in case that hctx is dead Ming Lei
2019-10-14 1:50 ` [PATCH V4 5/5] blk-mq: handle requests dispatched from IO scheduler " Ming Lei
2019-10-16 8:58 ` [PATCH V4 0/5] blk-mq: improvement on handling IO during CPU hotplug John Garry
2019-10-16 12:07 ` Ming Lei
2019-10-16 16:19 ` John Garry
[not found] ` <55a84ea3-647d-0a76-596c-c6c6b2fc1b75@huawei.com>
2019-10-20 10:14 ` Ming Lei
2019-10-21 9:19 ` John Garry
2019-10-21 9:34 ` Ming Lei
2019-10-21 9:47 ` John Garry
2019-10-21 10:24 ` Ming Lei
2019-10-21 11:49 ` John Garry
2019-10-21 12:53 ` Ming Lei
2019-10-21 14:02 ` John Garry
2019-10-22 0:16 ` Ming Lei
2019-10-22 11:19 ` John Garry [this message]
2019-10-22 13:45 ` Ming Lei
2019-10-25 16:33 ` John Garry
2019-10-28 10:42 ` Ming Lei
2019-10-28 11:55 ` John Garry
2019-10-29 1:50 ` Ming Lei
2019-10-29 9:22 ` John Garry
2019-10-29 10:05 ` Ming Lei
2019-10-29 17:54 ` John Garry
2019-10-31 16:28 ` John Garry
2019-11-28 1:09 ` chenxiang (M)
2019-11-28 2:02 ` Ming Lei
2019-11-28 10:45 ` John Garry