From: John Garry <john.garry@huawei.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Kashyap Desai <kashyap.desai@broadcom.com>,
	<linux-block@vger.kernel.org>, <linux-scsi@vger.kernel.org>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	Jens Axboe <axboe@kernel.dk>,
	Douglas Gilbert <dgilbert@interlog.com>,
	Hannes Reinecke <hare@suse.com>
Subject: Re: [bug report] shared tags causes IO hang and performance drop
Date: Mon, 26 Apr 2021 11:53:45 +0100
Message-ID: <c1d5abaa-c460-55f8-5351-16f09d6aa81f@huawei.com>
In-Reply-To: <0c85fe52-ebc7-68b3-2dbe-dfad5d604346@huawei.com>

On 23/04/2021 09:43, John Garry wrote:
>> 1) randread test on ibm-x3850x6[*] with deadline
>>
>>                |IOPS    | FIO CPU util
>> ------------------------------------------------
>> hosttags      | 94k    | usr=1.13%, sys=14.75%
>> ------------------------------------------------
>> non hosttags  | 124k   | usr=1.12%, sys=10.65%
>>
> 
> Getting these results for mq-deadline:
> 
> hosttags: 100K IOPS, usr=1.52%, sys=4.47%
> 
> non-hosttags: 109K IOPS, usr=1.74%, sys=5.49%
> 
> So I still don't see the same CPU usage increase for hosttags.
> 
> But throughput is down, so at least I can check on that...
> 
>>
>> 2) randread test on ibm-x3850x6[*] with none
>>                |IOPS    | FIO CPU util
>> ------------------------------------------------
>> hosttags      | 120k   | usr=0.89%, sys=6.55%
>> ------------------------------------------------
>> non hosttags  | 121k   | usr=1.07%, sys=7.35%
>> ------------------------------------------------
>>
> 
> Here I get:
> hosttags: 113K IOPS, usr=2.04%, sys=5.83%
> 
> non-hosttags: 108K IOPS, usr=1.71%, sys=5.05%

Hi Ming,

One thing I noticed for the non-hosttags scenario is that I am often 
hitting the IO scheduler tag exhaustion path in blk_mq_get_tag(); 
here's some perf output:

|--15.88%--blk_mq_submit_bio
|     |
|     |--11.27%--__blk_mq_alloc_request
|     |      |
|     |       --11.19%--blk_mq_get_tag
|     |      |
|     |      |--6.00%--__blk_mq_delay_run_hw_queue
|     |      |     |

...

|     |      |
|     |      |--3.29%--io_schedule
|     |      |     |

....

|     |      |     |
|     |      |     --1.32%--io_schedule_prepare
|     |      |

...

|     |      |
|     |      |--0.60%--sbitmap_finish_wait
|     |      |
|     |       --0.56%--sbitmap_get

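To make those hits concrete - this is not the kernel implementation, 
just a minimal user-space analogue of a blocking tag pool - a 
submitter that finds the pool exhausted has to sleep until a tag is 
freed, which is the io_schedule() showing up in the profile:

/*
 * Not kernel code: toy blocking tag pool. DEPTH is a made-up
 * number for illustration only.
 */
#include <pthread.h>

#define DEPTH 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t waitq = PTHREAD_COND_INITIALIZER;
static int free_tags = DEPTH;

static int get_tag(void)
{
	pthread_mutex_lock(&lock);
	while (free_tags == 0)
		/* pool exhausted: sleep, like io_schedule() above */
		pthread_cond_wait(&waitq, &lock);
	int tag = --free_tags;
	pthread_mutex_unlock(&lock);
	return tag;
}

static void put_tag(void)
{
	pthread_mutex_lock(&lock);
	free_tags++;
	/* wake a waiter, like the sbitmap wakeup on tag free */
	pthread_cond_signal(&waitq);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	int tag = get_tag();	/* pool not empty, returns immediately */
	put_tag();
	return tag < 0;
}

Each of those io_schedule() samples is a submitter parked like that, 
so the extra sleeps and wakeups alone could account for some of the 
CPU and IOPS delta.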
I don't see this exhaustion for hostwide tags - that may be because we 
have multiple hctx, and the IO sched tags are per hctx, so there is 
less chance of exhaustion. But this is not specific to hostwide tags; 
it would apply to multiple HW queues in general. As I understood it, 
sched tags were meant to be per request queue, right? Am I reading 
that correctly?
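
As a quick sanity check on that reasoning, here is a crude user-space 
Monte Carlo sketch - NR_HCTX, DEPTH and INFLIGHT are all invented 
numbers, not taken from either test setup - comparing one shared sched 
tag pool against per-hctx pools of the same per-pool depth:

/*
 * Not kernel code: crude simulation of sched tag exhaustion.
 * All parameters are invented for illustration only.
 */
#include <stdio.h>
#include <stdlib.h>

#define NR_HCTX  16	/* hypothetical number of hw queues */
#define DEPTH    64	/* hypothetical sched tag depth per pool */
#define INFLIGHT 120	/* hypothetical concurrently queued requests */
#define TRIALS   100000

int main(void)
{
	int shared_block = 0, perhctx_block = 0;

	for (int t = 0; t < TRIALS; t++) {
		int count[NR_HCTX] = { 0 };

		/* spread the in-flight requests uniformly over hctxs */
		for (int i = 0; i < INFLIGHT; i++)
			count[rand() % NR_HCTX]++;

		/* one shared pool of depth DEPTH: blocks whenever
		 * total demand exceeds it */
		if (INFLIGHT > DEPTH)
			shared_block++;

		/* per-hctx pools: a pool blocks only if that hctx's
		 * share alone exceeds DEPTH */
		for (int h = 0; h < NR_HCTX; h++) {
			if (count[h] > DEPTH) {
				perhctx_block++;
				break;
			}
		}
	}

	printf("shared pool blocked:    %d/%d trials\n",
	       shared_block, TRIALS);
	printf("per-hctx pools blocked: %d/%d trials\n",
	       perhctx_block, TRIALS);
	return 0;
}

With numbers in that ballpark the shared pool blocks in every trial 
while the per-hctx pools essentially never do, which matches only 
seeing the exhaustion path with a single pool. The flip side is that 
per-hctx pools give the scheduler NR_HCTX times as many tags in 
total, which is what makes me question whether they shouldn't be per 
request queue instead.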

I can barely remember some debate on this, but could not find the 
thread. Hannes did have a patch related to this topic, but it was 
dropped:
https://lore.kernel.org/linux-scsi/20191202153914.84722-7-hare@suse.de/#t

Thanks,
John




Thread overview: 38+ messages
2021-04-14  7:50 [bug report] shared tags causes IO hang and performance drop Ming Lei
2021-04-14 10:10 ` John Garry
2021-04-14 10:38   ` Ming Lei
2021-04-14 10:42   ` Kashyap Desai
2021-04-14 11:12     ` Ming Lei
2021-04-14 12:06       ` John Garry
2021-04-15  3:46         ` Ming Lei
2021-04-15 10:41           ` John Garry
2021-04-15 12:18             ` Ming Lei
2021-04-15 15:41               ` John Garry
2021-04-16  0:46                 ` Ming Lei
2021-04-16  8:29                   ` John Garry
2021-04-16  8:39                     ` Ming Lei
2021-04-16 14:59                       ` John Garry
2021-04-20  3:06                         ` Douglas Gilbert
2021-04-20  3:22                           ` Bart Van Assche
2021-04-20  4:54                             ` Douglas Gilbert
2021-04-20  6:52                               ` Ming Lei
2021-04-20 20:22                                 ` Douglas Gilbert
2021-04-21  1:40                                   ` Ming Lei
2021-04-23  8:43           ` John Garry
2021-04-26 10:53             ` John Garry [this message]
2021-04-26 14:48               ` Ming Lei
2021-04-26 15:52                 ` John Garry
2021-04-26 16:03                   ` Ming Lei
2021-04-26 17:02                     ` John Garry
2021-04-26 23:59                       ` Ming Lei
2021-04-27  7:52                         ` John Garry
2021-04-27  9:11                           ` Ming Lei
2021-04-27  9:37                             ` John Garry
2021-04-27  9:52                               ` Ming Lei
2021-04-27 10:15                                 ` John Garry
2021-07-07 17:06                                 ` John Garry
2021-04-14 13:59       ` Kashyap Desai
2021-04-14 17:03         ` Douglas Gilbert
2021-04-14 18:19           ` John Garry
2021-04-14 19:39             ` Douglas Gilbert
2021-04-15  0:58         ` Ming Lei
