linux-kernel.vger.kernel.org archive mirror
From: Jens Axboe <axboe@kernel.dk>
To: "van der Linden, Frank" <fllinden@amazon.com>,
	"jianchao.wang" <jianchao.w.wang@oracle.com>,
	Anchal Agarwal <anchalag@amzn.com>
Cc: "linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] blk-wbt: get back the missed wakeup from __wbt_done
Date: Fri, 24 Aug 2018 10:44:22 -0600	[thread overview]
Message-ID: <c2e96743-11a1-3e7e-0563-9789c5443f41@kernel.dk> (raw)
In-Reply-To: <347a7a07dc5f4122a37afd703ef2a3d0@EX13D13UWB002.ant.amazon.com>

On 8/24/18 10:40 AM, van der Linden, Frank wrote:
> On 8/23/18 10:56 PM, jianchao.wang wrote:
>>
>> On 08/24/2018 07:14 AM, Jens Axboe wrote:
>>> On 8/23/18 5:03 PM, Jens Axboe wrote:
>>>>> Hi Jens, this patch looks much cleaner for sure, as Frank pointed
>>>>> out too. Basically it looks similar to wake_up_nr, only making sure
>>>>> that the woken-up requests won't get reordered. This does solve the
>>>>> thundering herd issue. However, when I tested the patch against my
>>>>> application, lock contention rose to around 10 times what I had with
>>>>> your last 3 patches. It also added a drop of 0.12% in total files
>>>>> read and of 0.02% in the rate at which they were read, but that is
>>>>> not a very significant drop. Is the lock contention worth the
>>>>> tradeoff? I also added the missing
>>>>> __set_current_state(TASK_RUNNING) to the patch for testing.
>>>> Can you try this variant? I don't think we need a
>>>> __set_current_state() after io_schedule(), should be fine as-is.
>>>>
>>>> I'm not surprised this will raise contention a bit, since we're now
>>>> waking N tasks potentially, if N can queue. With the initial change,
>>>> we'd always just wake one.  That is arguably incorrect. You say it's
>>>> 10 times higher contention, how does that compare to before your
>>>> patch?
>>>>
>>>> Is it possible to run something that looks like your workload?
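
[Editor's note: the "wake as many waiters as can queue" behavior being
discussed can be sketched roughly as below. This is a user-space toy with
made-up names (rqw_try_inc, rqw_done_budget, WBT_LIMIT), not the actual
blk-wbt code: an atomic in-flight counter against a limit, where a
completion reports how many waiters it is now safe to wake, in the spirit
of wake_up_nr(), instead of always waking one.]

```c
#include <assert.h>
#include <stdatomic.h>

#define WBT_LIMIT 4

static atomic_int inflight;

/* Queue side: claim a slot if under the limit; 1 on success, 0 if full. */
static int rqw_try_inc(void)
{
	int cur = atomic_load(&inflight);

	do {
		if (cur >= WBT_LIMIT)
			return 0;
	} while (!atomic_compare_exchange_weak(&inflight, &cur, cur + 1));
	return 1;
}

/* Completion side: drop a slot and return the wake budget, i.e. how
 * many sleeping tasks could successfully queue right now. */
static int rqw_done_budget(void)
{
	int after = atomic_fetch_sub(&inflight, 1) - 1;

	return WBT_LIMIT - after;
}
```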
>>> Additionally, is the contention you are seeing the wait queue, or the
>>> atomic counter? When you say lock contention, I'm inclined to think it's
>>> the rqw->wait.lock.
>>>
>> I guess the increased lock contention is due to this:
>> while a wakeup is in progress with the wait queue head lock held, there
>> are still waiters on the wait queue, and __wbt_wait will go to sleep and
>> try to acquire the wait head lock.
>> This is necessary to keep the order on the rqw->wait queue.
>>
>> The attached patch does the following to try to avoid the scenario above:
>> "
>> Introduce a wait queue rqw->delayed. Try to lock rqw->wait.lock first; if
>> that fails, add the waiter to rqw->delayed. __wbt_done picks up the
>> waiters on rqw->delayed and queues them at the tail of rqw->wait before
>> it does the wakeup operation.
>> "
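
[Editor's note: the quoted scheme can be sketched in user space roughly as
follows. All names and the locking layout are illustrative, not the
attached patch itself: a waiter that finds the main wait-queue lock
contended parks itself on a secondary "delayed" list under its own lock,
and the completion path splices the delayed waiters onto the tail of the
main queue before waking, preserving FIFO order.]

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct waiter {
	int id;
	struct waiter *next;
};

struct rqw {
	pthread_mutex_t wait_lock, delayed_lock;
	struct waiter *wait_head, **wait_tail;
	struct waiter *delayed_head, **delayed_tail;
};

static void rqw_init(struct rqw *r)
{
	pthread_mutex_init(&r->wait_lock, NULL);
	pthread_mutex_init(&r->delayed_lock, NULL);
	r->wait_head = r->delayed_head = NULL;
	r->wait_tail = &r->wait_head;
	r->delayed_tail = &r->delayed_head;
}

/* __wbt_wait side: prefer the main queue, fall back to the delayed
 * list when the main lock is contended (trylock fails). */
static void rqw_add_waiter(struct rqw *r, struct waiter *w)
{
	w->next = NULL;
	if (pthread_mutex_trylock(&r->wait_lock) == 0) {
		*r->wait_tail = w;
		r->wait_tail = &w->next;
		pthread_mutex_unlock(&r->wait_lock);
	} else {
		pthread_mutex_lock(&r->delayed_lock);
		*r->delayed_tail = w;
		r->delayed_tail = &w->next;
		pthread_mutex_unlock(&r->delayed_lock);
	}
}

/* __wbt_done side: splice delayed waiters onto the main queue tail,
 * then pop the head for wakeup, so arrival order is kept. */
static struct waiter *rqw_next_waiter(struct rqw *r)
{
	struct waiter *w;

	pthread_mutex_lock(&r->delayed_lock);
	if (r->delayed_head) {
		pthread_mutex_lock(&r->wait_lock);
		*r->wait_tail = r->delayed_head;
		r->wait_tail = r->delayed_tail;
		pthread_mutex_unlock(&r->wait_lock);
		r->delayed_head = NULL;
		r->delayed_tail = &r->delayed_head;
	}
	pthread_mutex_unlock(&r->delayed_lock);

	pthread_mutex_lock(&r->wait_lock);
	w = r->wait_head;
	if (w) {
		r->wait_head = w->next;
		if (!r->wait_head)
			r->wait_tail = &r->wait_head;
	}
	pthread_mutex_unlock(&r->wait_lock);
	return w;
}

/* Single-threaded demo: force one waiter onto the delayed path by
 * holding wait_lock across the add (trylock returns EBUSY), then check
 * that FIFO order survives the splice. */
static int demo_order(void)
{
	struct rqw r;
	struct waiter w1 = { .id = 1 }, w2 = { .id = 2 };

	rqw_init(&r);
	rqw_add_waiter(&r, &w1);	/* trylock succeeds: main queue */
	pthread_mutex_lock(&r.wait_lock);
	rqw_add_waiter(&r, &w2);	/* lock held: delayed list */
	pthread_mutex_unlock(&r.wait_lock);

	return rqw_next_waiter(&r)->id == 1 &&
	       rqw_next_waiter(&r)->id == 2 &&
	       rqw_next_waiter(&r) == NULL;
}
```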
>>
> Hmm, I am not sure about this one. Sure, it will reduce lock contention
> for the waitq lock, but it also introduces more complexity.
> 
> It's expected that there will be more contention if the waitq lock is
> held longer. That's the tradeoff for waking up more throttled tasks and
> making progress faster. Is this added complexity worth the gains? My
> first inclination would be to say no.
> 
> If lock contention on a wait queue is an issue, then either the wait
> queue mechanism itself should be improved, or the code that uses the
> wait queue should be fixed. Also, the contention is still a lot lower
> than it used to be.

Hard to disagree with that. If you look at the blk-mq tagging code, it
spreads out the wait queues exactly because of this.
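
[Editor's note: the wait-queue spreading referred to here can be sketched as
below. blk-mq tagging (the kernel's sbitmap code) keeps several wait queues
and distributes sleepers across them, so sleepers and wakers contend on
different locks instead of one; this toy uses a simple round-robin cursor
and made-up names, which is only loosely modeled on the real code.]

```c
#include <assert.h>

#define NR_WAIT_QUEUES 8

struct tag_waitqueues {
	unsigned int next_index;	/* round-robin cursor */
	int sleepers[NR_WAIT_QUEUES];	/* stand-in for per-queue wait heads */
};

/* Pick a wait queue for a new sleeper, round-robin across the set. */
static unsigned int twq_pick(struct tag_waitqueues *t)
{
	unsigned int i = t->next_index++ % NR_WAIT_QUEUES;

	t->sleepers[i]++;
	return i;
}

/* Demo: 4 * NR_WAIT_QUEUES sleepers end up spread evenly, 4 per queue,
 * so no single queue (and no single lock) absorbs all the contention. */
static int demo_spread(void)
{
	struct tag_waitqueues t = { 0 };
	int i;

	for (i = 0; i < 4 * NR_WAIT_QUEUES; i++)
		twq_pick(&t);
	for (i = 0; i < NR_WAIT_QUEUES; i++)
		if (t.sleepers[i] != 4)
			return 0;
	return 1;
}
```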

-- 
Jens Axboe


Thread overview: 22+ messages
2018-08-23 13:08 [PATCH] blk-wbt: get back the missed wakeup from __wbt_done Jianchao Wang
2018-08-23 15:37 ` Jens Axboe
2018-08-23 16:24   ` van der Linden, Frank
     [not found]     ` <20180823210144.GB5624@kaos-source-ops-60001.pdx1.amazon.com>
2018-08-23 23:03       ` Jens Axboe
2018-08-23 23:14         ` Jens Axboe
2018-08-24  5:55           ` jianchao.wang
2018-08-24 16:40             ` van der Linden, Frank
2018-08-24 16:44               ` Jens Axboe [this message]
     [not found]         ` <20180824181223.GA9049@kaos-source-ops-60001.pdx1.amazon.com>
2018-08-24 18:50           ` Jens Axboe
     [not found]             ` <20180824203305.GA4690@kaos-source-ops-60001.pdx1.amazon.com>
2018-08-24 20:41               ` Jens Axboe
2018-08-25 15:41                 ` Jens Axboe
2018-08-27  3:52                   ` jianchao.wang
2018-08-27  6:15                     ` jianchao.wang
2018-08-27 14:51                       ` Jens Axboe
2018-08-28  2:52                         ` jianchao.wang
2018-08-27 15:37                     ` Jens Axboe
2018-08-23 15:42 ` Jens Axboe
2018-08-24  2:06   ` jianchao.wang
2018-08-24 14:40     ` Jens Axboe
2018-08-24 14:58       ` Jens Axboe
2018-08-24 17:14         ` Eduardo Valentin
2018-08-24 17:17           ` Jens Axboe
