From: Bart Van Assche <bvanassche@acm.org>
To: John Garry <john.garry@huawei.com>, Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org, Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH v5 0/3] blk-mq: Fix a race between iterating over requests and freeing requests
Date: Tue, 6 Apr 2021 10:49:06 -0700 [thread overview]
Message-ID: <fd0359fd-37a5-1e60-0a2b-4e27d1d3ee33@acm.org> (raw)
In-Reply-To: <a4ffb3f0-414d-ba7b-db49-1660faa37873@huawei.com>

On 4/6/21 1:00 AM, John Garry wrote:
> Hi Bart,
>
>> Changes between v2 and v3:
>> - Converted the single v2 patch into a series of three patches.
>> - Switched from SRCU to a combination of RCU and semaphores.
>
> But can you mention why you made the changes from v3 onwards (versus
> v2)?
>
> The v2 patch just used SRCU as the sync mechanism, and the impression
> I got from Jens was that the marginal performance drop was tolerable,
> and the issues it tried to address seemed to be solved. So why change?
> Maybe my impression that the performance drop was acceptable was
> wrong.

Hi John,

It seems like I should have done a better job of explaining that change.
On v2 I received the following feedback from Hannes: "What I don't
particularly like is the global blk_sched_srcu here; can't
we make it per tagset?". My reply was as follows: "I'm concerned about
the additional memory required for one srcu_struct per tag set." Hence
the switch from SRCU to RCU + rwsem. See also
https://lore.kernel.org/linux-block/d1627890-fb10-7ebe-d805-621f925f80e7@suse.de/.

Regarding the 1% performance drop measured by Jens: with debugging
disabled, srcu_dereference() is translated into READ_ONCE() and
rcu_assign_pointer() into smp_store_release(). On x86,
smp_store_release() is translated into a compiler barrier +
WRITE_ONCE(). In other words, I expect that the performance difference
came not from the switch to SRCU but from the two new
hctx->tags->rqs[] assignments.

I think that the switch to READ_ONCE() / WRITE_ONCE() is unavoidable.
Even if cmpxchg() were used to clear hctx->tags->rqs[] pointers, we
would still need to convert all other hctx->tags->rqs[] accesses into
READ_ONCE() / WRITE_ONCE() to make that cmpxchg() call safe.

Bart.