From: Jens Axboe <firstname.lastname@example.org>
To: Kashyap Desai <email@example.com>,
Bart Van Assche <firstname.lastname@example.org>,
Cc: Ming Lei <email@example.com>,
Suganath Prabu Subramani <firstname.lastname@example.org>,
Sreekanth Reddy <email@example.com>,
Sathya Prakash Veerichetty <firstname.lastname@example.org>
Subject: Re: [PATCH V2] blk-mq: Set request mapping to NULL in blk_mq_put_driver_tag
Date: Tue, 18 Dec 2018 11:28:46 -0700 [thread overview]
Message-ID: <email@example.com> (raw)
On 12/18/18 11:22 AM, Kashyap Desai wrote:
>>> At the time of device removal, it requires reverse traversing: find
>>> out whether each request associated with the sdev is part of
>>> hctx->tags->rqs[] and clear that entry.
>>> Not sure the traversal can be made atomic if more than one device
>>> removal is happening in parallel; it may be more error prone.
>>> Either way we will be removing invalid requests from the table.
>>> Are you suspecting any performance issue if we do it per IO?
>> It's an extra store, and it's a store to an area that's now shared
>> between issue and completion. Those are never a good idea. Besides,
>> it's the kind of issue you solve in the SLOW path, not in the fast
>> path. Since that's doable, it would be silly to do it for every IO.
>> This might not matter on mpt3sas, but on more efficient hw it
>> definitely will.
> Understood. Your primary concern is to avoid doing this per IO, and to
> do it in the slow path if there is no better way.
>> I'm still trying to convince myself that this issue even exists. I can
>> see stale entries, but those should never be busy. Why are you finding
>> them in the tag iteration? It must be because the tag is reused, and
>> you are finding it before it's re-assigned?
> Stale entries will be there forever once we remove scsi devices; it is
> not a timing issue. If the memory associated with a request (freed due
> to device removal) is reused, a kernel panic occurs.
> We have 24 drives behind an expander and run an expander reset, which
> will remove all 24 drives and add them back. The removal and
> re-addition of all the drives happens quickly.
> As part of the expander reset, the <mpt3sas> driver processes a
> broadcast primitive event, and that requires finding all outstanding
> scsi commands.
> In some cases we need a firmware restart, and that path also requires
> tag iteration.
I actually took a look at scsi_host_find_tag() - what I think needs
fixing here is that it should not return a request whose tag isn't
allocated. You're just looking up random stuff; that is a recipe for
disaster. But even with that, there's no guarantee that the tag isn't
going away.
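The guard being described could look roughly like the following. This is a userspace toy model, not the real scsi_host_find_tag(); the struct names, the in_flight[] flag, and both function names are illustrative assumptions standing in for the real tag-allocation state:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of a tag map: rqs[] mirrors hctx->tags->rqs, and
 * in_flight[] stands in for "is this tag currently allocated?".
 * All names here are illustrative, not the real kernel structures. */
#define TAG_DEPTH 8

struct toy_request { int tag; };

struct toy_tags {
    struct toy_request *rqs[TAG_DEPTH];
    bool in_flight[TAG_DEPTH];
};

/* Unsafe lookup: returns whatever pointer is stored at rqs[tag],
 * even if it is a stale entry for a freed request. */
struct toy_request *find_tag_unsafe(struct toy_tags *tags, int tag)
{
    if (tag < 0 || tag >= TAG_DEPTH)
        return NULL;
    return tags->rqs[tag];
}

/* Guarded lookup: a stale entry for an unallocated tag is never
 * handed back to the caller. */
struct toy_request *find_tag_checked(struct toy_tags *tags, int tag)
{
    if (tag < 0 || tag >= TAG_DEPTH)
        return NULL;
    if (!tags->in_flight[tag])
        return NULL;
    return tags->rqs[tag];
}
```

The point of the model: once a tag is freed but rqs[] still holds the old pointer, the unsafe lookup hands back a dangling request while the checked one returns NULL.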
The mpt3sas use case is crap. It's iterating every tag, just in case it
needs to do something to it.
My suggestion would be to scrap that bad implementation and have
something available for iterating busy tags instead. That'd be more
appropriate and a lot more efficient than a random loop from 0..depth.
If you are flushing running commands, looking up tags that aren't even
active is silly and counterproductive.
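The busy-tag iteration being suggested can be sketched with a bitmap walk. Again a userspace model under stated assumptions: the real blk-mq side uses sbitmap-backed iterators, not a single 64-bit word, and the function and type names below are made up for illustration:

```c
/* Toy busy-tag iterator: visit only the tags whose bits are set in a
 * busy bitmap, instead of probing every tag from 0..depth. The real
 * blk-mq code uses sbitmap; a 64-bit word keeps the model simple. */
#define DEPTH 64

typedef void (*busy_fn)(int tag, void *data);

static void for_each_busy_tag(unsigned long long busy, busy_fn fn,
                              void *data)
{
    while (busy) {
        int tag = __builtin_ctzll(busy); /* lowest set bit = busy tag */
        fn(tag, data);
        busy &= busy - 1;                /* clear that bit and continue */
    }
}

/* Example callback: record the tags visited, in order. */
struct visit_log { int tags[DEPTH]; int count; };

static void record_tag(int tag, void *data)
{
    struct visit_log *log = data;
    log->tags[log->count++] = tag;
}
```

With three busy tags out of 64, the walk touches exactly three entries; the 0..depth loop would probe all 64 and then still need a per-tag validity check.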
Thread overview: 13+ messages
2018-12-18 7:08 [PATCH V2] blk-mq: Set request mapping to NULL in blk_mq_put_driver_tag Kashyap Desai
2018-12-18 8:27 ` Hannes Reinecke
2018-12-18 16:15 ` Bart Van Assche
2018-12-18 16:18 ` Jens Axboe
2018-12-18 17:08 ` Kashyap Desai
2018-12-18 17:13 ` Jens Axboe
2018-12-18 17:48 ` Kashyap Desai
2018-12-18 17:50 ` Jens Axboe
2018-12-18 18:08 ` Kashyap Desai
2018-12-18 18:11 ` Jens Axboe
2018-12-18 18:22 ` Kashyap Desai
2018-12-18 18:28 ` Jens Axboe [this message]
2018-12-18 19:04 ` Kashyap Desai