From: Harris, James R <james.r.harris@intel.com>
To: spdk@lists.01.org
Subject: Re: [SPDK] spdk_bdev_close() threading
Date: Thu, 28 Mar 2019 20:40:59 +0000	[thread overview]
Message-ID: <2B3BE5D9-08D2-4170-806C-65E27C875EDB@intel.com> (raw)
In-Reply-To: FBE7E039FA50BF47A673AD0BD3CD56A8462F2F66@HASMSX105.ger.corp.intel.com




On 3/28/19, 7:56 AM, "SPDK on behalf of Stojaczyk, Dariusz" <spdk-bounces@lists.01.org on behalf of dariusz.stojaczyk@intel.com> wrote:

<snip>

    > 
    >     Letting spdk_bdev_close() internally schedule itself on desc->thread looks
    > good at first sight to me.
    > 
    > I agree we can just have it internally schedule itself.  I think we'll still need to
    > set desc->closed = true in the caller's thread first, before we schedule it -
    > otherwise we'll have a race with the _remove_notify path.
    
    Okay, cool. Do we have any unit tests for that?

Not really.  We have unit tests that simulate multi-thread cases (i.e. multiple spdk_thread objects), but they all run from a single pthread.

Thinking about this more though - how do we avoid this race?  We don't want bdev to call the user's remove callback after they've closed the descriptor.  We ensure this today because the remove callback is invoked on the same thread the descriptor was opened on.  If the descriptor was opened on thread A and is closed on thread B, the user would have to be prepared to handle a remove callback on thread A at any point up until spdk_bdev_close() returns on thread B.  That sounds messy.  A typical first reaction to a remove callback is to close the descriptor - meaning the user would have to add its own locking to ensure it doesn't try to close the same descriptor on two different threads.

What if we just add a helper function (spdk_bdev_desc_get_thread?) that returns the thread the descriptor was opened on?  Then at least the user doesn't have to track that itself.  This still leaves some work on the caller side, but I think pushing this into the bdev layer itself would actually create even more (and trickier) work on the caller side to handle the hot remove race.

-Jim


