* SRCU use from different contexts
@ 2022-11-23 11:46 Jan Kara
  2022-11-24  0:21 ` Paul E. McKenney
  0 siblings, 1 reply; 10+ messages in thread
From: Jan Kara @ 2022-11-23 11:46 UTC (permalink / raw)
  To: rcu; +Cc: Lai Jiangshan, Paul E. McKenney, Josh Triplett, Amir Goldstein

Hello!

We were pondering with Amir about some issues with fsnotify subsystem and
as a building block we would need a mechanism to make sure write(2) has
completed. For simplicity we could imagine it like a sequence

write(2)
  START
  do stuff to perform write
  END

and we need a mechanism to wait for all processes that already passed START
to reach END. Ideally without blocking new writes while we wait for the
pending ones. Now this seems like a good task for SRCU. We could do:

write(2)
  srcu_read_lock(&sb->s_write_rcu);
  do stuff to perform write
  srcu_read_unlock(&sb->s_write_rcu);

and use synchronize_srcu(&sb->s_write_rcu) for waiting.
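
In made-up C (sb->s_write_rcu does not exist today, it is just a field
we would add to struct super_block), the pattern would look roughly
like:

	/* write side */
	int idx = srcu_read_lock(&sb->s_write_rcu);
	do_stuff_to_perform_write();		/* placeholder */
	srcu_read_unlock(&sb->s_write_rcu, idx);

	/* waiter side: returns once all writes that passed START finish */
	synchronize_srcu(&sb->s_write_rcu);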

But the trouble with writes is there are things like aio or io_uring where
the part with srcu_read_lock() happens from one task (the submitter) while
the part with srcu_read_unlock() happens from another context (usually worker
thread triggered by IRQ reporting that the HW has finished the IO).

Is there any chance to make SRCU work in a situation like this? It seems to
me in principle it should be possible to make this work but maybe there are
some implementation constraints I'm missing... 

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: SRCU use from different contexts
  2022-11-23 11:46 SRCU use from different contexts Jan Kara
@ 2022-11-24  0:21 ` Paul E. McKenney
  2022-11-24  6:21   ` Amir Goldstein
  2022-11-24 10:18   ` Jan Kara
  0 siblings, 2 replies; 10+ messages in thread
From: Paul E. McKenney @ 2022-11-24  0:21 UTC (permalink / raw)
  To: Jan Kara; +Cc: rcu, Lai Jiangshan, Josh Triplett, Amir Goldstein

On Wed, Nov 23, 2022 at 12:46:45PM +0100, Jan Kara wrote:
> Hello!
> 
> We were pondering with Amir about some issues with fsnotify subsystem and
> as a building block we would need a mechanism to make sure write(2) has
> completed. For simplicity we could imagine it like a sequence
> 
> write(2)
>   START
>   do stuff to perform write
>   END
> 
> and we need a mechanism to wait for all processes that already passed START
> to reach END. Ideally without blocking new writes while we wait for the
> pending ones. Now this seems like a good task for SRCU. We could do:
> 
> write(2)
>   srcu_read_lock(&sb->s_write_rcu);
>   do stuff to perform write
>   srcu_read_unlock(&sb->s_write_rcu);
> 
> and use synchronize_srcu(&sb->s_write_rcu) for waiting.
> 
> But the trouble with writes is there are things like aio or io_uring where
> the part with srcu_read_lock() happens from one task (the submitter) while
> the part with srcu_read_unlock() happens from another context (usually worker
> thread triggered by IRQ reporting that the HW has finished the IO).
> 
> Is there any chance to make SRCU work in a situation like this? It seems to
> me in principle it should be possible to make this work but maybe there are
> some implementation constraints I'm missing... 

The srcu_read_lock_notrace() and srcu_read_unlock_notrace() functions
will work for this, though that is not their intended purpose.  Plus you
might want to trace these functions, which, as their names indicate, is
not permitted.  I assume that you do not intend to use these functions
from NMI handlers, though that really could be accommodated.  (But why
would you need that?)

So how about srcu_down_read() and srcu_up_read(), as shown in the
(untested) patch below?

Note that you do still need to pass the return value from srcu_down_read()
into srcu_up_read().  I am guessing that io_uring has a convenient place
that this value can be placed.  No idea about aio.
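
Schematically, with invented field names (not taken from any actual
io_uring or aio code):

	/* submission side: stash the cookie in the request */
	req->write_srcu_idx = srcu_down_read(&sb->s_write_rcu);

	/* completion side, possibly another task or an irq-driven worker */
	srcu_up_read(&sb->s_write_rcu, req->write_srcu_idx);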

Thoughts?

							Thanx, Paul

------------------------------------------------------------------------

commit 0efa1e7b5a862e9c2f1bf8c19db6bd142ad35355
Author: Paul E. McKenney <paulmck@kernel.org>
Date:   Wed Nov 23 15:49:55 2022 -0800

    rcu: Add srcu_down_read() and srcu_up_read()
    
    A pair of matching srcu_read_lock() and srcu_read_unlock() invocations
    must take place within the same context, for example, within the same
    task.  Otherwise, lockdep complains, as is the right thing to do for
    most use cases.
    
    However, there are use cases involving asynchronous I/O where the
    SRCU reader needs to begin on one task and end on another.  This commit
    therefore supplies the semaphore-like srcu_down_read() and srcu_up_read(),
    which act like srcu_read_lock() and srcu_read_unlock(), but permit
    srcu_up_read() to be invoked in a different context than the matching
    srcu_down_read().
    
    Neither srcu_down_read() nor srcu_up_read() may be invoked from an
    NMI handler.
    
    Reported-by: Jan Kara <jack@suse.cz>
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 9b9d0bbf1d3cf..74796cd7e7a9d 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -214,6 +214,34 @@ srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
 	return retval;
 }
 
+/**
+ * srcu_down_read - register a new reader for an SRCU-protected structure.
+ * @ssp: srcu_struct in which to register the new reader.
+ *
+ * Enter a semaphore-like SRCU read-side critical section.  Note that
+ * SRCU read-side critical sections may be nested.  However, it is
+ * illegal to call anything that waits on an SRCU grace period for the
+ * same srcu_struct, whether directly or indirectly.  Please note that
+ * one way to indirectly wait on an SRCU grace period is to acquire
+ * a mutex that is held elsewhere while calling synchronize_srcu() or
+ * synchronize_srcu_expedited().  But if you want lockdep to help you
+ * keep this stuff straight, you should instead use srcu_read_lock().
+ *
+ * The semaphore-like nature of srcu_down_read() means that the matching
+ * srcu_up_read() can be invoked from some other context, for example,
+ * from some other task or from an irq handler.  However, neither
+ * srcu_down_read() nor srcu_up_read() may be invoked from an NMI handler.
+ *
+ * Calls to srcu_down_read() may be nested, similar to the manner in
+ * which calls to down_read() may be nested.
+ */
+static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp)
+{
+	WARN_ON_ONCE(in_nmi());
+	srcu_check_nmi_safety(ssp, false);
+	return __srcu_read_lock(ssp);
+}
+
 /**
  * srcu_read_unlock - unregister a old reader from an SRCU-protected structure.
  * @ssp: srcu_struct in which to unregister the old reader.
@@ -254,6 +282,23 @@ srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
 	__srcu_read_unlock(ssp, idx);
 }
 
+/**
+ * srcu_up_read - unregister an old reader from an SRCU-protected structure.
+ * @ssp: srcu_struct in which to unregister the old reader.
+ * @idx: return value from the corresponding srcu_down_read().
+ *
+ * Exit an SRCU read-side critical section, but not necessarily from
+ * the same context as the matching srcu_down_read().
+ */
+static inline void srcu_up_read(struct srcu_struct *ssp, int idx)
+	__releases(ssp)
+{
+	WARN_ON_ONCE(idx & ~0x1);
+	WARN_ON_ONCE(in_nmi());
+	srcu_check_nmi_safety(ssp, false);
+	__srcu_read_unlock(ssp, idx);
+}
+
 /**
  * smp_mb__after_srcu_read_unlock - ensure full ordering after srcu_read_unlock
  *


* Re: SRCU use from different contexts
  2022-11-24  0:21 ` Paul E. McKenney
@ 2022-11-24  6:21   ` Amir Goldstein
  2022-11-24  9:58     ` Jan Kara
  2022-11-24 10:18   ` Jan Kara
  1 sibling, 1 reply; 10+ messages in thread
From: Amir Goldstein @ 2022-11-24  6:21 UTC (permalink / raw)
  To: paulmck; +Cc: Jan Kara, rcu, Lai Jiangshan, Josh Triplett, linux-fsdevel

[+fsdevel]

On Thu, Nov 24, 2022 at 2:21 AM Paul E. McKenney <paulmck@kernel.org> wrote:
>
> On Wed, Nov 23, 2022 at 12:46:45PM +0100, Jan Kara wrote:
> > Hello!
> >
> > We were pondering with Amir about some issues with fsnotify subsystem and
> > as a building block we would need a mechanism to make sure write(2) has
> > completed. For simplicity we could imagine it like a sequence
> >
> > write(2)
> >   START
> >   do stuff to perform write
> >   END
> >
> > and we need a mechanism to wait for all processes that already passed START
> > to reach END. Ideally without blocking new writes while we wait for the
> > pending ones. Now this seems like a good task for SRCU. We could do:
> >
> > write(2)
> >   srcu_read_lock(&sb->s_write_rcu);
> >   do stuff to perform write
> >   srcu_read_unlock(&sb->s_write_rcu);
> >
> > and use synchronize_srcu(&sb->s_write_rcu) for waiting.
> >
> > But the trouble with writes is there are things like aio or io_uring where
> > the part with srcu_read_lock() happens from one task (the submitter) while
> > the part with srcu_read_unlock() happens from another context (usually worker
> > thread triggered by IRQ reporting that the HW has finished the IO).
> >
> > Is there any chance to make SRCU work in a situation like this? It seems to
> > me in principle it should be possible to make this work but maybe there are
> > some implementation constraints I'm missing...
>
> The srcu_read_lock_notrace() and srcu_read_unlock_notrace() functions
> will work for this, though that is not their intended purpose.  Plus you
> might want to trace these functions, which, as their names indicate, is
> not permitted.  I assume that you do not intend to use these functions
> from NMI handlers, though that really could be accommodated.  (But why
> would you need that?)
>
> So how about srcu_down_read() and srcu_up_read(), as shown in the
> (untested) patch below?
>
> Note that you do still need to pass the return value from srcu_down_read()
> into srcu_up_read().  I am guessing that io_uring has a convenient place
> that this value can be placed.  No idea about aio.
>

Sure, aio completion has context.

> Thoughts?

That looks great! Thank you.

Followup question:
Both fs/aio.c:aio_write() and io_uring/rw.c:io_write() do this ugly
thing:

/*
 * Open-code file_start_write here to grab freeze protection,
 * which will be released by another thread in
 * aio_complete_rw().  Fool lockdep by telling it the lock got
 * released so that it doesn't complain about the held lock when
 * we return to userspace.
 */
if (S_ISREG(file_inode(file)->i_mode)) {
    sb_start_write(file_inode(file)->i_sb);
    __sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
}

And in write completion:

/*
 * Tell lockdep we inherited freeze protection from submission
 * thread.
 */
if (S_ISREG(inode->i_mode))
    __sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE);
file_end_write(kiocb->ki_filp);

I suppose we also need to "fool lockdep" w.r.t returning to userspace
with an acquired srcu?

Thanks,
Amir.

>
>                                                         Thanx, Paul
>
> ------------------------------------------------------------------------
>
> commit 0efa1e7b5a862e9c2f1bf8c19db6bd142ad35355
> Author: Paul E. McKenney <paulmck@kernel.org>
> Date:   Wed Nov 23 15:49:55 2022 -0800
>
>     rcu: Add srcu_down_read() and srcu_up_read()
>
>     A pair of matching srcu_read_lock() and srcu_read_unlock() invocations
>     must take place within the same context, for example, within the same
>     task.  Otherwise, lockdep complains, as is the right thing to do for
>     most use cases.
>
>     However, there are use cases involving asynchronous I/O where the
>     SRCU reader needs to begin on one task and end on another.  This commit
>     therefore supplies the semaphore-like srcu_down_read() and srcu_up_read(),
>     which act like srcu_read_lock() and srcu_read_unlock(), but permit
>     srcu_up_read() to be invoked in a different context than the matching
>     srcu_down_read().
>
>     Neither srcu_down_read() nor srcu_up_read() may be invoked from an
>     NMI handler.
>
>     Reported-by: Jan Kara <jack@suse.cz>
>     Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
>
> diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> index 9b9d0bbf1d3cf..74796cd7e7a9d 100644
> --- a/include/linux/srcu.h
> +++ b/include/linux/srcu.h
> @@ -214,6 +214,34 @@ srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
>         return retval;
>  }
>
> +/**
> + * srcu_down_read - register a new reader for an SRCU-protected structure.
> + * @ssp: srcu_struct in which to register the new reader.
> + *
> + * Enter a semaphore-like SRCU read-side critical section.  Note that
> + * SRCU read-side critical sections may be nested.  However, it is
> + * illegal to call anything that waits on an SRCU grace period for the
> + * same srcu_struct, whether directly or indirectly.  Please note that
> + * one way to indirectly wait on an SRCU grace period is to acquire
> + * a mutex that is held elsewhere while calling synchronize_srcu() or
> + * synchronize_srcu_expedited().  But if you want lockdep to help you
> + * keep this stuff straight, you should instead use srcu_read_lock().
> + *
> + * The semaphore-like nature of srcu_down_read() means that the matching
> + * srcu_up_read() can be invoked from some other context, for example,
> + * from some other task or from an irq handler.  However, neither
> + * srcu_down_read() nor srcu_up_read() may be invoked from an NMI handler.
> + *
> + * Calls to srcu_down_read() may be nested, similar to the manner in
> + * which calls to down_read() may be nested.
> + */
> +static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp)
> +{
> +       WARN_ON_ONCE(in_nmi());
> +       srcu_check_nmi_safety(ssp, false);
> +       return __srcu_read_lock(ssp);
> +}
> +
>  /**
>   * srcu_read_unlock - unregister a old reader from an SRCU-protected structure.
>   * @ssp: srcu_struct in which to unregister the old reader.
> @@ -254,6 +282,23 @@ srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp)
>         __srcu_read_unlock(ssp, idx);
>  }
>
> +/**
> + * srcu_up_read - unregister an old reader from an SRCU-protected structure.
> + * @ssp: srcu_struct in which to unregister the old reader.
> + * @idx: return value from the corresponding srcu_down_read().
> + *
> + * Exit an SRCU read-side critical section, but not necessarily from
> + * the same context as the matching srcu_down_read().
> + */
> +static inline void srcu_up_read(struct srcu_struct *ssp, int idx)
> +       __releases(ssp)
> +{
> +       WARN_ON_ONCE(idx & ~0x1);
> +       WARN_ON_ONCE(in_nmi());
> +       srcu_check_nmi_safety(ssp, false);
> +       __srcu_read_unlock(ssp, idx);
> +}
> +
>  /**
>   * smp_mb__after_srcu_read_unlock - ensure full ordering after srcu_read_unlock
>   *


* Re: SRCU use from different contexts
  2022-11-24  6:21   ` Amir Goldstein
@ 2022-11-24  9:58     ` Jan Kara
  2022-11-24 16:11       ` Paul E. McKenney
  0 siblings, 1 reply; 10+ messages in thread
From: Jan Kara @ 2022-11-24  9:58 UTC (permalink / raw)
  To: Amir Goldstein
  Cc: paulmck, Jan Kara, rcu, Lai Jiangshan, Josh Triplett, linux-fsdevel

On Thu 24-11-22 08:21:13, Amir Goldstein wrote:
> [+fsdevel]
> 
> On Thu, Nov 24, 2022 at 2:21 AM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > On Wed, Nov 23, 2022 at 12:46:45PM +0100, Jan Kara wrote:
> > > Hello!
> > >
> > > We were pondering with Amir about some issues with fsnotify subsystem and
> > > as a building block we would need a mechanism to make sure write(2) has
> > > completed. For simplicity we could imagine it like a sequence
> > >
> > > write(2)
> > >   START
> > >   do stuff to perform write
> > >   END
> > >
> > > and we need a mechanism to wait for all processes that already passed START
> > > to reach END. Ideally without blocking new writes while we wait for the
> > > pending ones. Now this seems like a good task for SRCU. We could do:
> > >
> > > write(2)
> > >   srcu_read_lock(&sb->s_write_rcu);
> > >   do stuff to perform write
> > >   srcu_read_unlock(&sb->s_write_rcu);
> > >
> > > and use synchronize_srcu(&sb->s_write_rcu) for waiting.
> > >
> > > But the trouble with writes is there are things like aio or io_uring where
> > > the part with srcu_read_lock() happens from one task (the submitter) while
> > > the part with srcu_read_unlock() happens from another context (usually worker
> > > thread triggered by IRQ reporting that the HW has finished the IO).
> > >
> > > Is there any chance to make SRCU work in a situation like this? It seems to
> > > me in principle it should be possible to make this work but maybe there are
> > > some implementation constraints I'm missing...
> >
> > The srcu_read_lock_notrace() and srcu_read_unlock_notrace() functions
> > will work for this, though that is not their intended purpose.  Plus you
> > might want to trace these functions, which, as their names indicate, is
> > not permitted.  I assume that you do not intend to use these functions
> > from NMI handlers, though that really could be accommodated.  (But why
> > would you need that?)
> >
> > So how about srcu_down_read() and srcu_up_read(), as shown in the
> > (untested) patch below?
> >
> > Note that you do still need to pass the return value from srcu_down_read()
> > into srcu_up_read().  I am guessing that io_uring has a convenient place
> > that this value can be placed.  No idea about aio.
> >
> 
> Sure, aio completion has context.
> 
> > Thoughts?
> 
> That looks great! Thank you.
> 
> Followup question:
> Both fs/aio.c:aio_write() and io_uring/rw.c:io_write() do this ugly
> thing:
> 
> /*
>  * Open-code file_start_write here to grab freeze protection,
>  * which will be released by another thread in
>  * aio_complete_rw().  Fool lockdep by telling it the lock got
>  * released so that it doesn't complain about the held lock when
>  * we return to userspace.
>  */
> if (S_ISREG(file_inode(file)->i_mode)) {
>     sb_start_write(file_inode(file)->i_sb);
>     __sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
> }
> 
> And in write completion:
> 
> /*
>  * Tell lockdep we inherited freeze protection from submission
>  * thread.
>  */
> if (S_ISREG(inode->i_mode))
>     __sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE);
> file_end_write(kiocb->ki_filp);
> 
> I suppose we also need to "fool lockdep" w.r.t returning to userspace
> with an acquired srcu?

So AFAICT the whole point of Paul's new helpers is to not use lockdep and
thus not have to play the "fool lockdep" games.
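
To illustrate (sketch only, ki_write_srcu_idx is an invented field),
the new SRCU side would simply be:

	/* at submission, next to the existing freeze-protection dance: */
	kiocb->ki_write_srcu_idx = srcu_down_read(&sb->s_write_rcu);

	/* at completion, from whatever context finishes the IO: */
	srcu_up_read(&sb->s_write_rcu, kiocb->ki_write_srcu_idx);

Since lockdep does not track srcu_down_read() at all, there is nothing
to fool when returning to userspace.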

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: SRCU use from different contexts
  2022-11-24  0:21 ` Paul E. McKenney
  2022-11-24  6:21   ` Amir Goldstein
@ 2022-11-24 10:18   ` Jan Kara
  2022-11-24 16:12     ` Paul E. McKenney
  1 sibling, 1 reply; 10+ messages in thread
From: Jan Kara @ 2022-11-24 10:18 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Jan Kara, rcu, Lai Jiangshan, Josh Triplett, Amir Goldstein

On Wed 23-11-22 16:21:28, Paul E. McKenney wrote:
> On Wed, Nov 23, 2022 at 12:46:45PM +0100, Jan Kara wrote:
> > Hello!
> > 
> > We were pondering with Amir about some issues with fsnotify subsystem and
> > as a building block we would need a mechanism to make sure write(2) has
> > completed. For simplicity we could imagine it like a sequence
> > 
> > write(2)
> >   START
> >   do stuff to perform write
> >   END
> > 
> > and we need a mechanism to wait for all processes that already passed START
> > to reach END. Ideally without blocking new writes while we wait for the
> > pending ones. Now this seems like a good task for SRCU. We could do:
> > 
> > write(2)
> >   srcu_read_lock(&sb->s_write_rcu);
> >   do stuff to perform write
> >   srcu_read_unlock(&sb->s_write_rcu);
> > 
> > and use synchronize_srcu(&sb->s_write_rcu) for waiting.
> > 
> > But the trouble with writes is there are things like aio or io_uring where
> > the part with srcu_read_lock() happens from one task (the submitter) while
> > the part with srcu_read_unlock() happens from another context (usually worker
> > thread triggered by IRQ reporting that the HW has finished the IO).
> > 
> > Is there any chance to make SRCU work in a situation like this? It seems to
> > me in principle it should be possible to make this work but maybe there are
> > some implementation constraints I'm missing... 
> 
> The srcu_read_lock_notrace() and srcu_read_unlock_notrace() functions
> will work for this, though that is not their intended purpose.  Plus you
> might want to trace these functions, which, as their names indicate, is
> not permitted.  I assume that you do not intend to use these functions
> from NMI handlers, though that really could be accommodated.  (But why
> would you need that?)
> 
> So how about srcu_down_read() and srcu_up_read(), as shown in the
> (untested) patch below?

Great, that looks exactly like what we need! Thanks for the quick reply!

> Note that you do still need to pass the return value from srcu_down_read()
> into srcu_up_read().  I am guessing that io_uring has a convenient place
> that this value can be placed.  No idea about aio.

Sure, we know about the need to pass the return value but we can handle
that :)

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: SRCU use from different contexts
  2022-11-24  9:58     ` Jan Kara
@ 2022-11-24 16:11       ` Paul E. McKenney
  2022-11-24 17:46         ` Jan Kara
  0 siblings, 1 reply; 10+ messages in thread
From: Paul E. McKenney @ 2022-11-24 16:11 UTC (permalink / raw)
  To: Jan Kara; +Cc: Amir Goldstein, rcu, Lai Jiangshan, Josh Triplett, linux-fsdevel

On Thu, Nov 24, 2022 at 10:58:40AM +0100, Jan Kara wrote:
> On Thu 24-11-22 08:21:13, Amir Goldstein wrote:
> > [+fsdevel]
> > 
> > On Thu, Nov 24, 2022 at 2:21 AM Paul E. McKenney <paulmck@kernel.org> wrote:
> > >
> > > On Wed, Nov 23, 2022 at 12:46:45PM +0100, Jan Kara wrote:
> > > > Hello!
> > > >
> > > > We were pondering with Amir about some issues with fsnotify subsystem and
> > > > as a building block we would need a mechanism to make sure write(2) has
> > > > completed. For simplicity we could imagine it like a sequence
> > > >
> > > > write(2)
> > > >   START
> > > >   do stuff to perform write
> > > >   END
> > > >
> > > > and we need a mechanism to wait for all processes that already passed START
> > > > to reach END. Ideally without blocking new writes while we wait for the
> > > > pending ones. Now this seems like a good task for SRCU. We could do:
> > > >
> > > > write(2)
> > > >   srcu_read_lock(&sb->s_write_rcu);
> > > >   do stuff to perform write
> > > >   srcu_read_unlock(&sb->s_write_rcu);
> > > >
> > > > and use synchronize_srcu(&sb->s_write_rcu) for waiting.
> > > >
> > > > But the trouble with writes is there are things like aio or io_uring where
> > > > the part with srcu_read_lock() happens from one task (the submitter) while
> > > > the part with srcu_read_unlock() happens from another context (usually worker
> > > > thread triggered by IRQ reporting that the HW has finished the IO).
> > > >
> > > > Is there any chance to make SRCU work in a situation like this? It seems to
> > > > me in principle it should be possible to make this work but maybe there are
> > > > some implementation constraints I'm missing...
> > >
> > > The srcu_read_lock_notrace() and srcu_read_unlock_notrace() functions
> > > will work for this, though that is not their intended purpose.  Plus you
> > > might want to trace these functions, which, as their names indicate, is
> > > not permitted.  I assume that you do not intend to use these functions
> > > from NMI handlers, though that really could be accommodated.  (But why
> > > would you need that?)
> > >
> > > So how about srcu_down_read() and srcu_up_read(), as shown in the
> > > (untested) patch below?
> > >
> > > Note that you do still need to pass the return value from srcu_down_read()
> > > into srcu_up_read().  I am guessing that io_uring has a convenient place
> > > that this value can be placed.  No idea about aio.
> > >
> > 
> > Sure, aio completion has context.
> > 
> > > Thoughts?
> > 
> > That looks great! Thank you.
> > 
> > Followup question:
> > Both fs/aio.c:aio_write() and io_uring/rw.c:io_write() do this ugly
> > thing:
> > 
> > /*
> >  * Open-code file_start_write here to grab freeze protection,
> >  * which will be released by another thread in
> >  * aio_complete_rw().  Fool lockdep by telling it the lock got
> >  * released so that it doesn't complain about the held lock when
> >  * we return to userspace.
> >  */
> > if (S_ISREG(file_inode(file)->i_mode)) {
> >     sb_start_write(file_inode(file)->i_sb);
> >     __sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
> > }
> > 
> > And in write completion:
> > 
> > /*
> >  * Tell lockdep we inherited freeze protection from submission
> >  * thread.
> >  */
> > if (S_ISREG(inode->i_mode))
> >     __sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE);
> > file_end_write(kiocb->ki_filp);
> > 
> > I suppose we also need to "fool lockdep" w.r.t returning to userspace
> > with an acquired srcu?
> 
> So AFAICT the whole point of Paul's new helpers is to not use lockdep and
> thus not have to play the "fool lockdep" games.

Exactly!  ;-)

But if you do return to userspace after invoking srcu_down_read(), it
is your responsibility to make sure that -something- eventually invokes
srcu_up_read().  Which might or might not be able to rely on userspace
doing something sensible.

I would guess that you have a timeout or rely on close() for that purpose,
just as you presumably do for sb_start_write(), but figured I should
mention it.
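
Schematically (again with invented names), the request's teardown or
timeout path is the natural backstop:

	/* somewhere guaranteed to run, e.g. request teardown: */
	if (req->write_srcu_held)
		srcu_up_read(&sb->s_write_rcu, req->write_srcu_idx);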

							Thanx, Paul


* Re: SRCU use from different contexts
  2022-11-24 10:18   ` Jan Kara
@ 2022-11-24 16:12     ` Paul E. McKenney
  0 siblings, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2022-11-24 16:12 UTC (permalink / raw)
  To: Jan Kara; +Cc: rcu, Lai Jiangshan, Josh Triplett, Amir Goldstein

On Thu, Nov 24, 2022 at 11:18:53AM +0100, Jan Kara wrote:
> On Wed 23-11-22 16:21:28, Paul E. McKenney wrote:
> > On Wed, Nov 23, 2022 at 12:46:45PM +0100, Jan Kara wrote:
> > > Hello!
> > > 
> > > We were pondering with Amir about some issues with fsnotify subsystem and
> > > as a building block we would need a mechanism to make sure write(2) has
> > > completed. For simplicity we could imagine it like a sequence
> > > 
> > > write(2)
> > >   START
> > >   do stuff to perform write
> > >   END
> > > 
> > > and we need a mechanism to wait for all processes that already passed START
> > > to reach END. Ideally without blocking new writes while we wait for the
> > > pending ones. Now this seems like a good task for SRCU. We could do:
> > > 
> > > write(2)
> > >   srcu_read_lock(&sb->s_write_rcu);
> > >   do stuff to perform write
> > >   srcu_read_unlock(&sb->s_write_rcu);
> > > 
> > > and use synchronize_srcu(&sb->s_write_rcu) for waiting.
> > > 
> > > But the trouble with writes is there are things like aio or io_uring where
> > > the part with srcu_read_lock() happens from one task (the submitter) while
> > > the part with srcu_read_unlock() happens from another context (usually worker
> > > thread triggered by IRQ reporting that the HW has finished the IO).
> > > 
> > > Is there any chance to make SRCU work in a situation like this? It seems to
> > > me in principle it should be possible to make this work but maybe there are
> > > some implementation constraints I'm missing... 
> > 
> > The srcu_read_lock_notrace() and srcu_read_unlock_notrace() functions
> > will work for this, though that is not their intended purpose.  Plus you
> > might want to trace these functions, which, as their names indicate, is
> > not permitted.  I assume that you do not intend to use these functions
> > from NMI handlers, though that really could be accommodated.  (But why
> > would you need that?)
> > 
> > So how about srcu_down_read() and srcu_up_read(), as shown in the
> > (untested) patch below?
> 
> Great, that looks exactly like what we need! Thanks for the quick reply!
> 
> > Note that you do still need to pass the return value from srcu_down_read()
> > into srcu_up_read().  I am guessing that io_uring has a convenient place
> > that this value can be placed.  No idea about aio.
> 
> Sure, we know about the need to pass the return value but we can handle
> that :)

Very good, and looking forward to seeing a Tested-by.  ;-)

							Thanx, Paul


* Re: SRCU use from different contexts
  2022-11-24 16:11       ` Paul E. McKenney
@ 2022-11-24 17:46         ` Jan Kara
  2022-11-25 12:45           ` Amir Goldstein
  0 siblings, 1 reply; 10+ messages in thread
From: Jan Kara @ 2022-11-24 17:46 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Jan Kara, Amir Goldstein, rcu, Lai Jiangshan, Josh Triplett,
	linux-fsdevel

On Thu 24-11-22 08:11:47, Paul E. McKenney wrote:
> On Thu, Nov 24, 2022 at 10:58:40AM +0100, Jan Kara wrote:
> > On Thu 24-11-22 08:21:13, Amir Goldstein wrote:
> > > [+fsdevel]
> > > 
> > > On Thu, Nov 24, 2022 at 2:21 AM Paul E. McKenney <paulmck@kernel.org> wrote:
> > > >
> > > > On Wed, Nov 23, 2022 at 12:46:45PM +0100, Jan Kara wrote:
> > > > > Hello!
> > > > >
> > > > > We were pondering with Amir about some issues with fsnotify subsystem and
> > > > > as a building block we would need a mechanism to make sure write(2) has
> > > > > completed. For simplicity we could imagine it like a sequence
> > > > >
> > > > > write(2)
> > > > >   START
> > > > >   do stuff to perform write
> > > > >   END
> > > > >
> > > > > and we need a mechanism to wait for all processes that already passed START
> > > > > to reach END. Ideally without blocking new writes while we wait for the
> > > > > pending ones. Now this seems like a good task for SRCU. We could do:
> > > > >
> > > > > write(2)
> > > > >   srcu_read_lock(&sb->s_write_rcu);
> > > > >   do stuff to perform write
> > > > >   srcu_read_unlock(&sb->s_write_rcu);
> > > > >
> > > > > and use synchronize_srcu(&sb->s_write_rcu) for waiting.
> > > > >
> > > > > But the trouble with writes is there are things like aio or io_uring where
> > > > > the part with srcu_read_lock() happens from one task (the submitter) while
> > > > > the part with srcu_read_unlock() happens from another context (usually worker
> > > > > thread triggered by IRQ reporting that the HW has finished the IO).
> > > > >
> > > > > Is there any chance to make SRCU work in a situation like this? It seems to
> > > > > me in principle it should be possible to make this work but maybe there are
> > > > > some implementation constraints I'm missing...
> > > >
> > > > The srcu_read_lock_notrace() and srcu_read_unlock_notrace() functions
> > > > will work for this, though that is not their intended purpose.  Plus you
> > > > might want to trace these functions, which, as their names indicate, is
> > > > not permitted.  I assume that you do not intend to use these functions
> > > > from NMI handlers, though that really could be accommodated.  (But why
> > > > would you need that?)
> > > >
> > > > So how about srcu_down_read() and srcu_up_read(), as shown in the
> > > > (untested) patch below?
> > > >
> > > > Note that you do still need to pass the return value from srcu_down_read()
> > > > into srcu_up_read().  I am guessing that io_uring has a convenient place
> > > > that this value can be placed.  No idea about aio.
> > > >
> > > 
> > > Sure, aio completion has context.
> > > 
> > > > Thoughts?
> > > 
> > > That looks great! Thank you.
> > > 
> > > Followup question:
> > > Both fs/aio.c:aio_write() and io_uring/rw.c:io_write() do this ugly
> > > thing:
> > > 
> > > /*
> > >  * Open-code file_start_write here to grab freeze protection,
> > >  * which will be released by another thread in
> > >  * aio_complete_rw().  Fool lockdep by telling it the lock got
> > >  * released so that it doesn't complain about the held lock when
> > >  * we return to userspace.
> > >  */
> > > if (S_ISREG(file_inode(file)->i_mode)) {
> > >     sb_start_write(file_inode(file)->i_sb);
> > >     __sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
> > > }
> > > 
> > > And in write completion:
> > > 
> > > /*
> > >  * Tell lockdep we inherited freeze protection from submission
> > >  * thread.
> > >  */
> > > if (S_ISREG(inode->i_mode))
> > >     __sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE);
> > > file_end_write(kiocb->ki_filp);
> > > 
> > > I suppose we also need to "fool lockdep" w.r.t returning to userspace
> > > with an acquired srcu?
> > 
> > So AFAICT the whole point of Paul's new helpers is to not use lockdep and
> > thus not have to play the "fool lockdep" games.
> 
> Exactly!  ;-)
> 
> But if you do return to userspace after invoking srcu_down_read(), it
> is your responsibility to make sure that -something- eventually invokes
> srcu_up_read().  Which might or might not be able to rely on userspace
> doing something sensible.
> 
> I would guess that you have a timeout or rely on close() for that purpose,
> just as you presumably do for sb_start_write(), but figured I should
> mention it.

Yes. We actually do not rely on userspace but rather on HW to eventually
signal IO completion. For misbehaving HW there are timeouts but the details
depend very much on the protocol etc. But as you say it is the same
business as with sb_start_write() so nothing new here.

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: SRCU use from different contexts
  2022-11-24 17:46         ` Jan Kara
@ 2022-11-25 12:45           ` Amir Goldstein
  2022-11-29  0:12             ` Paul E. McKenney
  0 siblings, 1 reply; 10+ messages in thread
From: Amir Goldstein @ 2022-11-25 12:45 UTC (permalink / raw)
  To: Paul E. McKenney, Jan Kara
  Cc: rcu, Lai Jiangshan, Josh Triplett, linux-fsdevel

On Thu, Nov 24, 2022 at 7:46 PM Jan Kara <jack@suse.cz> wrote:
>
> On Thu 24-11-22 08:11:47, Paul E. McKenney wrote:
> > On Thu, Nov 24, 2022 at 10:58:40AM +0100, Jan Kara wrote:
> > > On Thu 24-11-22 08:21:13, Amir Goldstein wrote:
> > > > [+fsdevel]
> > > >
> > > > On Thu, Nov 24, 2022 at 2:21 AM Paul E. McKenney <paulmck@kernel.org> wrote:
> > > > >
> > > > > On Wed, Nov 23, 2022 at 12:46:45PM +0100, Jan Kara wrote:
> > > > > > Hello!
> > > > > >
> > > > > > We were pondering with Amir about some issues with fsnotify subsystem and
> > > > > > as a building block we would need a mechanism to make sure write(2) has
> > > > > > completed. For simplicity we could imagine it like a sequence
> > > > > >
> > > > > > write(2)
> > > > > >   START
> > > > > >   do stuff to perform write
> > > > > >   END
> > > > > >
> > > > > > and we need a mechanism to wait for all processes that already passed START
> > > > > > to reach END. Ideally without blocking new writes while we wait for the
> > > > > > pending ones. Now this seems like a good task for SRCU. We could do:
> > > > > >
> > > > > > write(2)
> > > > > >   srcu_read_lock(&sb->s_write_rcu);
> > > > > >   do stuff to perform write
> > > > > >   srcu_read_unlock(&sb->s_write_rcu);
> > > > > >
> > > > > > and use synchronize_srcu(&sb->s_write_rcu) for waiting.
> > > > > >
> > > > > > But the trouble with writes is there are things like aio or io_uring where
> > > > > > the part with srcu_read_lock() happens from one task (the submitter) while
> > > > > > the part with srcu_read_unlock() happens from another context (usually worker
> > > > > > thread triggered by IRQ reporting that the HW has finished the IO).
> > > > > >
> > > > > > Is there any chance to make SRCU work in a situation like this? It seems to
> > > > > > me in principle it should be possible to make this work but maybe there are
> > > > > > some implementation constraints I'm missing...
> > > > >
> > > > > The srcu_read_lock_notrace() and srcu_read_unlock_notrace() functions
> > > > > will work for this, though that is not their intended purpose.  Plus you
> > > > > might want to trace these functions, which, as their names indicate, is
> > > > > not permitted.  I assume that you do not intend to use these functions
> > > > > from NMI handlers, though that really could be accommodated.  (But why
> > > > > would you need that?)
> > > > >
> > > > > So how about srcu_down_read() and srcu_up_read(), as shown in the
> > > > > (untested) patch below?
> > > > >
> > > > > Note that you do still need to pass the return value from srcu_down_read()
> > > > > into srcu_up_read().  I am guessing that io_uring has a convenient place
> > > > > that this value can be placed.  No idea about aio.
> > > > >
> > > >
> > > > Sure, aio completion has context.
> > > >
> > > > > Thoughts?
> > > >
> > > > That looks great! Thank you.
> > > >
> > > > Followup question:
> > > > Both fs/aio.c:aio_write() and io_uring/rw.c:io_write() do this ugly
> > > > thing:
> > > >
> > > > /*
> > > >  * Open-code file_start_write here to grab freeze protection,
> > > >  * which will be released by another thread in
> > > >  * aio_complete_rw().  Fool lockdep by telling it the lock got
> > > >  * released so that it doesn't complain about the held lock when
> > > >  * we return to userspace.
> > > >  */
> > > > if (S_ISREG(file_inode(file)->i_mode)) {
> > > >     sb_start_write(file_inode(file)->i_sb);
> > > >     __sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
> > > > }
> > > >
> > > > And in write completion:
> > > >
> > > > /*
> > > >  * Tell lockdep we inherited freeze protection from submission
> > > >  * thread.
> > > >  */
> > > > if (S_ISREG(inode->i_mode))
> > > >     __sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE);
> > > > file_end_write(kiocb->ki_filp);
> > > >
> > > > I suppose we also need to "fool lockdep" w.r.t returning to userspace
> > > > with an acquired srcu?
> > >
> > > So AFAICT the whole point of Paul's new helpers is to not use lockdep and
> > > thus not have to play the "fool lockdep" games.
> >
> > Exactly!  ;-)
> >
> > But if you do return to userspace after invoking srcu_down_read(), it
> > is your responsibility to make sure that -something- eventually invokes
> > srcu_up_read().  Which might or might not be able to rely on userspace
> > doing something sensible.
> >
> > I would guess that you have a timeout or rely on close() for that purpose,
> > just as you presumably do for sb_start_write(), but figured I should
> > mention it.
>
> Yes. We actually do not rely on userspace but rather on HW to eventually
> signal IO completion. For misbehaving HW there are timeouts but the details
> depend very much on the protocol etc. But as you say it is the same
> business as with sb_start_write() so nothing new here.
>

FYI, here is my POC branch that uses srcu_down,up_read()
for aio writes:

https://github.com/amir73il/linux/commits/sb_write_barrier

Note that NOT all writes take s_write_srcu, but all writes that
generate fsnotify pre-modify events without sb_start_write() held
MUST take s_write_srcu, so there is an assertion in fsnotify():

if (mask & FS_PRE_VFS) {
    /* Avoid false positives with LOCK_STATE_UNKNOWN */
    lockdep_assert_once(sb_write_started(sb) != LOCK_STATE_HELD);
    if (mask & FSNOTIFY_PRE_MODIFY_EVENTS)
        lockdep_assert_once(sb_write_srcu_started(sb));
}

For testing, I've added synchronize_srcu(&sb->s_write_srcu) at
the beginning of syncfs() and freeze_super().
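
That is (sketch; the real thing is in the branch above):

	/* at the start of syncfs() / freeze_super(): */
	synchronize_srcu(&sb->s_write_srcu);
	/* all writes that already emitted a pre-modify event are now done */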

Even though syncfs() is not the intended UAPI, it is a UAPI that could
make sense in the future (think SYNC_FILE_RANGE_WAIT_BEFORE
for the vfs level).

I've run the fstests groups aio and freeze, which exercise these code
paths, on xfs and on overlayfs, and (after fixing all my bugs) I have
not observed any regressions or lockdep splats.

So you may add:
Tested-by: Amir Goldstein <amir73il@gmail.com>

Thanks again for the patch Paul!
Amir.


* Re: SRCU use from different contexts
  2022-11-25 12:45           ` Amir Goldstein
@ 2022-11-29  0:12             ` Paul E. McKenney
  0 siblings, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2022-11-29  0:12 UTC (permalink / raw)
  To: Amir Goldstein; +Cc: Jan Kara, rcu, Lai Jiangshan, Josh Triplett, linux-fsdevel

On Fri, Nov 25, 2022 at 02:45:53PM +0200, Amir Goldstein wrote:
> On Thu, Nov 24, 2022 at 7:46 PM Jan Kara <jack@suse.cz> wrote:
> >
> > On Thu 24-11-22 08:11:47, Paul E. McKenney wrote:
> > > On Thu, Nov 24, 2022 at 10:58:40AM +0100, Jan Kara wrote:
> > > > On Thu 24-11-22 08:21:13, Amir Goldstein wrote:
> > > > > [+fsdevel]
> > > > >
> > > > > On Thu, Nov 24, 2022 at 2:21 AM Paul E. McKenney <paulmck@kernel.org> wrote:
> > > > > >
> > > > > > On Wed, Nov 23, 2022 at 12:46:45PM +0100, Jan Kara wrote:
> > > > > > > Hello!
> > > > > > >
> > > > > > > We were pondering with Amir about some issues with fsnotify subsystem and
> > > > > > > as a building block we would need a mechanism to make sure write(2) has
> > > > > > > completed. For simplicity we could imagine it like a sequence
> > > > > > >
> > > > > > > write(2)
> > > > > > >   START
> > > > > > >   do stuff to perform write
> > > > > > >   END
> > > > > > >
> > > > > > > and we need a mechanism to wait for all processes that already passed START
> > > > > > > to reach END. Ideally without blocking new writes while we wait for the
> > > > > > > pending ones. Now this seems like a good task for SRCU. We could do:
> > > > > > >
> > > > > > > write(2)
> > > > > > >   srcu_read_lock(&sb->s_write_rcu);
> > > > > > >   do stuff to perform write
> > > > > > >   srcu_read_unlock(&sb->s_write_rcu);
> > > > > > >
> > > > > > > and use synchronize_srcu(&sb->s_write_rcu) for waiting.
> > > > > > >
> > > > > > > But the trouble with writes is there are things like aio or io_uring where
> > > > > > > the part with srcu_read_lock() happens from one task (the submitter) while
> > > > > > > the part with srcu_read_unlock() happens from another context (usually worker
> > > > > > > thread triggered by IRQ reporting that the HW has finished the IO).
> > > > > > >
> > > > > > > Is there any chance to make SRCU work in a situation like this? It seems to
> > > > > > > me in principle it should be possible to make this work but maybe there are
> > > > > > > some implementation constraints I'm missing...
> > > > > >
> > > > > > The srcu_read_lock_notrace() and srcu_read_unlock_notrace() functions
> > > > > > will work for this, though that is not their intended purpose.  Plus you
> > > > > > might want to trace these functions, which, as their names indicate, is
> > > > > > not permitted.  I assume that you do not intend to use these functions
> > > > > > from NMI handlers, though that really could be accommodated.  (But why
> > > > > > would you need that?)
> > > > > >
> > > > > > So how about srcu_down_read() and srcu_up_read(), as shown in the
> > > > > > (untested) patch below?
> > > > > >
> > > > > > Note that you do still need to pass the return value from srcu_down_read()
> > > > > > into srcu_up_read().  I am guessing that io_uring has a convenient place
> > > > > > that this value can be placed.  No idea about aio.
> > > > > >
> > > > >
> > > > > Sure, aio completion has context.
> > > > >
> > > > > > Thoughts?
> > > > >
> > > > > That looks great! Thank you.
> > > > >
> > > > > Followup question:
> > > > > Both fs/aio.c:aio_write() and io_uring/rw.c:io_write() do this ugly
> > > > > thing:
> > > > >
> > > > > /*
> > > > >  * Open-code file_start_write here to grab freeze protection,
> > > > >  * which will be released by another thread in
> > > > >  * aio_complete_rw().  Fool lockdep by telling it the lock got
> > > > >  * released so that it doesn't complain about the held lock when
> > > > >  * we return to userspace.
> > > > >  */
> > > > > if (S_ISREG(file_inode(file)->i_mode)) {
> > > > >     sb_start_write(file_inode(file)->i_sb);
> > > > >     __sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
> > > > > }
> > > > >
> > > > > And in write completion:
> > > > >
> > > > > /*
> > > > >  * Tell lockdep we inherited freeze protection from submission
> > > > >  * thread.
> > > > >  */
> > > > > if (S_ISREG(inode->i_mode))
> > > > >     __sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE);
> > > > > file_end_write(kiocb->ki_filp);
> > > > >
> > > > > I suppose we also need to "fool lockdep" w.r.t returning to userspace
> > > > > with an acquired srcu?
> > > >
> > > > So AFAICT the whole point of Paul's new helpers is to not use lockdep and
> > > > thus not have to play the "fool lockdep" games.
> > >
> > > Exactly!  ;-)
> > >
> > > But if you do return to userspace after invoking srcu_down_read(), it
> > > is your responsibility to make sure that -something- eventually invokes
> > > srcu_up_read().  Which might or might not be able to rely on userspace
> > > doing something sensible.
> > >
> > > I would guess that you have a timeout or rely on close() for that purpose,
> > > just as you presumably do for sb_start_write(), but figured I should
> > > mention it.
> >
> > Yes. We actually do not rely on userspace but rather on HW to eventually
> > signal IO completion. For misbehaving HW there are timeouts but the details
> > depend very much on the protocol etc. But as you say it is the same
> > business as with sb_start_write() so nothing new here.
> >
> 
> FYI, here is my POC branch that uses srcu_down,up_read()
> for aio writes:
> 
> https://github.com/amir73il/linux/commits/sb_write_barrier
> 
> Note that NOT all writes take s_write_srcu, but all writes that
> generate fsnotify pre-modify events without sb_start_write() held
> MUST take s_write_srcu, so there is an assertion in fsnotify():
> 
> if (mask & FS_PRE_VFS) {
>     /* Avoid false positives with LOCK_STATE_UNKNOWN */
>     lockdep_assert_once(sb_write_started(sb) != LOCK_STATE_HELD);
>     if (mask & FSNOTIFY_PRE_MODIFY_EVENTS)
>         lockdep_assert_once(sb_write_srcu_started(sb));
> }
> 
> For testing, I've added synchronize_srcu(&sb->s_write_srcu) at
> the beginning of syncfs() and freeze_super().
> 
> Even though syncfs() is not the intended UAPI, it is a UAPI that could
> make sense in the future (think SYNC_FILE_RANGE_WAIT_BEFORE
> for the vfs level).
> 
> I've run the fstests groups aio and freeze that exercises these code
> paths on xfs and on overlayfs and (after fixing all my bugs) I have not
> observed any regressions nor any lockdep splats.
> 
> So you may add:
> Tested-by: Amir Goldstein <amir73il@gmail.com>

Very good, and thank you!  I will apply this on my next rebase.

> Thanks again for the patch Paul!

I will be cherry-picking this on top of -rcu's srcunmisafe.2022.11.09a
branch and sending you a public branch.  I do -not- expect to push this
into the upcoming merge window unless you tell me that you need it.
Preferably sooner rather than later.  ;-)

							Thanx, Paul

