From: "Ahmed S. Darwish" <a.darwish@linutronix.de>
To: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>, Will Deacon <will@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>,
"Paul E. McKenney" <paulmck@kernel.org>,
"Sebastian A. Siewior" <bigeasy@linutronix.de>,
Steven Rostedt <rostedt@goodmis.org>,
LKML <linux-kernel@vger.kernel.org>,
"Ahmed S. Darwish" <a.darwish@linutronix.de>,
Jonathan Corbet <corbet@lwn.net>,
linux-doc@vger.kernel.org
Subject: [PATCH v4 05/24] seqlock: Add kernel-doc for seqcount_t and seqlock_t APIs
Date: Mon, 20 Jul 2020 17:55:11 +0200
Message-ID: <20200720155530.1173732-6-a.darwish@linutronix.de>
In-Reply-To: <20200720155530.1173732-1-a.darwish@linutronix.de>
seqlock.h is now included by the kernel's RST documentation, but only a
small number of its exported functions are kernel-doc annotated.
Add kernel-doc for all seqlock.h exported APIs.
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
include/linux/seqlock.h | 425 ++++++++++++++++++++++++++++++++--------
1 file changed, 348 insertions(+), 77 deletions(-)
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 4c1456008d89..85fb3ac93ffb 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -75,6 +75,10 @@ static inline void __seqcount_init(seqcount_t *s, const char *name,
# define SEQCOUNT_DEP_MAP_INIT(lockname) \
.dep_map = { .name = #lockname } \
+/**
+ * seqcount_init() - runtime initializer for seqcount_t
+ * @s: Pointer to the seqcount_t instance
+ */
# define seqcount_init(s) \
do { \
static struct lock_class_key __key; \
@@ -98,13 +102,15 @@ static inline void seqcount_lockdep_reader_access(const seqcount_t *s)
# define seqcount_lockdep_reader_access(x)
#endif
-#define SEQCNT_ZERO(lockname) { .sequence = 0, SEQCOUNT_DEP_MAP_INIT(lockname)}
-
+/**
+ * SEQCNT_ZERO() - static initializer for seqcount_t
+ * @name: Name of the seqcount_t instance
+ */
+#define SEQCNT_ZERO(name) { .sequence = 0, SEQCOUNT_DEP_MAP_INIT(name) }
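For illustration only (not part of this diff), both initializers can be used
as follows; `foo_seq` is a hypothetical seqcount_t instance:

	/* Static initialization: */
	static seqcount_t foo_seq = SEQCNT_ZERO(foo_seq);

	/* Or at runtime, e.g. from an init function: */
	seqcount_init(&foo_seq);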
/**
- * __read_seqcount_begin - begin a seq-read critical section (without barrier)
- * @s: pointer to seqcount_t
- * Returns: count to be passed to read_seqcount_retry
+ * __read_seqcount_begin() - begin a seqcount_t read section w/o barrier
+ * @s: Pointer to seqcount_t
*
* __read_seqcount_begin is like read_seqcount_begin, but has no smp_rmb()
* barrier. Callers should ensure that smp_rmb() or equivalent ordering is
@@ -113,6 +119,8 @@ static inline void seqcount_lockdep_reader_access(const seqcount_t *s)
*
* Use carefully, only in critical code, and comment how the barrier is
* provided.
+ *
+ * Return: count to be passed to read_seqcount_retry()
*/
static inline unsigned __read_seqcount_begin(const seqcount_t *s)
{
@@ -129,13 +137,10 @@ static inline unsigned __read_seqcount_begin(const seqcount_t *s)
}
/**
- * raw_read_seqcount_begin - start seq-read critical section w/o lockdep
- * @s: pointer to seqcount_t
- * Returns: count to be passed to read_seqcount_retry
+ * raw_read_seqcount_begin() - begin a seqcount_t read section w/o lockdep
+ * @s: Pointer to seqcount_t
*
- * raw_read_seqcount_begin opens a read critical section of the given
- * seqcount, but without any lockdep checking. Validity of the critical
- * section is tested by checking read_seqcount_retry function.
+ * Return: count to be passed to read_seqcount_retry()
*/
static inline unsigned raw_read_seqcount_begin(const seqcount_t *s)
{
@@ -145,13 +150,10 @@ static inline unsigned raw_read_seqcount_begin(const seqcount_t *s)
}
/**
- * read_seqcount_begin - begin a seq-read critical section
- * @s: pointer to seqcount_t
- * Returns: count to be passed to read_seqcount_retry
+ * read_seqcount_begin() - begin a seqcount_t read critical section
+ * @s: Pointer to seqcount_t
*
- * read_seqcount_begin opens a read critical section of the given seqcount.
- * Validity of the critical section is tested by checking read_seqcount_retry
- * function.
+ * Return: count to be passed to read_seqcount_retry()
*/
static inline unsigned read_seqcount_begin(const seqcount_t *s)
{
@@ -160,13 +162,15 @@ static inline unsigned read_seqcount_begin(const seqcount_t *s)
}
/**
- * raw_read_seqcount - Read the raw seqcount
- * @s: pointer to seqcount_t
- * Returns: count to be passed to read_seqcount_retry
+ * raw_read_seqcount() - read the raw seqcount_t counter value
+ * @s: Pointer to seqcount_t
*
* raw_read_seqcount opens a read critical section of the given
- * seqcount without any lockdep checking and without checking or
- * masking the LSB. Calling code is responsible for handling that.
+ * seqcount_t, without any lockdep checking, and without checking or
+ * masking the sequence counter LSB. Calling code is responsible for
+ * handling that.
+ *
+ * Return: count to be passed to read_seqcount_retry()
*/
static inline unsigned raw_read_seqcount(const seqcount_t *s)
{
@@ -177,18 +181,21 @@ static inline unsigned raw_read_seqcount(const seqcount_t *s)
}
/**
- * raw_seqcount_begin - begin a seq-read critical section
- * @s: pointer to seqcount_t
- * Returns: count to be passed to read_seqcount_retry
+ * raw_seqcount_begin() - begin a seqcount_t read critical section w/o
+ * lockdep and w/o counter stabilization
+ * @s: Pointer to seqcount_t
*
- * raw_seqcount_begin opens a read critical section of the given seqcount.
- * Validity of the critical section is tested by checking read_seqcount_retry
- * function.
+ * raw_seqcount_begin opens a read critical section of the given
+ * seqcount_t. Unlike read_seqcount_begin(), this function will not wait
+ * for the count to stabilize. If a writer is active when it begins, it
+ * will fail the read_seqcount_retry() at the end of the read critical
+ * section instead of stabilizing at the beginning of it.
*
- * Unlike read_seqcount_begin(), this function will not wait for the count
- * to stabilize. If a writer is active when we begin, we will fail the
- * read_seqcount_retry() instead of stabilizing at the beginning of the
- * critical section.
+ * Use this only in special kernel hot paths where the read section is
+ * small and has a high probability of success through other external
+ * means. It will save a single branching instruction.
+ *
+ * Return: count to be passed to read_seqcount_retry()
*/
static inline unsigned raw_seqcount_begin(const seqcount_t *s)
{
@@ -199,10 +206,9 @@ static inline unsigned raw_seqcount_begin(const seqcount_t *s)
}
/**
- * __read_seqcount_retry - end a seq-read critical section (without barrier)
- * @s: pointer to seqcount_t
- * @start: count, from read_seqcount_begin
- * Returns: 1 if retry is required, else 0
+ * __read_seqcount_retry() - end a seqcount_t read section w/o barrier
+ * @s: Pointer to seqcount_t
+ * @start: count, from read_seqcount_begin()
*
* __read_seqcount_retry is like read_seqcount_retry, but has no smp_rmb()
* barrier. Callers should ensure that smp_rmb() or equivalent ordering is
@@ -211,6 +217,8 @@ static inline unsigned raw_seqcount_begin(const seqcount_t *s)
*
* Use carefully, only in critical code, and comment how the barrier is
* provided.
+ *
+ * Return: true if a read section retry is required, else false
*/
static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
{
@@ -219,14 +227,15 @@ static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
}
/**
- * read_seqcount_retry - end a seq-read critical section
- * @s: pointer to seqcount_t
- * @start: count, from read_seqcount_begin
- * Returns: 1 if retry is required, else 0
+ * read_seqcount_retry() - end a seqcount_t read critical section
+ * @s: Pointer to seqcount_t
+ * @start: count, from read_seqcount_begin()
*
- * read_seqcount_retry closes a read critical section of the given seqcount.
- * If the critical section was invalid, it must be ignored (and typically
- * retried).
+ * read_seqcount_retry closes the read critical section of the given
+ * seqcount_t. If the critical section was invalid, it must be ignored
+ * (and typically retried).
+ *
+ * Return: true if a read section retry is required, else false
*/
static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
{
@@ -234,6 +243,10 @@ static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
return __read_seqcount_retry(s, start);
}
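For illustration, the begin/retry pair forms the classic lockless reader
loop. A minimal sketch, assuming a hypothetical `foo_seq` counter protecting
the hypothetical `foo_data`:

	unsigned int seq;
	int data;

	do {
		seq = read_seqcount_begin(&foo_seq);
		data = foo_data;	/* copy out the protected state */
	} while (read_seqcount_retry(&foo_seq, seq));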
+/**
+ * raw_write_seqcount_begin() - start a seqcount_t write section w/o lockdep
+ * @s: Pointer to seqcount_t
+ */
static inline void raw_write_seqcount_begin(seqcount_t *s)
{
kcsan_nestable_atomic_begin();
@@ -241,6 +254,10 @@ static inline void raw_write_seqcount_begin(seqcount_t *s)
smp_wmb();
}
+/**
+ * raw_write_seqcount_end() - end a seqcount_t write section w/o lockdep
+ * @s: Pointer to seqcount_t
+ */
static inline void raw_write_seqcount_end(seqcount_t *s)
{
smp_wmb();
@@ -248,17 +265,42 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
kcsan_nestable_atomic_end();
}
+/**
+ * write_seqcount_begin_nested() - start a seqcount_t write section with
+ * custom lockdep nesting level
+ * @s: Pointer to seqcount_t
+ * @subclass: lockdep nesting level
+ *
+ * See Documentation/locking/lockdep-design.rst
+ */
static inline void write_seqcount_begin_nested(seqcount_t *s, int subclass)
{
raw_write_seqcount_begin(s);
seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_);
}
+/**
+ * write_seqcount_begin() - start a seqcount_t write side critical section
+ * @s: Pointer to seqcount_t
+ *
+ * write_seqcount_begin opens a write side critical section of the given
+ * seqcount_t.
+ *
+ * Context: seqcount_t write side critical sections must be serialized and
+ * non-preemptible. If readers can be invoked from hardirq or softirq
+ * context, interrupts or bottom halves must be respectively disabled.
+ */
static inline void write_seqcount_begin(seqcount_t *s)
{
write_seqcount_begin_nested(s, 0);
}
+/**
+ * write_seqcount_end() - end a seqcount_t write side critical section
+ * @s: Pointer to seqcount_t
+ *
+ * The write section must've been opened with write_seqcount_begin().
+ */
static inline void write_seqcount_end(seqcount_t *s)
{
seqcount_release(&s->dep_map, _RET_IP_);
@@ -266,12 +308,12 @@ static inline void write_seqcount_end(seqcount_t *s)
}
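A corresponding write side sketch (illustrative only): write_seqcount_begin()
does not itself serialize writers, so the hypothetical `foo_lock` spinlock
provides the required serialization and non-preemptibility:

	spin_lock(&foo_lock);		/* serializes writers, disables preemption */
	write_seqcount_begin(&foo_seq);
	foo_data = new_value;
	write_seqcount_end(&foo_seq);
	spin_unlock(&foo_lock);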
/**
- * raw_write_seqcount_barrier - do a seq write barrier
- * @s: pointer to seqcount_t
+ * raw_write_seqcount_barrier() - do a seqcount_t write barrier
+ * @s: Pointer to seqcount_t
*
- * This can be used to provide an ordering guarantee instead of the
- * usual consistency guarantee. It is one wmb cheaper, because we can
- * collapse the two back-to-back wmb()s.
+ * This can be used to provide an ordering guarantee instead of the usual
+ * consistency guarantee. It is one wmb cheaper, because it can collapse
+ * the two back-to-back wmb()s.
*
* Note that writes surrounding the barrier should be declared atomic (e.g.
* via WRITE_ONCE): a) to ensure the writes become visible to other threads
@@ -316,11 +358,12 @@ static inline void raw_write_seqcount_barrier(seqcount_t *s)
}
/**
- * write_seqcount_invalidate - invalidate in-progress read-side seq operations
- * @s: pointer to seqcount_t
+ * write_seqcount_invalidate() - invalidate in-progress seqcount_t read
+ * side operations
+ * @s: Pointer to seqcount_t
*
- * After write_seqcount_invalidate, no read-side seq operations will complete
- * successfully and see data older than this.
+ * After write_seqcount_invalidate, no seqcount_t read side operations
+ * will complete successfully and see data older than this.
*/
static inline void write_seqcount_invalidate(seqcount_t *s)
{
@@ -330,6 +373,21 @@ static inline void write_seqcount_invalidate(seqcount_t *s)
kcsan_nestable_atomic_end();
}
+/**
+ * raw_read_seqcount_latch() - pick even/odd seqcount_t latch data copy
+ * @s: Pointer to seqcount_t
+ *
+ * Use seqcount_t latching to switch between two storage places protected
+ * by a sequence counter. Doing so allows having interruptible, preemptible,
+ * seqcount_t write side critical sections.
+ *
+ * Check raw_write_seqcount_latch() for more details and a full reader and
+ * writer usage example.
+ *
+ * Return: sequence counter raw value. Use the lowest bit as an index for
+ * picking which data copy to read. The full counter value must then be
+ * checked with read_seqcount_retry().
+ */
static inline int raw_read_seqcount_latch(seqcount_t *s)
{
/* Pairs with the first smp_wmb() in raw_write_seqcount_latch() */
@@ -338,8 +396,8 @@ static inline int raw_read_seqcount_latch(seqcount_t *s)
}
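To illustrate the latch read side, a sketch modeled on the writer example in
raw_write_seqcount_latch() below; `latch` (a seqcount_t plus a two-copy
data[] pair), `data_query()`, `key`, and `entry` are all hypothetical:

	unsigned int seq, idx;
	struct entry *entry;

	do {
		seq = raw_read_seqcount_latch(&latch->seq);
		idx = seq & 0x1;	/* LSB picks the data copy to read */
		entry = data_query(&latch->data[idx], key);
	} while (read_seqcount_retry(&latch->seq, seq));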
/**
- * raw_write_seqcount_latch - redirect readers to even/odd copy
- * @s: pointer to seqcount_t
+ * raw_write_seqcount_latch() - redirect readers to even/odd copy
+ * @s: Pointer to seqcount_t
*
* The latch technique is a multiversion concurrency control method that allows
* queries during non-atomic modifications. If you can guarantee queries never
@@ -446,17 +504,28 @@ typedef struct {
.lock = __SPIN_LOCK_UNLOCKED(lockname) \
}
-#define seqlock_init(x) \
+/**
+ * seqlock_init() - dynamic initializer for seqlock_t
+ * @sl: Pointer to the seqlock_t instance
+ */
+#define seqlock_init(sl) \
do { \
- seqcount_init(&(x)->seqcount); \
- spin_lock_init(&(x)->lock); \
+ seqcount_init(&(sl)->seqcount); \
+ spin_lock_init(&(sl)->lock); \
} while (0)
-#define DEFINE_SEQLOCK(x) \
- seqlock_t x = __SEQLOCK_UNLOCKED(x)
+/**
+ * DEFINE_SEQLOCK() - Define a statically allocated seqlock_t
+ * @sl: Name of the seqlock_t instance
+ */
+#define DEFINE_SEQLOCK(sl) \
+ seqlock_t sl = __SEQLOCK_UNLOCKED(sl)
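Illustrative initialization sketch (hypothetical instances, not part of this
diff):

	/* Statically allocated: */
	static DEFINE_SEQLOCK(foo_seqlock);

	/* Or dynamically, e.g. in an allocated object's init path: */
	seqlock_init(&obj->lock);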
-/*
- * Read side functions for starting and finalizing a read side section.
+/**
+ * read_seqbegin() - start a seqlock_t read side critical section
+ * @sl: Pointer to seqlock_t
+ *
+ * Return: count, to be passed to read_seqretry()
*/
static inline unsigned read_seqbegin(const seqlock_t *sl)
{
@@ -467,6 +536,17 @@ static inline unsigned read_seqbegin(const seqlock_t *sl)
return ret;
}
+/**
+ * read_seqretry() - end a seqlock_t read side section
+ * @sl: Pointer to seqlock_t
+ * @start: count, from read_seqbegin()
+ *
+ * read_seqretry closes the read side critical section of the given
+ * seqlock_t. If the critical section was invalid, it must be ignored
+ * (and typically retried).
+ *
+ * Return: true if a read section retry is required, else false
+ */
static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
{
/*
@@ -478,10 +558,18 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
return read_seqcount_retry(&sl->seqcount, start);
}
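A minimal lockless seqlock_t reader sketch, again with the hypothetical
`foo_seqlock` and `foo_data`:

	unsigned int seq;

	do {
		seq = read_seqbegin(&foo_seqlock);
		data = foo_data;	/* read the protected state */
	} while (read_seqretry(&foo_seqlock, seq));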
-/*
- * Lock out other writers and update the count.
- * Acts like a normal spin_lock/unlock.
- * Don't need preempt_disable() because that is in the spin_lock already.
+/**
+ * write_seqlock() - start a seqlock_t write side critical section
+ * @sl: Pointer to seqlock_t
+ *
+ * write_seqlock opens a write side critical section for the given
+ * seqlock_t. It also implicitly acquires the spinlock_t embedded inside
+ * that sequential lock. All seqlock_t write side sections are thus
+ * automatically serialized and non-preemptible.
+ *
+ * Context: if the seqlock_t read section, or other write side critical
+ * sections, can be invoked from hardirq or softirq contexts, use the
+ * _irqsave or _bh variants of this function instead.
*/
static inline void write_seqlock(seqlock_t *sl)
{
@@ -489,30 +577,66 @@ static inline void write_seqlock(seqlock_t *sl)
write_seqcount_begin(&sl->seqcount);
}
+/**
+ * write_sequnlock() - end a seqlock_t write side critical section
+ * @sl: Pointer to seqlock_t
+ *
+ * write_sequnlock closes the (serialized and non-preemptible) write side
+ * critical section of the given seqlock_t.
+ */
static inline void write_sequnlock(seqlock_t *sl)
{
write_seqcount_end(&sl->seqcount);
spin_unlock(&sl->lock);
}
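The matching writer sketch: unlike the bare seqcount_t case, no external
lock is needed, since the embedded spinlock serializes writers:

	write_seqlock(&foo_seqlock);
	foo_data = new_value;
	write_sequnlock(&foo_seqlock);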
+/**
+ * write_seqlock_bh() - start a softirqs-disabled seqlock_t write section
+ * @sl: Pointer to seqlock_t
+ *
+ * _bh variant of write_seqlock(). Use only if the read side section, or
+ * other write side sections, can be invoked from softirq contexts.
+ */
static inline void write_seqlock_bh(seqlock_t *sl)
{
spin_lock_bh(&sl->lock);
write_seqcount_begin(&sl->seqcount);
}
+/**
+ * write_sequnlock_bh() - end a softirqs-disabled seqlock_t write section
+ * @sl: Pointer to seqlock_t
+ *
+ * write_sequnlock_bh closes the serialized, non-preemptible, and
+ * softirqs-disabled, seqlock_t write side critical section opened with
+ * write_seqlock_bh().
+ */
static inline void write_sequnlock_bh(seqlock_t *sl)
{
write_seqcount_end(&sl->seqcount);
spin_unlock_bh(&sl->lock);
}
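Sketch of the _bh variant, for when readers can run from softirq context
(e.g. timers or network RX); identifiers remain hypothetical:

	write_seqlock_bh(&foo_seqlock);
	foo_data = new_value;
	write_sequnlock_bh(&foo_seqlock);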
+/**
+ * write_seqlock_irq() - start a non-interruptible seqlock_t write section
+ * @sl: Pointer to seqlock_t
+ *
+ * _irq variant of write_seqlock(). Use only if the read side section, or
+ * other write sections, can be invoked from hardirq contexts.
+ */
static inline void write_seqlock_irq(seqlock_t *sl)
{
spin_lock_irq(&sl->lock);
write_seqcount_begin(&sl->seqcount);
}
+/**
+ * write_sequnlock_irq() - end a non-interruptible seqlock_t write section
+ * @sl: Pointer to seqlock_t
+ *
+ * write_sequnlock_irq closes the serialized and non-interruptible
+ * seqlock_t write side section opened with write_seqlock_irq().
+ */
static inline void write_sequnlock_irq(seqlock_t *sl)
{
write_seqcount_end(&sl->seqcount);
@@ -528,9 +652,28 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
return flags;
}
+/**
+ * write_seqlock_irqsave() - start a non-interruptible seqlock_t write
+ * section
+ * @lock: Pointer to seqlock_t
+ * @flags: Stack-allocated storage for saving caller's local interrupt
+ * state, to be passed to write_sequnlock_irqrestore().
+ *
+ * _irqsave variant of write_seqlock(). Use it only if the read side
+ * section, or other write sections, can be invoked from hardirq context.
+ */
#define write_seqlock_irqsave(lock, flags) \
do { flags = __write_seqlock_irqsave(lock); } while (0)
+/**
+ * write_sequnlock_irqrestore() - end non-interruptible seqlock_t write
+ * section
+ * @sl: Pointer to seqlock_t
+ * @flags: Caller's saved interrupt state, from write_seqlock_irqsave()
+ *
+ * write_sequnlock_irqrestore closes the serialized and non-interruptible
+ * seqlock_t write section previously opened with write_seqlock_irqsave().
+ */
static inline void
write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
{
@@ -538,36 +681,79 @@ write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
spin_unlock_irqrestore(&sl->lock, flags);
}
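Sketch of the _irqsave/_irqrestore pair, for when readers can run from
hardirq context:

	unsigned long flags;

	write_seqlock_irqsave(&foo_seqlock, flags);
	foo_data = new_value;
	write_sequnlock_irqrestore(&foo_seqlock, flags);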
-/*
- * A locking reader exclusively locks out other writers and locking readers,
- * but doesn't update the sequence number. Acts like a normal spin_lock/unlock.
- * Don't need preempt_disable() because that is in the spin_lock already.
+/**
+ * read_seqlock_excl() - begin a seqlock_t locking reader section
+ * @sl: Pointer to seqlock_t
+ *
+ * read_seqlock_excl opens a seqlock_t locking reader critical section. A
+ * locking reader exclusively locks out *both* other writers *and* other
+ * locking readers, but it does not update the embedded sequence number.
+ *
+ * Locking readers act like a normal spin_lock()/spin_unlock().
+ *
+ * Context: if the seqlock_t write section, *or other read sections*, can
+ * be invoked from hardirq or softirq contexts, use the _irqsave or _bh
+ * variant of this function instead.
+ *
+ * The opened read section must be closed with read_sequnlock_excl().
*/
static inline void read_seqlock_excl(seqlock_t *sl)
{
spin_lock(&sl->lock);
}
+/**
+ * read_sequnlock_excl() - end a seqlock_t locking reader critical section
+ * @sl: Pointer to seqlock_t
+ */
static inline void read_sequnlock_excl(seqlock_t *sl)
{
spin_unlock(&sl->lock);
}
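A locking reader sketch: because writers (and other locking readers) are
locked out, the data is stable and no retry loop is needed:

	read_seqlock_excl(&foo_seqlock);
	data = foo_data;	/* stable; no retry loop required */
	read_sequnlock_excl(&foo_seqlock);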
+/**
+ * read_seqlock_excl_bh() - start a seqlock_t locking reader section with
+ * softirqs disabled
+ * @sl: Pointer to seqlock_t
+ *
+ * _bh variant of read_seqlock_excl(). Use this variant only if the
+ * seqlock_t write side section, *or other read sections*, can be invoked
+ * from softirq contexts.
+ */
static inline void read_seqlock_excl_bh(seqlock_t *sl)
{
spin_lock_bh(&sl->lock);
}
+/**
+ * read_sequnlock_excl_bh() - stop a seqlock_t softirq-disabled locking
+ * reader section
+ * @sl: Pointer to seqlock_t
+ */
static inline void read_sequnlock_excl_bh(seqlock_t *sl)
{
spin_unlock_bh(&sl->lock);
}
+/**
+ * read_seqlock_excl_irq() - start a non-interruptible seqlock_t locking
+ * reader section
+ * @sl: Pointer to seqlock_t
+ *
+ * _irq variant of read_seqlock_excl(). Use this only if the seqlock_t
+ * write side section, *or other read sections*, can be invoked from a
+ * hardirq context.
+ */
static inline void read_seqlock_excl_irq(seqlock_t *sl)
{
spin_lock_irq(&sl->lock);
}
+/**
+ * read_sequnlock_excl_irq() - end an interrupts-disabled seqlock_t
+ * locking reader section
+ * @sl: Pointer to seqlock_t
+ */
static inline void read_sequnlock_excl_irq(seqlock_t *sl)
{
spin_unlock_irq(&sl->lock);
@@ -581,9 +767,26 @@ static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl)
return flags;
}
+/**
+ * read_seqlock_excl_irqsave() - start a non-interruptible seqlock_t
+ * locking reader section
+ * @lock: Pointer to seqlock_t
+ * @flags: Stack-allocated storage for saving caller's local interrupt
+ * state, to be passed to read_sequnlock_excl_irqrestore().
+ *
+ * _irqsave variant of read_seqlock_excl(). Use this only if the seqlock_t
+ * write side section, *or other read sections*, can be invoked from a
+ * hardirq context.
+ */
#define read_seqlock_excl_irqsave(lock, flags) \
do { flags = __read_seqlock_excl_irqsave(lock); } while (0)
+/**
+ * read_sequnlock_excl_irqrestore() - end non-interruptible seqlock_t
+ * locking reader section
+ * @sl: Pointer to seqlock_t
+ * @flags: Caller saved interrupt state, from read_seqlock_excl_irqsave()
+ */
static inline void
read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
{
@@ -591,14 +794,35 @@ read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
}
/**
- * read_seqbegin_or_lock - begin a sequence number check or locking block
- * @lock: sequence lock
- * @seq : sequence number to be checked
- *
- * First try it once optimistically without taking the lock. If that fails,
- * take the lock. The sequence number is also used as a marker for deciding
- * whether to be a reader (even) or writer (odd).
- * N.B. seq must be initialized to an even number to begin with.
+ * read_seqbegin_or_lock() - begin a seqlock_t lockless or locking reader
+ * @lock: Pointer to seqlock_t
+ * @seq: Marker and return parameter. If the passed value is even, the
+ * reader will become a *lockless* seqlock_t reader as in read_seqbegin().
+ * If the passed value is odd, the reader will become a *locking* reader
+ * as in read_seqlock_excl(). In the first call to this function, the
+ * caller *must* initialize and pass an even value to @seq; this way, a
+ * lockless read can be optimistically tried first.
+ *
+ * read_seqbegin_or_lock is an API designed to optimistically try a normal
+ * lockless seqlock_t read section first. If an odd counter is found, the
+ * lockless read trial has failed, and the next read iteration transforms
+ * itself into a full seqlock_t locking reader.
+ *
+ * This is typically used to avoid lockless seqlock_t reader starvation
+ * (too many retry loops) in the case of a sharp spike in write side
+ * activity.
+ *
+ * Context: if the seqlock_t write section, *or other read sections*, can
+ * be invoked from hardirq or softirq contexts, use the _irqsave or _bh
+ * variant of this function instead.
+ *
+ * Check Documentation/locking/seqlock.rst for template example code.
+ *
+ * Return: the encountered sequence counter value, through the @seq
+ * parameter, which is overloaded as a return parameter. This returned
+ * value must be checked with need_seqretry(). If the read section needs to
+ * be retried, this returned value must also be passed as the @seq
+ * parameter of the next read_seqbegin_or_lock() iteration.
*/
static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
{
@@ -608,17 +832,52 @@ static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
read_seqlock_excl(lock);
}
+/**
+ * need_seqretry() - validate seqlock_t "locking or lockless" read section
+ * @lock: Pointer to seqlock_t
+ * @seq: sequence count, from read_seqbegin_or_lock()
+ *
+ * Return: true if a read section retry is required, false otherwise
+ */
static inline int need_seqretry(seqlock_t *lock, int seq)
{
return !(seq & 1) && read_seqretry(lock, seq);
}
+/**
+ * done_seqretry() - end seqlock_t "locking or lockless" reader section
+ * @lock: Pointer to seqlock_t
+ * @seq: count, from read_seqbegin_or_lock()
+ *
+ * done_seqretry finishes the seqlock_t read side critical section started
+ * with read_seqbegin_or_lock() and validated by need_seqretry().
+ */
static inline void done_seqretry(seqlock_t *lock, int seq)
{
if (seq & 1)
read_sequnlock_excl(lock);
}
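Sketched below is the reader template these kernel-docs refer to
(Documentation/locking/seqlock.rst), using the hypothetical `foo_seqlock`
and `foo_data`:

	int seq = 0;	/* must start even: lockless pass is tried first */

	do {
		read_seqbegin_or_lock(&foo_seqlock, &seq);
		data = foo_data;
	} while (need_seqretry(&foo_seqlock, seq));
	done_seqretry(&foo_seqlock, seq);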
+/**
+ * read_seqbegin_or_lock_irqsave() - begin a seqlock_t lockless reader, or
+ * a non-interruptible locking reader
+ * @lock: Pointer to seqlock_t
+ * @seq: Marker and return parameter. Check read_seqbegin_or_lock().
+ *
+ * This is the _irqsave variant of read_seqbegin_or_lock(). Use it only if
+ * the seqlock_t write section, *or other read sections*, can be invoked
+ * from hardirq context.
+ *
+ * Note: Interrupts will be disabled only for "locking reader" mode.
+ *
+ * Return:
+ *
+ * 1. The saved local interrupts state in case of a locking reader, to
+ * be passed to done_seqretry_irqrestore().
+ *
+ * 2. The encountered sequence counter value, returned through @seq
+ * overloaded as a return parameter. Check read_seqbegin_or_lock().
+ */
static inline unsigned long
read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
{
@@ -632,6 +891,18 @@ read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
return flags;
}
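The _irqsave form of the same template, paired with
done_seqretry_irqrestore() documented below (an illustrative sketch with the
same hypothetical identifiers):

	unsigned long flags;
	int seq = 0;

	do {
		flags = read_seqbegin_or_lock_irqsave(&foo_seqlock, &seq);
		data = foo_data;
	} while (need_seqretry(&foo_seqlock, seq));
	done_seqretry_irqrestore(&foo_seqlock, seq, flags);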
+/**
+ * done_seqretry_irqrestore() - end a seqlock_t lockless reader, or a
+ * non-interruptible locking reader section
+ * @lock: Pointer to seqlock_t
+ * @seq: Count, from read_seqbegin_or_lock_irqsave()
+ * @flags: Caller's saved local interrupt state in case of a locking
+ * reader, also from read_seqbegin_or_lock_irqsave()
+ *
+ * This is the _irqrestore variant of done_seqretry(). The read section
+ * must've been opened with read_seqbegin_or_lock_irqsave(), and validated
+ * by need_seqretry().
+ */
static inline void
done_seqretry_irqrestore(seqlock_t *lock, int seq, unsigned long flags)
{
--
2.20.1