* [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, David S. Miller,
	Andrew Morton, Jens Axboe, Jonathan Corbet, Alexander Viro,
	David Airlie, Daniel Vetter, netdev, linux-mm, linux-block,
	dri-devel, linux-fsdevel, linux-doc

Hi,

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. If the serialization primitive does
not implicitly disable preemption, preemption has to be explicitly
disabled before entering the write side critical section.
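
For instance (an illustrative sketch only, not code from this series), a
seqcount_t with writers serialized by a mutex needs such an explicit
preempt_disable/enable() pair, since mutexes do not disable preemption:

  static DEFINE_MUTEX(foo_lock);
  static seqcount_t foo_seq = SEQCNT_ZERO(foo_seq);
  static u64 foo_data;

  static void foo_update(u64 val)
  {
  	mutex_lock(&foo_lock);		/* serialize writers ... */
  	preempt_disable();		/* ... and disable preemption */
  	write_seqcount_begin(&foo_seq);
  	foo_data = val;
  	write_seqcount_end(&foo_seq);
  	preempt_enable();
  	mutex_unlock(&foo_lock);
  }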

There is no built-in debugging mechanism to verify that the lock used
for writer serialization is held and preemption is disabled. Some usage
sites like dma-buf have explicit lockdep checks for the writer-side
lock, but this covers only a small portion of the sequence counter usage
in the kernel.

Add new sequence counter types which allow a lock to be associated with
the sequence counter at initialization time. The seqcount API functions
are extended to provide appropriate lockdep assertions depending on the
seqcount/lock type.

For sequence counters with associated locks that do not implicitly
disable preemption, preemption protection is enforced in the sequence
counter write side functions. This removes the need to explicitly add
preempt_disable/enable() around the write side critical sections: the
write_begin/end() functions for these new sequence counter types
automatically do this.

Extend the lockdep API with a macro asserting that preemption is
disabled. Use it to verify that preemption is disabled for all sequence
counter write side critical sections.

If lockdep is disabled, these lock associations and non-preemptibility
checks are compiled out and incur neither storage nor runtime overhead.
If lockdep is enabled, a pointer to the lock is stored in the seqcount
and the write side API functions enable lockdep assertions.
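
Conceptually (a rough sketch only; the exact layout is defined by the
series' seqlock.h patch), such an associated-lock seqcount looks like:

  typedef struct seqcount_spinlock {
  	seqcount_t	seqcount;
  #ifdef CONFIG_LOCKDEP
  	spinlock_t	*lock;	/* for lockdep asserts; PREEMPT_RT stores it too */
  #endif
  } seqcount_spinlock_t;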

The following seqcount types with associated locks are introduced:

     seqcount_spinlock_t
     seqcount_raw_spinlock_t
     seqcount_rwlock_t
     seqcount_mutex_t
     seqcount_ww_mutex_t
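
A minimal usage sketch for one of these types (initializer naming as
intended by this series; see the seqlock.h patch for the exact API):

  static spinlock_t foo_lock;
  static seqcount_spinlock_t foo_seq;	/* associated with foo_lock */

  /* init, e.g.: seqcount_spinlock_init(&foo_seq, &foo_lock); */

  static void foo_update(void)
  {
  	spin_lock(&foo_lock);		/* implicitly disables preemption */
  	write_seqcount_begin(&foo_seq);	/* lockdep asserts foo_lock is held */

  	/* ... write side critical section ... */

  	write_seqcount_end(&foo_seq);
  	spin_unlock(&foo_lock);
  }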

This lock association is not only useful for debugging purposes, it also
provides a mechanism for PREEMPT_RT to prevent writer starvation. On RT
kernels, spinlocks and rwlocks are substituted with sleeping locks, and
the code sections protected by these locks become preemptible. This has
the same problem as a write side critical section with preemption enabled
on a non-RT kernel. RT utilizes this association by storing the provided
lock pointer: if a reader sees an active writer (the seqcount is odd), it
does not spin, but blocks on the associated lock, similar to
read_seqbegin_or_lock().
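
In concept, the RT reader path then looks like the sketch below. This is
a simplification, not the series' actual implementation:

  static unsigned rt_seqcount_begin(seqcount_spinlock_t *s)
  {
  	unsigned seq = READ_ONCE(s->seqcount.sequence);

  	if (unlikely(seq & 1)) {
  		/*
  		 * Active writer: acquiring the associated (sleeping,
  		 * on RT) lock priority-boosts the preempted writer
  		 * and waits for it to finish, instead of spinning.
  		 */
  		spin_lock(s->lock);
  		spin_unlock(s->lock);
  		seq = READ_ONCE(s->seqcount.sequence);
  	}
  	smp_rmb();
  	return seq;
  }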

By using the lockdep debugging mechanisms added in this patch series, a
number of erroneous seqcount call-sites were discovered across the
kernel. The fixes are included at the beginning of the series.

Thanks,

8<--------------

Ahmed S. Darwish (25):
  net: core: device_rename: Use rwsem instead of a seqcount
  mm/swap: Don't abuse the seqcount latching API
  net: phy: fixed_phy: Remove unused seqcount
  block: nr_sects_write(): Disable preemption on seqcount write
  u64_stats: Document writer non-preemptibility requirement
  dma-buf: Remove custom seqcount lockdep class key
  lockdep: Add preemption disabled assertion API
  seqlock: lockdep assert non-preemptibility on seqcount_t write
  Documentation: locking: Describe seqlock design and usage
  seqlock: Add RST directives to kernel-doc code samples and notes
  seqlock: Add missing kernel-doc annotations
  seqlock: Extend seqcount API with associated locks
  dma-buf: Use sequence counter with associated wound/wait mutex
  sched: tasks: Use sequence counter with associated spinlock
  netfilter: conntrack: Use sequence counter with associated spinlock
  netfilter: nft_set_rbtree: Use sequence counter with associated rwlock
  xfrm: policy: Use sequence counters with associated lock
  timekeeping: Use sequence counter with associated raw spinlock
  vfs: Use sequence counter with associated spinlock
  raid5: Use sequence counter with associated spinlock
  iocost: Use sequence counter with associated spinlock
  NFSv4: Use sequence counter with associated spinlock
  userfaultfd: Use sequence counter with associated spinlock
  kvm/eventfd: Use sequence counter with associated spinlock
  hrtimer: Use sequence counter with associated raw spinlock

 Documentation/locking/index.rst               |   1 +
 Documentation/locking/seqlock.rst             | 239 +++++
 MAINTAINERS                                   |   2 +-
 block/blk-iocost.c                            |   5 +-
 block/blk.h                                   |   2 +
 drivers/dma-buf/dma-resv.c                    |  15 +-
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  |   2 -
 drivers/md/raid5.c                            |   2 +-
 drivers/md/raid5.h                            |   2 +-
 drivers/net/phy/fixed_phy.c                   |  25 +-
 fs/dcache.c                                   |   2 +-
 fs/fs_struct.c                                |   4 +-
 fs/nfs/nfs4_fs.h                              |   2 +-
 fs/nfs/nfs4state.c                            |   2 +-
 fs/userfaultfd.c                              |   4 +-
 include/linux/dcache.h                        |   2 +-
 include/linux/dma-resv.h                      |   4 +-
 include/linux/fs_struct.h                     |   2 +-
 include/linux/hrtimer.h                       |   2 +-
 include/linux/kvm_irqfd.h                     |   2 +-
 include/linux/lockdep.h                       |   9 +
 include/linux/sched.h                         |   2 +-
 include/linux/seqlock.h                       | 882 +++++++++++++++---
 include/linux/seqlock_types_internal.h        | 187 ++++
 include/linux/u64_stats_sync.h                |  38 +-
 include/net/netfilter/nf_conntrack.h          |   2 +-
 init/init_task.c                              |   3 +-
 kernel/fork.c                                 |   2 +-
 kernel/locking/lockdep.c                      |  15 +
 kernel/time/hrtimer.c                         |  13 +-
 kernel/time/timekeeping.c                     |  19 +-
 lib/Kconfig.debug                             |   1 +
 mm/swap.c                                     |  57 +-
 net/core/dev.c                                |  30 +-
 net/netfilter/nf_conntrack_core.c             |   5 +-
 net/netfilter/nft_set_rbtree.c                |   4 +-
 net/xfrm/xfrm_policy.c                        |  10 +-
 virt/kvm/eventfd.c                            |   2 +-
 38 files changed, 1325 insertions(+), 277 deletions(-)
 create mode 100644 Documentation/locking/seqlock.rst
 create mode 100644 include/linux/seqlock_types_internal.h

base-commit: 2ef96a5bb12be62ef75b5828c0aab838ebb29cb8
--
2.20.1


* [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, David S. Miller,
	Jakub Kicinski, netdev

Sequence counter write paths are critical sections that must never be
preempted; blocking inside them is not allowed, even for
CONFIG_PREEMPTION=n.

Commit 5dbe7c178d3f ("net: fix kernel deadlock with interface rename and
netdev name retrieval.") handled a deadlock, observed with
CONFIG_PREEMPTION=n, where the devnet_rename seqcount read side was
infinitely spinning: it got scheduled after the seqcount write side
blocked inside its own critical section.

To fix that deadlock, among other issues, the commit added a
cond_resched() inside the read side section. While this eventually gets
a non-preemptible kernel unstuck, the seqcount reader still fully
exhausts its time slice just spinning, until TIF_NEED_RESCHED is set.

That fix is nonetheless still broken: if the seqcount reader belongs to
a real-time scheduling policy, it can spin forever and the kernel will
livelock.

Disabling preemption over the seqcount write side critical section will
not work: inside it are a number of GFP_KERNEL allocations and mutex
locking through the drivers/base/ :: device_rename() call chain.

Given all of the above, replace the seqcount with an rwsem.

Fixes: 5dbe7c178d3f (net: fix kernel deadlock with interface rename and netdev name retrieval.)
Fixes: 30e6c9fa93cf (net: devnet_rename_seq should be a seqcount)
Fixes: c91f6df2db49 (sockopt: Change getsockopt() of SO_BINDTODEVICE to return an interface name)
Cc: <stable@vger.kernel.org>
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 net/core/dev.c | 30 ++++++++++++------------------
 1 file changed, 12 insertions(+), 18 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 522288177bbd..e18a4c23df0e 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -79,6 +79,7 @@
 #include <linux/sched.h>
 #include <linux/sched/mm.h>
 #include <linux/mutex.h>
+#include <linux/rwsem.h>
 #include <linux/string.h>
 #include <linux/mm.h>
 #include <linux/socket.h>
@@ -194,7 +195,7 @@ static DEFINE_SPINLOCK(napi_hash_lock);
 static unsigned int napi_gen_id = NR_CPUS;
 static DEFINE_READ_MOSTLY_HASHTABLE(napi_hash, 8);
 
-static seqcount_t devnet_rename_seq;
+static DECLARE_RWSEM(devnet_rename_sem);
 
 static inline void dev_base_seq_inc(struct net *net)
 {
@@ -930,18 +931,13 @@ EXPORT_SYMBOL(dev_get_by_napi_id);
  *	@net: network namespace
  *	@name: a pointer to the buffer where the name will be stored.
  *	@ifindex: the ifindex of the interface to get the name from.
- *
- *	The use of raw_seqcount_begin() and cond_resched() before
- *	retrying is required as we want to give the writers a chance
- *	to complete when CONFIG_PREEMPTION is not set.
  */
 int netdev_get_name(struct net *net, char *name, int ifindex)
 {
 	struct net_device *dev;
-	unsigned int seq;
 
-retry:
-	seq = raw_seqcount_begin(&devnet_rename_seq);
+	down_read(&devnet_rename_sem);
+
 	rcu_read_lock();
 	dev = dev_get_by_index_rcu(net, ifindex);
 	if (!dev) {
@@ -951,10 +947,8 @@ int netdev_get_name(struct net *net, char *name, int ifindex)
 
 	strcpy(name, dev->name);
 	rcu_read_unlock();
-	if (read_seqcount_retry(&devnet_rename_seq, seq)) {
-		cond_resched();
-		goto retry;
-	}
+
+	up_read(&devnet_rename_sem);
 
 	return 0;
 }
@@ -1228,10 +1222,10 @@ int dev_change_name(struct net_device *dev, const char *newname)
 	    likely(!(dev->priv_flags & IFF_LIVE_RENAME_OK)))
 		return -EBUSY;
 
-	write_seqcount_begin(&devnet_rename_seq);
+	down_write(&devnet_rename_sem);
 
 	if (strncmp(newname, dev->name, IFNAMSIZ) == 0) {
-		write_seqcount_end(&devnet_rename_seq);
+		up_write(&devnet_rename_sem);
 		return 0;
 	}
 
@@ -1239,7 +1233,7 @@ int dev_change_name(struct net_device *dev, const char *newname)
 
 	err = dev_get_valid_name(net, dev, newname);
 	if (err < 0) {
-		write_seqcount_end(&devnet_rename_seq);
+		up_write(&devnet_rename_sem);
 		return err;
 	}
 
@@ -1254,11 +1248,11 @@ int dev_change_name(struct net_device *dev, const char *newname)
 	if (ret) {
 		memcpy(dev->name, oldname, IFNAMSIZ);
 		dev->name_assign_type = old_assign_type;
-		write_seqcount_end(&devnet_rename_seq);
+		up_write(&devnet_rename_sem);
 		return ret;
 	}
 
-	write_seqcount_end(&devnet_rename_seq);
+	up_write(&devnet_rename_sem);
 
 	netdev_adjacent_rename_links(dev, oldname);
 
@@ -1279,7 +1273,7 @@ int dev_change_name(struct net_device *dev, const char *newname)
 		/* err >= 0 after dev_alloc_name() or stores the first errno */
 		if (err >= 0) {
 			err = ret;
-			write_seqcount_begin(&devnet_rename_seq);
+			down_write(&devnet_rename_sem);
 			memcpy(dev->name, oldname, IFNAMSIZ);
 			memcpy(oldname, newname, IFNAMSIZ);
 			dev->name_assign_type = old_assign_type;
-- 
2.20.1



* [PATCH v1 02/25] mm/swap: Don't abuse the seqcount latching API
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Andrew Morton,
	Konstantin Khlebnikov, linux-mm

Commit eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all() calls")
implemented an optimization mechanism to exit the to-be-started LRU
drain operation (call it A) if another drain operation *started and
finished* while (A) was blocked on the LRU draining mutex.

This was done through a seqcount latch, which is an abuse of its
semantics:

  1. Seqcount latching should be used for the purpose of switching
     between two storage places with sequence protection to allow
     interruptible, preemptible writer sections. The optimization
     mechanism has absolutely nothing to do with that.

  2. The used raw_write_seqcount_latch() has two smp write memory
     barriers to always ensure one consistent storage place out of the
     two storage places available. This extra smp_wmb() is redundant for
     the optimization use case.

Besides the API abuse, the semantics of a latch sequence counter were
force-fitted into the optimization. What was actually meant is to track
generations of LRU draining operations, where "current lru draining
generation = x" implies that all generations 0 < n <= x are already
*scheduled* for draining.

Remove the conceptually-inappropriate seqcount latch usage and manually
implement the optimization using a counter and SMP memory barriers.

Link: https://lkml.kernel.org/r/CALYGNiPSr-cxV9MX9czaVh6Wz_gzSv3H_8KPvgjBTGbJywUJpA@mail.gmail.com
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 mm/swap.c | 57 +++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 47 insertions(+), 10 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index bf9a79fed62d..d6910eeed43d 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -713,10 +713,20 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
  */
 void lru_add_drain_all(void)
 {
-	static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
-	static DEFINE_MUTEX(lock);
+	/*
+	 * lru_drain_gen - Current generation of pages that could be in vectors
+	 *
+	 * (A) Definition: lru_drain_gen = x implies that all generations
+	 *     0 < n <= x are already scheduled for draining.
+	 *
+	 * This is an optimization for the highly-contended use case where a
+	 * user space workload keeps constantly generating a flow of pages
+	 * for each CPU.
+	 */
+	static unsigned int lru_drain_gen;
 	static struct cpumask has_work;
-	int cpu, seq;
+	static DEFINE_MUTEX(lock);
+	int cpu, this_gen;
 
 	/*
 	 * Make sure nobody triggers this path before mm_percpu_wq is fully
@@ -725,21 +735,48 @@ void lru_add_drain_all(void)
 	if (WARN_ON(!mm_percpu_wq))
 		return;
 
-	seq = raw_read_seqcount_latch(&seqcount);
+	/*
+	 * (B) Cache the LRU draining generation number
+	 *
+	 * smp_rmb() ensures that the counter is loaded before the mutex is
+	 * taken. It pairs with the smp_wmb() inside the mutex critical section
+	 * at (D).
+	 */
+	this_gen = READ_ONCE(lru_drain_gen);
+	smp_rmb();
 
 	mutex_lock(&lock);
 
 	/*
-	 * Piggyback on drain started and finished while we waited for lock:
-	 * all pages pended at the time of our enter were drained from vectors.
+	 * (C) Exit the draining operation if a newer generation, from another
+	 * lru_add_drain_all(), was already scheduled for draining. Check (A).
 	 */
-	if (__read_seqcount_retry(&seqcount, seq))
+	if (unlikely(this_gen != lru_drain_gen))
 		goto done;
 
-	raw_write_seqcount_latch(&seqcount);
+	/*
+	 * (D) Increment generation number
+	 *
+	 * Pairs with READ_ONCE() and smp_rmb() at (B), outside of the critical
+	 * section.
+	 *
+	 * This pairing must be done here, before the for_each_online_cpu loop
+	 * below which drains the page vectors.
+	 *
+	 * Let x, y, and z represent some system CPU numbers, where x < y < z.
+	 * Assume CPU #z is in the middle of the for_each_online_cpu loop
+	 * below and has already reached CPU #y's per-cpu data. CPU #x comes
+	 * along, adds some pages to its per-cpu vectors, then calls
+	 * lru_add_drain_all().
+	 *
+	 * If the paired smp_wmb() below is done at any later step, e.g. after
+	 * the loop, CPU #x will just exit at (C) and miss flushing out all of
+	 * its added pages.
+	 */
+	WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
+	smp_wmb();
 
 	cpumask_clear(&has_work);
-
 	for_each_online_cpu(cpu) {
 		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
 
@@ -766,7 +803,7 @@ void lru_add_drain_all(void)
 {
 	lru_add_drain();
 }
-#endif
+#endif /* CONFIG_SMP */
 
 /**
  * release_pages - batched put_page()
-- 
2.20.1



* [PATCH v1 03/25] net: phy: fixed_phy: Remove unused seqcount
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Andrew Lunn,
	Florian Fainelli, Heiner Kallweit, Russell King, David S. Miller,
	netdev

Commit bf7afb29d545 ("phy: improve safety of fixed-phy MII register
reading") protected the fixed PHY status with a sequence counter.

Two years later, commit d2b977939b18 ("net: phy: fixed-phy: remove
fixed_phy_update_state()") removed the sequence counter's write side
critical section -- neutralizing its read side retry loop.

Remove the unused seqcount.

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/net/phy/fixed_phy.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/drivers/net/phy/fixed_phy.c b/drivers/net/phy/fixed_phy.c
index 4a3d34f40cb9..f55365c9d1f7 100644
--- a/drivers/net/phy/fixed_phy.c
+++ b/drivers/net/phy/fixed_phy.c
@@ -34,7 +34,6 @@ struct fixed_mdio_bus {
 struct fixed_phy {
 	int addr;
 	struct phy_device *phydev;
-	seqcount_t seqcount;
 	struct fixed_phy_status status;
 	bool no_carrier;
 	int (*link_update)(struct net_device *, struct fixed_phy_status *);
@@ -80,19 +79,17 @@ static int fixed_mdio_read(struct mii_bus *bus, int phy_addr, int reg_num)
 	list_for_each_entry(fp, &fmb->phys, node) {
 		if (fp->addr == phy_addr) {
 			struct fixed_phy_status state;
-			int s;
 
-			do {
-				s = read_seqcount_begin(&fp->seqcount);
-				fp->status.link = !fp->no_carrier;
-				/* Issue callback if user registered it. */
-				if (fp->link_update)
-					fp->link_update(fp->phydev->attached_dev,
-							&fp->status);
-				/* Check the GPIO for change in status */
-				fixed_phy_update(fp);
-				state = fp->status;
-			} while (read_seqcount_retry(&fp->seqcount, s));
+			fp->status.link = !fp->no_carrier;
+
+			/* Issue callback if user registered it. */
+			if (fp->link_update)
+				fp->link_update(fp->phydev->attached_dev,
+						&fp->status);
+
+			/* Check the GPIO for change in status */
+			fixed_phy_update(fp);
+			state = fp->status;
 
 			return swphy_read_reg(reg_num, &state);
 		}
@@ -150,8 +147,6 @@ static int fixed_phy_add_gpiod(unsigned int irq, int phy_addr,
 	if (!fp)
 		return -ENOMEM;
 
-	seqcount_init(&fp->seqcount);
-
 	if (irq != PHY_POLL)
 		fmb->mii_bus->irq[phy_addr] = irq;
 
-- 
2.20.1



* [PATCH v1 04/25] block: nr_sects_write(): Disable preemption on seqcount write
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Jens Axboe, Phillip Susi,
	Vivek Goyal, linux-block

For optimized block readers not holding a mutex, the "number of sectors"
64-bit value is protected from tearing on 32-bit architectures by a
sequence counter.

Disable preemption before entering that sequence counter's write side
critical section. Otherwise, the read side can preempt the write side
section and spin for the entire scheduler tick. If the reader belongs to
a real-time scheduling class, it can spin forever and the kernel will
livelock.
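
For reference, the corresponding 32-bit SMP read side in block/blk.h is
a plain seqcount retry loop (abbreviated sketch):

  static inline sector_t part_nr_sects_read(struct hd_struct *part)
  {
  	sector_t nr_sects;
  	unsigned seq;

  	do {
  		seq = read_seqcount_begin(&part->nr_sects_seq);
  		nr_sects = part->nr_sects;
  	} while (read_seqcount_retry(&part->nr_sects_seq, seq));

  	return nr_sects;
  }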

Fixes: c83f6bf98dc1 ("block: add partition resize function to blkpg ioctl")
Cc: <stable@vger.kernel.org>
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 block/blk.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/block/blk.h b/block/blk.h
index 0a94ec68af32..151f86932547 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -470,9 +470,11 @@ static inline sector_t part_nr_sects_read(struct hd_struct *part)
 static inline void part_nr_sects_write(struct hd_struct *part, sector_t size)
 {
 #if BITS_PER_LONG==32 && defined(CONFIG_SMP)
+	preempt_disable();
 	write_seqcount_begin(&part->nr_sects_seq);
 	part->nr_sects = size;
 	write_seqcount_end(&part->nr_sects_seq);
+	preempt_enable();
 #elif BITS_PER_LONG==32 && defined(CONFIG_PREEMPTION)
 	preempt_disable();
 	part->nr_sects = size;
-- 
2.20.1



* [PATCH v1 05/25] u64_stats: Document writer non-preemptibility requirement
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, David S. Miller,
	Jakub Kicinski, netdev

The u64_stats mechanism uses sequence counters to protect against 64-bit
values tearing on 32-bit architectures. Updating such statistics is a
sequence counter write side critical section.

Preemption must be disabled before entering this seqcount write critical
section. Otherwise the seqcount read side can preempt the write side
section and spin for the entire scheduler tick. If that reader belongs
to a real-time scheduling class, it can spin forever and the kernel will
livelock.
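
A sketch of the documented contract (illustrative only, not code from
this patch):

  struct foo_stats {
  	u64			packets;
  	struct u64_stats_sync	syncp;
  };

  /* Writer: must not be preemptible across the update */
  static void foo_stats_inc(struct foo_stats *stats)
  {
  	preempt_disable();
  	u64_stats_update_begin(&stats->syncp);
  	stats->packets++;
  	u64_stats_update_end(&stats->syncp);
  	preempt_enable();
  }

  /* Reader: a lockless retry loop; it may sleep or be preempted */
  static u64 foo_stats_read(struct foo_stats *stats)
  {
  	unsigned int start;
  	u64 packets;

  	do {
  		start = u64_stats_fetch_begin(&stats->syncp);
  		packets = stats->packets;
  	} while (u64_stats_fetch_retry(&stats->syncp, start));

  	return packets;
  }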

Document this statistics update side non-preemptibility requirement.

Reword the u64_stats header file top comment to always mention "Reader"
or "Writer" at the start of each bullet point, making it easier to
follow which side each point is actually for.

Fix the statement "whole thing is a NOOP on 64bit arches or UP kernels".
For 32-bit UP kernels, preemption is always disabled for the statistics
read side section.

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/u64_stats_sync.h | 38 ++++++++++++++++++----------------
 1 file changed, 20 insertions(+), 18 deletions(-)

diff --git a/include/linux/u64_stats_sync.h b/include/linux/u64_stats_sync.h
index 9de5c10293f5..30358ce3d8fe 100644
--- a/include/linux/u64_stats_sync.h
+++ b/include/linux/u64_stats_sync.h
@@ -7,29 +7,31 @@
  * we provide a synchronization point, that is a noop on 64bit or UP kernels.
  *
  * Key points :
- * 1) Use a seqcount on SMP 32bits, with low overhead.
- * 2) Whole thing is a noop on 64bit arches or UP kernels.
- * 3) Write side must ensure mutual exclusion or one seqcount update could
+ *
+ * 1) Use a seqcount on 32-bit SMP, only disable preemption for 32-bit UP.
+ *
+ * 2) The whole thing is a no-op on 64-bit architectures.
+ *
+ * 3) Write side must ensure mutual exclusion, or one seqcount update could
  *    be lost, thus blocking readers forever.
- *    If this synchronization point is not a mutex, but a spinlock or
- *    spinlock_bh() or disable_bh() :
- * 3.1) Write side should not sleep.
- * 3.2) Write side should not allow preemption.
- * 3.3) If applicable, interrupts should be disabled.
  *
- * 4) If reader fetches several counters, there is no guarantee the whole values
- *    are consistent (remember point 1) : this is a noop on 64bit arches anyway)
+ * 4) Write side must disable preemption, or a seqcount reader can preempt the
+ *    writer and also spin forever.
  *
- * 5) readers are allowed to sleep or be preempted/interrupted : They perform
- *    pure reads. But if they have to fetch many values, it's better to not allow
- *    preemptions/interruptions to avoid many retries.
+ * 5) Write side must use the _irqsave() variant if other writers, or a reader,
+ *    can be invoked from an IRQ context.
  *
- * 6) If counter might be written by an interrupt, readers should block interrupts.
- *    (On UP, there is no seqcount_t protection, a reader allowing interrupts could
- *     read partial values)
+ * 6) If reader fetches several counters, there is no guarantee the whole values
+ *    are consistent w.r.t. each other (remember point #2: seqcounts are not
+ *    used for 64bit architectures).
  *
- * 7) For irq and softirq uses, readers can use u64_stats_fetch_begin_irq() and
- *    u64_stats_fetch_retry_irq() helpers
+ * 7) Readers are allowed to sleep or be preempted/interrupted: they perform
+ *    pure reads.
+ *
+ * 8) Readers must use both u64_stats_fetch_{begin,retry}_irq() if the stats
+ *    might be updated from a hardirq or softirq context (remember point #1:
+ *    seqcounts are not used for UP kernels). 32-bit UP stat readers could read
+ *    corrupted 64-bit values otherwise.
  *
  * Usage :
  *
-- 
2.20.1



* [PATCH v1 06/25] dma-buf: Remove custom seqcount lockdep class key
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Sumit Semwal,
	David Airlie, Daniel Vetter, linux-media, dri-devel

Commit 3c3b177a9369 ("reservation: add support for read-only access
using rcu") introduced a sequence counter to manage updates to
reservations. Back then, the reservation object initializer
reservation_object_init() was always inlined.

Having the sequence counter initialization inlined meant that each of
the call sites would have a different lockdep class key, which would've
broken lockdep's deadlock detection. The aforementioned commit thus
introduced, and exported, a custom seqcount lockdep class key and name.

Commit 8735f16803f00 ("dma-buf: cleanup reservation_object_init...")
then transformed the reservation object initializer into a normal,
non-inlined C function. seqcount_init(), which automatically defines the
seqcount lockdep class key and must therefore be invoked from
non-inlined code, can now be safely used.
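
For context, this is how seqcount_init() gets a unique lock class key
per initialization site: the lockdep-enabled variant expands a static
key at its call site (see include/linux/seqlock.h):

  # define seqcount_init(s)				\
  	do {						\
  		static struct lock_class_key __key;	\
  		__seqcount_init((s), #s, &__key);	\
  	} while (0)

Expanded inline in every driver, each expansion would get its own
__key; expanded once inside the non-inlined dma_resv_init(), all
reservation objects share one lock class, as intended.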

Remove the seqcount custom lockdep class key, name, and export. Use
seqcount_init() inside the dma reservation object initializer.

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/dma-buf/dma-resv.c | 9 +--------
 include/linux/dma-resv.h   | 2 --
 2 files changed, 1 insertion(+), 10 deletions(-)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 4264e64788c4..590ce7ad60a0 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -50,12 +50,6 @@
 DEFINE_WD_CLASS(reservation_ww_class);
 EXPORT_SYMBOL(reservation_ww_class);
 
-struct lock_class_key reservation_seqcount_class;
-EXPORT_SYMBOL(reservation_seqcount_class);
-
-const char reservation_seqcount_string[] = "reservation_seqcount";
-EXPORT_SYMBOL(reservation_seqcount_string);
-
 /**
  * dma_resv_list_alloc - allocate fence list
  * @shared_max: number of fences we need space for
@@ -134,9 +128,8 @@ subsys_initcall(dma_resv_lockdep);
 void dma_resv_init(struct dma_resv *obj)
 {
 	ww_mutex_init(&obj->lock, &reservation_ww_class);
+	seqcount_init(&obj->seq);
 
-	__seqcount_init(&obj->seq, reservation_seqcount_string,
-			&reservation_seqcount_class);
 	RCU_INIT_POINTER(obj->fence, NULL);
 	RCU_INIT_POINTER(obj->fence_excl, NULL);
 }
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index ee50d10f052b..a6538ae7d93f 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -46,8 +46,6 @@
 #include <linux/rcupdate.h>
 
 extern struct ww_class reservation_ww_class;
-extern struct lock_class_key reservation_seqcount_class;
-extern const char reservation_seqcount_string[];
 
 /**
  * struct dma_resv_list - a list of shared fences
-- 
2.20.1



* [PATCH v1 07/25] lockdep: Add preemption disabled assertion API
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish

Asserting that preemption is disabled is a critical sanity check.
Developers are usually reluctant to add such a check in a fastpath, as
reading the preemption count can be costly.

Extend the lockdep API with a preemption disabled assertion. If lockdep
is disabled, or if the underlying architecture does not support kernel
preemption, this assert has no runtime overhead.
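
A hypothetical call site sketch (the later seqlock.h patch adds the
real one to the seqcount write path):

  static void foo_update(void)
  {
  	/* No-op without lockdep; WARNs once if preemption is enabled */
  	lockdep_assert_preemption_disabled();

  	/* ... non-preemptible fast path work ... */
  }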

Since the lockdep assertion references current, sched.h's task_struct,
define it in lockdep.c instead of lockdep.h. This breaks a potential
circular header dependency chain for call sites defined inline in other
header files that sched.h itself already includes and needs.

Mark the exported assertion symbol with NOKPROBE_SYMBOL. Lockdep
functions can be involved in breakpoint handling, and probing them could
cause breakpoint recursion.

References: f54bb2ec02c8 ("locking/lockdep: Add IRQs disabled/enabled assertion APIs: ...")
References: 2f43c6022d84 ("kprobes: Prohibit probing on lockdep functions")
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 include/linux/lockdep.h  |  9 +++++++++
 kernel/locking/lockdep.c | 15 +++++++++++++++
 lib/Kconfig.debug        |  1 +
 3 files changed, 25 insertions(+)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 206774ac6946..54c929ea5b98 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -702,6 +702,14 @@ do {									\
 			  "Not in hardirq as expected\n");		\
 	} while (0)
 
+/*
+ * Don't define this assertion here to avoid a call-site's header file
+ * dependency on sched.h task_struct current. This is needed by call
+ * sites that are inline defined at header files already included by
+ * sched.h.
+ */
+void lockdep_assert_preemption_disabled(void);
+
 #else
 # define might_lock(lock) do { } while (0)
 # define might_lock_read(lock) do { } while (0)
@@ -709,6 +717,7 @@ do {									\
 # define lockdep_assert_irqs_enabled() do { } while (0)
 # define lockdep_assert_irqs_disabled() do { } while (0)
 # define lockdep_assert_in_irq() do { } while (0)
+# define lockdep_assert_preemption_disabled() do { } while (0)
 #endif
 
 #ifdef CONFIG_PROVE_RAW_LOCK_NESTING
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ac10db66cc63..4dae65bc65c2 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -5857,3 +5857,18 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
 	dump_stack();
 }
 EXPORT_SYMBOL_GPL(lockdep_rcu_suspicious);
+
+#ifdef CONFIG_PROVE_LOCKING
+
+void lockdep_assert_preemption_disabled(void)
+{
+	WARN_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)	&&
+		  debug_locks				&&
+		  !current->lockdep_recursion		&&
+		  (preempt_count() == 0 && current->hardirqs_enabled),
+		  "preemption not disabled as expected\n");
+}
+EXPORT_SYMBOL_GPL(lockdep_assert_preemption_disabled);
+NOKPROBE_SYMBOL(lockdep_assert_preemption_disabled);
+
+#endif
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 21d9c5f6e7ec..34d9d8896003 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1062,6 +1062,7 @@ config PROVE_LOCKING
 	select DEBUG_RWSEMS
 	select DEBUG_WW_MUTEX_SLOWPATH
 	select DEBUG_LOCK_ALLOC
+	select PREEMPT_COUNT if !ARCH_NO_PREEMPT
 	select TRACE_IRQFLAGS
 	default n
 	help
-- 
2.20.1



* [PATCH v1 08/25] seqlock: lockdep assert non-preemptibility on seqcount_t write
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish

Preemption must be disabled before entering a sequence count write side
critical section. Otherwise the seqcount read side can preempt the write
side section and spin for the entire scheduler tick. If that reader
belongs to a real-time scheduling class, it can spin forever and the
kernel will livelock.

Assert through lockdep that preemption is disabled for seqcount writers.

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 include/linux/seqlock.h | 30 ++++++++++++++++++++++++------
 1 file changed, 24 insertions(+), 6 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 0491d963d47e..d35be7709403 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -369,14 +369,32 @@ static inline void raw_write_seqcount_latch(seqcount_t *s)
 
 /*
  * Sequence counter only version assumes that callers are using their
- * own mutexing.
+ * own locking and preemption is disabled.
  */
-static inline void write_seqcount_begin_nested(seqcount_t *s, int subclass)
+
+static inline void __write_seqcount_begin_nested(seqcount_t *s, int subclass)
 {
 	raw_write_seqcount_begin(s);
 	seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_);
 }
 
+static inline void write_seqcount_begin_nested(seqcount_t *s, int subclass)
+{
+	lockdep_assert_preemption_disabled();
+	__write_seqcount_begin_nested(s, subclass);
+}
+
+/*
+ * write_seqcount_begin() without lockdep non-preemptibility checks.
+ *
+ * Use for internal seqlock.h code where it's known that preemption
+ * is already disabled. For example, seqlock_t write functions.
+ */
+static inline void __write_seqcount_begin(seqcount_t *s)
+{
+	__write_seqcount_begin_nested(s, 0);
+}
+
 static inline void write_seqcount_begin(seqcount_t *s)
 {
 	write_seqcount_begin_nested(s, 0);
@@ -446,7 +464,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
 static inline void write_seqlock(seqlock_t *sl)
 {
 	spin_lock(&sl->lock);
-	write_seqcount_begin(&sl->seqcount);
+	__write_seqcount_begin(&sl->seqcount);
 }
 
 static inline void write_sequnlock(seqlock_t *sl)
@@ -458,7 +476,7 @@ static inline void write_sequnlock(seqlock_t *sl)
 static inline void write_seqlock_bh(seqlock_t *sl)
 {
 	spin_lock_bh(&sl->lock);
-	write_seqcount_begin(&sl->seqcount);
+	__write_seqcount_begin(&sl->seqcount);
 }
 
 static inline void write_sequnlock_bh(seqlock_t *sl)
@@ -470,7 +488,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl)
 static inline void write_seqlock_irq(seqlock_t *sl)
 {
 	spin_lock_irq(&sl->lock);
-	write_seqcount_begin(&sl->seqcount);
+	__write_seqcount_begin(&sl->seqcount);
 }
 
 static inline void write_sequnlock_irq(seqlock_t *sl)
@@ -484,7 +502,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
 	unsigned long flags;
 
 	spin_lock_irqsave(&sl->lock, flags);
-	write_seqcount_begin(&sl->seqcount);
+	__write_seqcount_begin(&sl->seqcount);
 	return flags;
 }
 
-- 
2.20.1



* [PATCH v1 09/25] Documentation: locking: Describe seqlock design and usage
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Jonathan Corbet,
	linux-doc

Proper documentation for the design and usage of sequence counters and
sequential locks does not exist. Complete the seqlock.h documentation as
follows:

  - Divide all documentation on a seqcount_t vs. seqlock_t basis. The
    description for both mechanisms was intermingled, which is incorrect
    since the usage constraints for each type are vastly different.

  - Add an introductory paragraph describing the internal design of, and
    rationale for, sequence counters.

  - Document seqcount_t writer non-preemptibility requirement, which was
    not previously documented anywhere, and provide a clear rationale.

  - Provide template code for seqcount_t and seqlock_t initialization
    and reader/writer critical sections.

  - Recommend using seqlock_t by default. It implicitly handles the
    serialization and non-preemptibility requirements of writers.

At seqlock.h:

  - Remove references to brlocks as they've long been removed from the
    kernel.

  - Remove references to gcc-3.x since the kernel's minimum supported
    gcc version is 4.6.

  - Remove the severely lacking top comment and reference the newly
    introduced Documentation/locking/seqlock.rst file instead.

References: 0f6ed63b1707 ("no need to keep brlock macros anymore...")
References: cafa0010cd51 ("Raise the minimum required gcc version to 4.6")
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 Documentation/locking/index.rst   |   1 +
 Documentation/locking/seqlock.rst | 181 ++++++++++++++++++++++++++++++
 include/linux/seqlock.h           |  73 +++++-------
 3 files changed, 213 insertions(+), 42 deletions(-)
 create mode 100644 Documentation/locking/seqlock.rst

diff --git a/Documentation/locking/index.rst b/Documentation/locking/index.rst
index 5d6800a723dc..aad15fc81ccd 100644
--- a/Documentation/locking/index.rst
+++ b/Documentation/locking/index.rst
@@ -14,6 +14,7 @@ locking
     mutex-design
     rt-mutex-design
     rt-mutex
+    seqlock
     spinlocks
     ww-mutex-design
 
diff --git a/Documentation/locking/seqlock.rst b/Documentation/locking/seqlock.rst
new file mode 100644
index 000000000000..2242ae00e7bf
--- /dev/null
+++ b/Documentation/locking/seqlock.rst
@@ -0,0 +1,181 @@
+======================================
+Sequence counters and sequential locks
+======================================
+
+Introduction
+============
+
+Sequence counters are a reader-writer consistency mechanism with
+lockless readers (read-only retry loops), and no writer starvation. They
+are used for data that's rarely written to (e.g. system time), where the
+reader wants a consistent set of information and is willing to retry if
+that information changes.
+
+A data set is consistent when the sequence count at the beginning of the
+read side critical section is even and the same sequence count value is
+read again at the end of the critical section. The data in the set must
+be copied out inside the read side critical section. If the sequence
+count has changed between the start and the end of the critical section,
+the reader must retry.
+
+Writers increment the sequence count at the start and the end of their
+critical section. After starting the critical section the sequence count
+is odd and indicates to the readers that an update is in progress. At
+the end of the write side critical section the sequence count becomes
+even again which lets readers make progress.
+
+A sequence counter write side critical section must never be preempted
+or interrupted by read side sections. Otherwise the reader will spin for
+the entire scheduler tick due to the odd sequence count value and the
+interrupted writer. If that reader belongs to a real-time scheduling
+class, it can spin forever and the kernel will livelock.
+
+.. _seqcount_t:
+
+Sequence counters (:c:type:`seqcount_t`)
+========================================
+
+This is the raw counting mechanism, which does not protect against
+multiple writers.  Write side critical sections must thus be serialized
+by an external lock.
+
+If the write serialization primitive is not implicitly disabling
+preemption, preemption must be explicitly disabled before entering the
+write side section. If the sequence counter read section can be invoked
+from hardirq or softirq contexts, interrupts or bottom halves must be
+respectively disabled before entering the write side section.
+
+If it's desired to automatically handle the sequence counter
+requirements of writer serialization and non-preemptibility, use a
+:ref:`sequential lock <seqlock_t>` instead.
+
+Initialization:
+
+.. code-block:: c
+
+	/* dynamic */
+	seqcount_t foo_seqcount;
+	seqcount_init(&foo_seqcount);
+
+	/* static */
+	static seqcount_t foo_seqcount = SEQCNT_ZERO(foo_seqcount);
+
+	/* C99 struct init */
+	struct {
+		.seq   = SEQCNT_ZERO(foo.seq),
+	} foo;
+
+Write path:
+
+.. code-block:: c
+
+	/* Serialized context with disabled preemption */
+
+	write_seqcount_begin(&foo_seqcount);
+
+	/* ... [[write-side critical section]] ... */
+
+	write_seqcount_end(&foo_seqcount);
+
+Read path:
+
+.. code-block:: c
+
+	do {
+		seq = read_seqcount_begin(&foo_seqcount);
+
+		/* ... [[read-side critical section]] ... */
+
+	} while (read_seqcount_retry(&foo_seqcount, seq));
+
+.. _seqlock_t:
+
+Sequential locks (:c:type:`seqlock_t`)
+======================================
+
+This contains the :ref:`sequence counting mechanism <seqcount_t>`
+discussed earlier, plus an embedded spinlock for writer serialization
+and non-preemptibility.
+
+If the read side section can be invoked from hardirq or softirq context,
+use the write side function variants which respectively disable
+interrupts or bottom halves.
+
+Initialization:
+
+.. code-block:: c
+
+	/* dynamic */
+	seqlock_t foo_seqlock;
+	seqlock_init(&foo_seqlock);
+
+	/* static */
+	static DEFINE_SEQLOCK(foo_seqlock);
+
+	/* C99 struct init */
+	struct {
+		.seql   = __SEQLOCK_UNLOCKED(foo.seql)
+	} foo;
+
+Write path:
+
+.. code-block:: c
+
+	write_seqlock(&foo_seqlock);
+
+	/* ... [[write-side critical section]] ... */
+
+	write_sequnlock(&foo_seqlock);
+
+Read path, three categories:
+
+1. Normal sequence readers, which never block a writer but must retry
+   if a writer is in progress, as detected by a change in the sequence
+   number. Writers do not wait for a sequence reader.
+
+   .. code-block:: c
+
+	do {
+		seq = read_seqbegin(&foo_seqlock);
+
+		/* ... [[read-side critical section]] ... */
+
+	} while (read_seqretry(&foo_seqlock, seq));
+
+2. Locking readers which will wait if a writer or another locking reader
+   is in progress. A locking reader in progress will also block a writer
+   from entering its critical section. This read lock is
+   exclusive. Unlike rwlock_t, only one locking reader can acquire it.
+
+   .. code-block:: c
+
+	read_seqlock_excl(&foo_seqlock);
+
+	/* ... [[read-side critical section]] ... */
+
+	read_sequnlock_excl(&foo_seqlock);
+
+3. Conditional lockless reader (as in 1), or locking reader (as in 2),
+   according to a passed marker. This is used to avoid lockless readers
+   starvation (too many retry loops) in case of a sharp spike in write
+   activity. First, a lockless read is tried (even marker passed). If
+   that trial fails (odd sequence counter is returned, which is used as
+   the next iteration marker), the lockless read is transformed to a
+   full locking read and no retry loop is necessary.
+
+   .. code-block:: c
+
+	/* marker; even initialization */
+	int seq = 0;
+	do {
+		read_seqbegin_or_lock(&foo_seqlock, &seq);
+
+		/* ... [[read-side critical section]] ... */
+
+	} while (need_seqretry(&foo_seqlock, seq));
+	done_seqretry(&foo_seqlock, seq);
+
+API documentation
+=================
+
+.. kernel-doc:: include/linux/seqlock.h
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index d35be7709403..2a4af746b1da 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -1,36 +1,15 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #ifndef __LINUX_SEQLOCK_H
 #define __LINUX_SEQLOCK_H
+
 /*
- * Reader/writer consistent mechanism without starving writers. This type of
- * lock for data where the reader wants a consistent set of information
- * and is willing to retry if the information changes. There are two types
- * of readers:
- * 1. Sequence readers which never block a writer but they may have to retry
- *    if a writer is in progress by detecting change in sequence number.
- *    Writers do not wait for a sequence reader.
- * 2. Locking readers which will wait if a writer or another locking reader
- *    is in progress. A locking reader in progress will also block a writer
- *    from going forward. Unlike the regular rwlock, the read lock here is
- *    exclusive so that only one locking reader can get it.
+ * seqcount_t / seqlock_t - a reader-writer consistency mechanism with
+ * lockless readers (read-only retry loops), and no writer starvation.
  *
- * This is not as cache friendly as brlock. Also, this may not work well
- * for data that contains pointers, because any writer could
- * invalidate a pointer that a reader was following.
+ * See Documentation/locking/seqlock.rst for full description.
  *
- * Expected non-blocking reader usage:
- * 	do {
- *	    seq = read_seqbegin(&foo);
- * 	...
- *      } while (read_seqretry(&foo, seq));
- *
- *
- * On non-SMP the spin locks disappear but the writer still needs
- * to increment the sequence variables because an interrupt routine could
- * change the state of the data.
- *
- * Based on x86_64 vsyscall gettimeofday 
- * by Keith Owens and Andrea Arcangeli
+ * Copyrights:
+ * - Based on x86_64 vsyscall gettimeofday: Keith Owens, Andrea Arcangeli
  */
 
 #include <linux/spinlock.h>
@@ -40,11 +19,23 @@
 #include <asm/processor.h>
 
 /*
- * Version using sequence counter only.
- * This can be used when code has its own mutex protecting the
- * updating starting before the write_seqcountbeqin() and ending
- * after the write_seqcount_end().
+ * Sequence counters (seqcount_t)
+ *
+ * The raw counting mechanism without any writer protection. Write side
+ * critical sections must be serialized and readers on the same CPU
+ * (e.g. through preemption or interrupts) must be excluded.
+ *
+ * If the write serialization mechanism is one of the common kernel
+ * locking primitives, use a sequence counter with associated lock
+ * (seqcount_LOCKTYPE_t) instead.
+ *
+ * If it's desired to automatically handle the sequence counter writer
+ * serialization and non-preemptibility requirements, use a sequential
+ * lock (seqlock_t) instead.
+ *
+ * See Documentation/locking/seqlock.rst
  */
+
 typedef struct seqcount {
 	unsigned sequence;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
@@ -221,8 +212,6 @@ static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
 	return __read_seqcount_retry(s, start);
 }
 
-
-
 static inline void raw_write_seqcount_begin(seqcount_t *s)
 {
 	s->sequence++;
@@ -367,11 +356,6 @@ static inline void raw_write_seqcount_latch(seqcount_t *s)
        smp_wmb();      /* increment "sequence" before following stores */
 }
 
-/*
- * Sequence counter only version assumes that callers are using their
- * own locking and preemption is disabled.
- */
-
 static inline void __write_seqcount_begin_nested(seqcount_t *s, int subclass)
 {
 	raw_write_seqcount_begin(s);
@@ -419,15 +403,20 @@ static inline void write_seqcount_invalidate(seqcount_t *s)
 	s->sequence+=2;
 }
 
+/*
+ * Sequential locks (seqlock_t)
+ *
+ * Sequence counters with an embedded spinlock for writer serialization
+ * and non-preemptibility.
+ *
+ * See Documentation/locking/seqlock.rst
+ */
+
 typedef struct {
 	struct seqcount seqcount;
 	spinlock_t lock;
 } seqlock_t;
 
-/*
- * These macros triggered gcc-3.x compile-time problems.  We think these are
- * OK now.  Be cautious.
- */
 #define __SEQLOCK_UNLOCKED(lockname)			\
 	{						\
 		.seqcount = SEQCNT_ZERO(lockname),	\
-- 
2.20.1



* [PATCH v1 10/25] seqlock: Add RST directives to kernel-doc code samples and notes
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Jonathan Corbet,
	linux-doc

Mark all C code samples inside seqlock.h kernel-doc text with the RST
'code-block: c' directive. Sphinx won't properly format the example code
and will produce noisy text indentation warnings otherwise.

Mark all kernel-doc "NOTE" sections with the RST 'attention' directive.
Otherwise Sphinx produces "duplicate section name 'NOTE'" warnings.

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/seqlock.h | 82 +++++++++++++++++++++++------------------
 1 file changed, 47 insertions(+), 35 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 2a4af746b1da..dfec0c9c19c4 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -232,6 +232,8 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
  * usual consistency guarantee. It is one wmb cheaper, because we can
  * collapse the two back-to-back wmb()s.
  *
+ * .. code-block:: c
+ *
  *      seqcount_t seq;
  *      bool X = true, Y = false;
  *
@@ -292,62 +294,72 @@ static inline int raw_read_seqcount_latch(seqcount_t *s)
  *
  * The basic form is a data structure like:
  *
- * struct latch_struct {
- *	seqcount_t		seq;
- *	struct data_struct	data[2];
- * };
+ * .. code-block:: c
+ *
+ *	struct latch_struct {
+ *		seqcount_t		seq;
+ *		struct data_struct	data[2];
+ *	};
  *
  * Where a modification, which is assumed to be externally serialized, does the
  * following:
  *
- * void latch_modify(struct latch_struct *latch, ...)
- * {
- *	smp_wmb();	<- Ensure that the last data[1] update is visible
- *	latch->seq++;
- *	smp_wmb();	<- Ensure that the seqcount update is visible
+ * .. code-block:: c
  *
- *	modify(latch->data[0], ...);
+ *	void latch_modify(struct latch_struct *latch, ...)
+ *	{
+ *		smp_wmb();	// Ensure that the last data[1] update is visible
+ *		latch->seq++;
+ *		smp_wmb();	// Ensure that the seqcount update is visible
  *
- *	smp_wmb();	<- Ensure that the data[0] update is visible
- *	latch->seq++;
- *	smp_wmb();	<- Ensure that the seqcount update is visible
+ *		modify(latch->data[0], ...);
  *
- *	modify(latch->data[1], ...);
- * }
+ *		smp_wmb();	// Ensure that the data[0] update is visible
+ *		latch->seq++;
+ *		smp_wmb();	// Ensure that the seqcount update is visible
+ *
+ *		modify(latch->data[1], ...);
+ *	}
  *
  * The query will have a form like:
  *
- * struct entry *latch_query(struct latch_struct *latch, ...)
- * {
- *	struct entry *entry;
- *	unsigned seq, idx;
+ * .. code-block:: c
  *
- *	do {
- *		seq = raw_read_seqcount_latch(&latch->seq);
+ *	struct entry *latch_query(struct latch_struct *latch, ...)
+ *	{
+ *		struct entry *entry;
+ *		unsigned seq, idx;
  *
- *		idx = seq & 0x01;
- *		entry = data_query(latch->data[idx], ...);
+ *		do {
+ *			seq = raw_read_seqcount_latch(&latch->seq);
  *
- *		smp_rmb();
- *	} while (seq != latch->seq);
+ *			idx = seq & 0x01;
+ *			entry = data_query(latch->data[idx], ...);
  *
- *	return entry;
- * }
+ *			smp_rmb();
+ *		} while (seq != latch->seq);
+ *
+ *		return entry;
+ *	}
  *
  * So during the modification, queries are first redirected to data[1]. Then we
  * modify data[0]. When that is complete, we redirect queries back to data[0]
  * and we can modify data[1].
  *
- * NOTE: The non-requirement for atomic modifications does _NOT_ include
- *       the publishing of new entries in the case where data is a dynamic
- *       data structure.
+ * .. attention::
  *
- *       An iteration might start in data[0] and get suspended long enough
- *       to miss an entire modification sequence, once it resumes it might
- *       observe the new entry.
+ *	The non-requirement for atomic modifications does _NOT_ include
+ *	the publishing of new entries in the case where data is a dynamic
+ *	data structure.
  *
- * NOTE: When data is a dynamic data structure; one should use regular RCU
- *       patterns to manage the lifetimes of the objects within.
+ *	An iteration might start in data[0] and get suspended long enough
+ *	to miss an entire modification sequence; once it resumes it might
+ *	observe the new entry.
+ *
+ * .. attention::
+ *
+ *	When data is a dynamic data structure, one should use regular RCU
+ *	patterns to manage the lifetimes of the objects within.
  */
 static inline void raw_write_seqcount_latch(seqcount_t *s)
 {
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 11/25] seqlock: Add missing kernel-doc annotations
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (9 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 10/25] seqlock: Add RST directives to kernel-doc code samples and notes Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 12/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (13 subsequent siblings)
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Jonathan Corbet,
	linux-doc

Only a small number of the exported seqlock.h functions are kernel-doc
annotated.

Since seqlock.h is now included by the kernel's RST documentation, add
kernel-doc annotations for all of the remaining functions.
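
The added annotations follow the conventions already used in the file.
As a minimal sketch of the format, for a hypothetical foo_read_begin()
shown only to illustrate the style:

  /**
   * foo_read_begin() - begin a foo read critical section
   * @s: Pointer to &typedef seqcount_t
   *
   * Return: count to be passed to foo_read_retry()
   */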

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 include/linux/seqlock.h | 414 +++++++++++++++++++++++++++++++++++-----
 1 file changed, 361 insertions(+), 53 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index dfec0c9c19c4..dd55555ff607 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -57,6 +57,10 @@ static inline void __seqcount_init(seqcount_t *s, const char *name,
 # define SEQCOUNT_DEP_MAP_INIT(lockname) \
 		.dep_map = { .name = #lockname } \
 
+/**
+ * seqcount_init() - runtime initializer for seqcount_t
+ * @s: Pointer to the &typedef seqcount_t instance
+ */
 # define seqcount_init(s)				\
 	do {						\
 		static struct lock_class_key __key;	\
@@ -80,13 +84,17 @@ static inline void seqcount_lockdep_reader_access(const seqcount_t *s)
 # define seqcount_lockdep_reader_access(x)
 #endif
 
-#define SEQCNT_ZERO(lockname) { .sequence = 0, SEQCOUNT_DEP_MAP_INIT(lockname)}
+/**
+ * SEQCNT_ZERO() - static initializer for seqcount_t
+ * @name: Name of the &typedef seqcount_t instance
+ */
+#define SEQCNT_ZERO(name) { .sequence = 0, SEQCOUNT_DEP_MAP_INIT(name) }
 
 
 /**
- * __read_seqcount_begin - begin a seq-read critical section (without barrier)
- * @s: pointer to seqcount_t
- * Returns: count to be passed to read_seqcount_retry
+ * __read_seqcount_begin() - begin a seq-read critical section (without barrier)
+ * @s: Pointer to &typedef seqcount_t
+ * Returns: count to be passed to read_seqcount_retry()
  *
  * __read_seqcount_begin is like read_seqcount_begin, but has no smp_rmb()
  * barrier. Callers should ensure that smp_rmb() or equivalent ordering is
@@ -110,9 +118,9 @@ static inline unsigned __read_seqcount_begin(const seqcount_t *s)
 }
 
 /**
- * raw_read_seqcount - Read the raw seqcount
- * @s: pointer to seqcount_t
- * Returns: count to be passed to read_seqcount_retry
+ * raw_read_seqcount() - Read the raw seqcount
+ * @s: Pointer to &typedef seqcount_t
+ * Returns: count to be passed to read_seqcount_retry()
  *
  * raw_read_seqcount opens a read critical section of the given
  * seqcount without any lockdep checking and without checking or
@@ -126,13 +134,13 @@ static inline unsigned raw_read_seqcount(const seqcount_t *s)
 }
 
 /**
- * raw_read_seqcount_begin - start seq-read critical section w/o lockdep
- * @s: pointer to seqcount_t
- * Returns: count to be passed to read_seqcount_retry
+ * raw_read_seqcount_begin() - start seq-read critical section w/o lockdep
+ * @s: Pointer to &typedef seqcount_t
+ * Returns: count to be passed to read_seqcount_retry()
  *
  * raw_read_seqcount_begin opens a read critical section of the given
  * seqcount, but without any lockdep checking. Validity of the critical
- * section is tested by checking read_seqcount_retry function.
+ * section is tested by calling read_seqcount_retry().
  */
 static inline unsigned raw_read_seqcount_begin(const seqcount_t *s)
 {
@@ -142,13 +150,13 @@ static inline unsigned raw_read_seqcount_begin(const seqcount_t *s)
 }
 
 /**
- * read_seqcount_begin - begin a seq-read critical section
- * @s: pointer to seqcount_t
- * Returns: count to be passed to read_seqcount_retry
+ * read_seqcount_begin() - begin a seq-read critical section
+ * @s: Pointer to &typedef seqcount_t
+ * Returns: count to be passed to read_seqcount_retry()
  *
- * read_seqcount_begin opens a read critical section of the given seqcount.
- * Validity of the critical section is tested by checking read_seqcount_retry
- * function.
+ * read_seqcount_begin opens a read critical section of the given
+ * seqcount_t.  Validity of the critical section is tested by calling
+ * read_seqcount_retry().
  */
 static inline unsigned read_seqcount_begin(const seqcount_t *s)
 {
@@ -157,8 +165,8 @@ static inline unsigned read_seqcount_begin(const seqcount_t *s)
 }
 
 /**
- * raw_seqcount_begin - begin a seq-read critical section
- * @s: pointer to seqcount_t
+ * raw_seqcount_begin() - begin a seq-read critical section
+ * @s: Pointer to &typedef seqcount_t
  * Returns: count to be passed to read_seqcount_retry
  *
  * raw_seqcount_begin opens a read critical section of the given seqcount.
@@ -178,8 +186,8 @@ static inline unsigned raw_seqcount_begin(const seqcount_t *s)
 }
 
 /**
- * __read_seqcount_retry - end a seq-read critical section (without barrier)
- * @s: pointer to seqcount_t
+ * __read_seqcount_retry() - end a seq-read critical section (without barrier)
+ * @s: Pointer to &typedef seqcount_t
  * @start: count, from read_seqcount_begin
  * Returns: 1 if retry is required, else 0
  *
@@ -197,8 +205,8 @@ static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
 }
 
 /**
- * read_seqcount_retry - end a seq-read critical section
- * @s: pointer to seqcount_t
+ * read_seqcount_retry() - end a seq-read critical section
+ * @s: Pointer to &typedef seqcount_t
  * @start: count, from read_seqcount_begin
  * Returns: 1 if retry is required, else 0
  *
@@ -225,8 +233,8 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
 }
 
 /**
- * raw_write_seqcount_barrier - do a seq write barrier
- * @s: pointer to seqcount_t
+ * raw_write_seqcount_barrier() - do a seq write barrier
+ * @s: Pointer to &typedef seqcount_t
  *
  * This can be used to provide an ordering guarantee instead of the
  * usual consistency guarantee. It is one wmb cheaper, because we can
@@ -267,6 +275,21 @@ static inline void raw_write_seqcount_barrier(seqcount_t *s)
 	s->sequence++;
 }
 
+/**
+ * raw_read_seqcount_latch() - pick even or odd seqcount latch data copy
+ * @s: Pointer to &typedef seqcount_t
+ *
+ * Use seqcount latching to switch between two storage places with
+ * sequence protection to allow interruptible, preemptible, writer
+ * sections.
+ *
+ * Check raw_write_seqcount_latch() for more details and a full reader
+ * and writer usage example.
+ *
+ * Return: sequence counter. Use the lowest bit as index for picking
+ * which data copy to read. Full counter must then be passed to
+ * read_seqcount_retry().
+ */
 static inline int raw_read_seqcount_latch(seqcount_t *s)
 {
 	/* Pairs with the first smp_wmb() in raw_write_seqcount_latch() */
@@ -275,8 +298,8 @@ static inline int raw_read_seqcount_latch(seqcount_t *s)
 }
 
 /**
- * raw_write_seqcount_latch - redirect readers to even/odd copy
- * @s: pointer to seqcount_t
+ * raw_write_seqcount_latch() - redirect readers to even/odd copy
+ * @s: Pointer to &typedef seqcount_t
  *
  * The latch technique is a multiversion concurrency control method that allows
  * queries during non-atomic modifications. If you can guarantee queries never
@@ -336,8 +359,8 @@ static inline int raw_read_seqcount_latch(seqcount_t *s)
  *			idx = seq & 0x01;
  *			entry = data_query(latch->data[idx], ...);
  *
- *			smp_rmb();
- *		} while (seq != latch->seq);
+ *			// read_seqcount_retry() includes necessary smp_rmb()
+ *		} while (read_seqcount_retry(&latch->seq, seq));
  *
  *		return entry;
  *	}
@@ -391,11 +414,26 @@ static inline void __write_seqcount_begin(seqcount_t *s)
 	__write_seqcount_begin_nested(s, 0);
 }
 
+/**
+ * write_seqcount_begin() - start a seqcount write-side critical section
+ * @s: Pointer to &typedef seqcount_t
+ *
+ * write_seqcount_begin opens a write-side critical section of the given
+ * seqcount. Seqcount write-side critical sections must be externally
+ * serialized and non-preemptible.
+ */
 static inline void write_seqcount_begin(seqcount_t *s)
 {
 	write_seqcount_begin_nested(s, 0);
 }
 
+/**
+ * write_seqcount_end() - end a seqcount write-side critical section
+ * @s: Pointer to &typedef seqcount_t
+ *
+ * write_seqcount_end closes a write-side critical section of the given
+ * seqcount.
+ */
 static inline void write_seqcount_end(seqcount_t *s)
 {
 	seqcount_release(&s->dep_map, _RET_IP_);
@@ -403,8 +441,8 @@ static inline void write_seqcount_end(seqcount_t *s)
 }
 
 /**
- * write_seqcount_invalidate - invalidate in-progress read-side seq operations
- * @s: pointer to seqcount_t
+ * write_seqcount_invalidate() - invalidate in-progress read-side seq operations
+ * @s: Pointer to &typedef seqcount_t
  *
  * After write_seqcount_invalidate, no read-side seq operations will complete
  * successfully and see data older than this.
@@ -435,32 +473,68 @@ typedef struct {
 		.lock =	__SPIN_LOCK_UNLOCKED(lockname)	\
 	}
 
-#define seqlock_init(x)					\
+/**
+ * seqlock_init() - dynamic initializer for seqlock_t
+ * @sl: Pointer to the seqlock_t instance
+ */
+#define seqlock_init(sl)				\
 	do {						\
-		seqcount_init(&(x)->seqcount);		\
-		spin_lock_init(&(x)->lock);		\
+		seqcount_init(&(sl)->seqcount);		\
+		spin_lock_init(&(sl)->lock);		\
 	} while (0)
 
-#define DEFINE_SEQLOCK(x) \
-		seqlock_t x = __SEQLOCK_UNLOCKED(x)
+/**
+ * DEFINE_SEQLOCK() - Define a statically-allocated seqlock_t
+ * @sl: Name of the &typedef seqlock_t instance
+ */
+#define DEFINE_SEQLOCK(sl) \
+		seqlock_t sl = __SEQLOCK_UNLOCKED(sl)
 
-/*
- * Read side functions for starting and finalizing a read side section.
+/**
+ * read_seqbegin() - start a seqlock_t read-side critical section
+ * @sl: Pointer to &typedef seqlock_t
+ *
+ * read_seqbegin opens a read side critical section of the given
+ * seqlock_t. Validity of the critical section is tested by checking
+ * read_seqretry().
+ *
+ * Return: count to be passed to read_seqretry()
  */
 static inline unsigned read_seqbegin(const seqlock_t *sl)
 {
 	return read_seqcount_begin(&sl->seqcount);
 }
 
+/**
+ * read_seqretry() - end a seqlock_t read side critical section
+ * @sl: Pointer to &typedef seqlock_t
+ * @start: count, from read_seqbegin()
+ *
+ * read_seqretry closes a read side critical section of the given
+ * seqlock_t. If the read side critical section was invalid, it must be
+ * ignored and retried.
+ *
+ * Return: 1 if a retry is required, 0 otherwise
+ */
 static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
 {
 	return read_seqcount_retry(&sl->seqcount, start);
 }
 
-/*
- * Lock out other writers and update the count.
- * Acts like a normal spin_lock/unlock.
- * Don't need preempt_disable() because that is in the spin_lock already.
+/**
+ * write_seqlock() - start a seqlock_t write side critical section
+ * @sl: Pointer to &typedef seqlock_t
+ *
+ * write_seqlock opens a write side critical section of the given
+ * seqlock_t.  It also acquires the spinlock embedded inside the
+ * sequential lock. All seqlock_t write side critical sections are thus
+ * automatically serialized and non-preemptible.
+ *
+ * If the seqlock_t read side section can be invoked from a hardirq or
+ * softirq context, the ``_irqsave`` or ``_bh`` variant of this
+ * function must be used instead, respectively.
+ *
+ * The opened write side section must be closed with write_sequnlock().
  */
 static inline void write_seqlock(seqlock_t *sl)
 {
@@ -468,30 +542,74 @@ static inline void write_seqlock(seqlock_t *sl)
 	__write_seqcount_begin(&sl->seqcount);
 }
 
+/**
+ * write_sequnlock() - end a seqlock_t write side critical section
+ * @sl: Pointer to &typedef seqlock_t
+ *
+ * write_sequnlock closes the (serialized and non-preemptible) write
+ * side critical section of the given seqlock_t.
+ */
 static inline void write_sequnlock(seqlock_t *sl)
 {
 	write_seqcount_end(&sl->seqcount);
 	spin_unlock(&sl->lock);
 }
 
+/**
+ * write_seqlock_bh() - start a softirqs-disabled seqlock_t write section
+ * @sl: Pointer to &typedef seqlock_t
+ *
+ * write_seqlock_bh is a write_seqlock() variant that disables softirqs
+ * before opening the serialized seqlock_t write side critical section.
+ * Use it only if the read side section, or other writers, can be
+ * invoked from a softirq context.
+ *
+ * The opened write section must be closed with write_sequnlock_bh().
+ */
 static inline void write_seqlock_bh(seqlock_t *sl)
 {
 	spin_lock_bh(&sl->lock);
 	__write_seqcount_begin(&sl->seqcount);
 }
 
+/**
+ * write_sequnlock_bh() - end a softirqs-disabled seqlock_t write section
+ * @sl: Pointer to &typedef seqlock_t
+ *
+ * write_sequnlock_bh closes the serialized, softirqs-disabled,
+ * seqlock_t write side critical section. It enables softirqs if they
+ * were already enabled before calling the paired write_seqlock_bh().
+ */
 static inline void write_sequnlock_bh(seqlock_t *sl)
 {
 	write_seqcount_end(&sl->seqcount);
 	spin_unlock_bh(&sl->lock);
 }
 
+/**
+ * write_seqlock_irq() - start a non-interruptible seqlock_t write side section
+ * @sl: Pointer to &typedef seqlock_t
+ *
+ * write_seqlock_irq is a write_seqlock() variant where hardirqs are
+ * disabled before opening the serialized and non-preemptible seqlock_t
+ * write side critical section.
+ */
 static inline void write_seqlock_irq(seqlock_t *sl)
 {
 	spin_lock_irq(&sl->lock);
 	__write_seqcount_begin(&sl->seqcount);
 }
 
+/**
+ * write_sequnlock_irq() - end a non-interruptible seqlock_t write side section
+ * @sl: Pointer to &typedef seqlock_t
+ *
+ * write_sequnlock_irq closes the serialized and non-interruptible write
+ * side critical section of the given seqlock_t. It enables local
+ * interrupts afterwards.
+ *
+ * The write critical section must've been opened with write_seqlock_irq().
+ */
 static inline void write_sequnlock_irq(seqlock_t *sl)
 {
 	write_seqcount_end(&sl->seqcount);
@@ -507,9 +625,36 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
 	return flags;
 }
 
+/**
+ * write_seqlock_irqsave() - start a non-interruptible seqlock_t write section
+ * @lock:  Pointer to &typedef seqlock_t
+ * @flags: Stack-allocated storage for saving caller's local interrupt
+ *         state, to be passed to write_sequnlock_irqrestore().
+ *
+ * write_seqlock_irqsave is a write_seqlock() variant where the caller's
+ * local interrupts state is saved, then local interrupts are disabled,
+ * before opening the serialized and non-preemptible seqlock_t write
+ * side critical section.
+ *
+ * Use this only if the read side section can be invoked from a hardirq
+ * context.
+ *
+ * The opened write section must be closed with write_sequnlock_irqrestore().
+ */
 #define write_seqlock_irqsave(lock, flags)				\
 	do { flags = __write_seqlock_irqsave(lock); } while (0)
 
+/**
+ * write_sequnlock_irqrestore() - end non-interruptible seqlock_t write section
+ * @sl:    Pointer to &typedef seqlock_t
+ * @flags: Caller's saved interrupt state, from write_seqlock_irqsave()
+ *
+ * write_sequnlock_irqrestore closes the serialized and non-interruptible
+ * write side critical section of the given seqlock_t. It then restores
+ * the caller's saved local interrupts state.
+ *
+ * The write section must've been opened with write_seqlock_irqsave().
+ */
 static inline void
 write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
 {
@@ -517,30 +662,61 @@ write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
 	spin_unlock_irqrestore(&sl->lock, flags);
 }
 
-/*
- * A locking reader exclusively locks out other writers and locking readers,
- * but doesn't update the sequence number. Acts like a normal spin_lock/unlock.
- * Don't need preempt_disable() because that is in the spin_lock already.
+/**
+ * read_seqlock_excl() - begin a seqlock_t locking reader critical section
+ * @sl: Pointer to &typedef seqlock_t
+ *
+ * read_seqlock_excl opens a locking reader critical section for the
+ * given seqlock_t. A locking reader exclusively locks out other writers
+ * and other locking readers, but doesn't update the sequence number.
+ *
+ * Locking readers act like a normal spin_lock()/spin_unlock().
+ *
+ * The opened read side section must be closed with read_sequnlock_excl().
  */
 static inline void read_seqlock_excl(seqlock_t *sl)
 {
 	spin_lock(&sl->lock);
 }
 
+/**
+ * read_sequnlock_excl() - end a seqlock_t locking reader critical section
+ * @sl: Pointer to &typedef seqlock_t
+ *
+ * read_sequnlock_excl closes a locking reader critical section.  The
+ * read section must've been opened with read_seqlock_excl().
+ */
 static inline void read_sequnlock_excl(seqlock_t *sl)
 {
 	spin_unlock(&sl->lock);
 }
 
 /**
- * read_seqbegin_or_lock - begin a sequence number check or locking block
- * @lock: sequence lock
- * @seq : sequence number to be checked
+ * read_seqbegin_or_lock() - begin a seqlock_t lockless or locking reader
+ * @lock: Pointer to &typedef seqlock_t
+ * @seq: Marker and return parameter. If the passed value is even, the
+ * reader will become a *lockless* seqlock_t sequence counter reader as
+ * in read_seqbegin(). If the passed value is odd, the reader will
+ * become a fully locking reader, as in read_seqlock_excl().  In the
+ * first call to read_seqbegin_or_lock(), the caller **must** initialize
+ * and pass an even value in @seq so a lockless read is optimistically
+ * tried first.
  *
- * First try it once optimistically without taking the lock. If that fails,
- * take the lock. The sequence number is also used as a marker for deciding
- * whether to be a reader (even) or writer (odd).
- * N.B. seq must be initialized to an even number to begin with.
+ * read_seqbegin_or_lock optimistically tries a lockless seqlock_t
+ * sequence counter read first. If an odd counter is found, the lockless
+ * read trial has failed, and the reader transforms to a full seqlock_t
+ * locking reader as in read_seqlock_excl().  This is typically used to
+ * avoid lockless seqlock_t reader starvation (too many retry loops) in
+ * the case of a sharp spike in write activity.
+ *
+ * The opened read section must be closed with done_seqretry().  Check
+ * Documentation/locking/seqlock.rst for template example code.
+ *
+ * Return: The read critical section status is returned through @seq,
+ * which is overloaded as a return parameter. This value must be passed
+ * to need_seqretry() to check the validity of the tried seqlock_t read
+ * section. If the read section must be retried, the returned value must
+ * also be passed to the next iteration of read_seqbegin_or_lock().
  */
 static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
 {
@@ -550,32 +726,98 @@ static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
 		read_seqlock_excl(lock);
 }
 
+/**
+ * need_seqretry() - validate seqlock_t "locking or lockless" reader section
+ * @lock: Pointer to &typedef seqlock_t
+ * @seq: count, from read_seqbegin_or_lock()
+ *
+ * need_seqretry checks if the seqlock_t read-side critical section
+ * started with read_seqbegin_or_lock() is valid. If it was not, the
+ * caller must retry the read-side section.
+ *
+ * Return: 1 if a retry is required, 0 otherwise
+ */
 static inline int need_seqretry(seqlock_t *lock, int seq)
 {
 	return !(seq & 1) && read_seqretry(lock, seq);
 }
 
+/**
+ * done_seqretry() - end seqlock_t "locking or lockless" reader section
+ * @lock: Pointer to &typedef seqlock_t
+ * @seq: count, from read_seqbegin_or_lock()
+ *
+ * done_seqretry finishes the seqlock_t read side critical section
+ * started by read_seqbegin_or_lock(). Before finishing the critical
+ * section, the validity of the read side section must've been already
+ * verified with need_seqretry().
+ */
 static inline void done_seqretry(seqlock_t *lock, int seq)
 {
 	if (seq & 1)
 		read_sequnlock_excl(lock);
 }
 
+/**
+ * read_seqlock_excl_bh() - start a locking reader seqlock_t section
+ *			    with softirqs disabled
+ * @sl: Pointer to &typedef seqlock_t
+ *
+ * read_seqlock_excl_bh is a variant of read_seqlock_excl() that saves
+ * softirqs state, then disables softirqs, before starting the locking
+ * reader read side section. Only use this variant if the seqlock_t
+ * write side section, *or other read sections*, can be invoked from a
+ * softirq context.
+ *
+ * The opened section must be closed with read_sequnlock_excl_bh().
+ */
 static inline void read_seqlock_excl_bh(seqlock_t *sl)
 {
 	spin_lock_bh(&sl->lock);
 }
 
+/**
+ * read_sequnlock_excl_bh() - stop a seqlock_t softirq-disabled locking
+ *			      reader section
+ * @sl: Pointer to &typedef seqlock_t
+ *
+ * read_sequnlock_excl_bh ends the softirq-disabled seqlock_t locking
+ * reader read side section. It restores the softirqs state saved by
+ * read_seqlock_excl_bh() afterwards.
+ */
 static inline void read_sequnlock_excl_bh(seqlock_t *sl)
 {
 	spin_unlock_bh(&sl->lock);
 }
 
+/**
+ * read_seqlock_excl_irq() - start a non-interruptible seqlock_t locking
+ *			     reader section
+ * @sl: Pointer to &typedef seqlock_t
+ *
+ * read_seqlock_excl_irq is a variant of read_seqlock_excl() that
+ * disables interrupts before starting the locking reader read side
+ * section. Only use this variant if the seqlock_t write side section,
+ * *or other read sections*, can be invoked from a hardirq context.
+ *
+ * The opened read section must be closed with read_sequnlock_excl_irq().
+ */
 static inline void read_seqlock_excl_irq(seqlock_t *sl)
 {
 	spin_lock_irq(&sl->lock);
 }
 
+/**
+ * read_sequnlock_excl_irq() - end an interrupts-disabled seqlock_t
+ *                             locking reader section
+ * @sl: Pointer to &typedef seqlock_t
+ *
+ * read_sequnlock_excl_irq ends the interrupts-disabled seqlock_t
+ * locking reader read side critical section. It enables local
+ * interrupts afterwards.
+ *
+ * The read section must've been started with read_seqlock_excl_irq().
+ */
 static inline void read_sequnlock_excl_irq(seqlock_t *sl)
 {
 	spin_unlock_irq(&sl->lock);
@@ -589,15 +831,68 @@ static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl)
 	return flags;
 }
 
+/**
+ * read_seqlock_excl_irqsave() - start a non-interruptible seqlock_t
+ *				 locking reader section
+ * @lock: Pointer to &typedef seqlock_t
+ * @flags: Stack-allocated storage for saving caller's local interrupt
+ *         state, to be passed to read_sequnlock_excl_irqrestore().
+ *
+ * read_seqlock_excl_irqsave is a read_seqlock_excl() variant which
+ * saves the caller's local interrupts state, then disables local
+ * interrupts, before opening the seqlock_t locking reader critical
+ * section.
+ *
+ * Use this only if the seqlock_t write side critical section, or other
+ * read side sections, can be invoked from a hardirq context.
+ *
+ * The opened locking reader critical section must be closed with
+ * read_sequnlock_excl_irqrestore().
+ */
 #define read_seqlock_excl_irqsave(lock, flags)				\
 	do { flags = __read_seqlock_excl_irqsave(lock); } while (0)
 
+/**
+ * read_sequnlock_excl_irqrestore() - end non-interruptible seqlock_t
+ *				      locking reader section
+ * @sl: Pointer to &typedef seqlock_t
+ * @flags: Caller's saved interrupt state, from
+ *	   read_seqlock_excl_irqsave()
+ *
+ * read_sequnlock_excl_irqrestore closes the non-interruptible seqlock_t
+ * locking reader section. It then restores the caller's local
+ * interrupts saved state.
+ *
+ * The read section must've been opened with read_seqlock_excl_irqsave().
+ */
 static inline void
 read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
 {
 	spin_unlock_irqrestore(&sl->lock, flags);
 }
 
+/**
+ * read_seqbegin_or_lock_irqsave() - begin a seqlock_t lockless reader, or
+ *                                   a non-interruptible locking reader
+ * @lock: Pointer to &typedef seqlock_t
+ * @seq: Marker and return parameter. Check read_seqbegin_or_lock().
+ *
+ * read_seqbegin_or_lock_irqsave is a variant of read_seqbegin_or_lock()
+ * which saves the local interrupts state, then disables local
+ * interrupts, before opening a seqlock_t *locking reader* critical
+ * section.
+ *
+ * The opened section must be closed with done_seqretry_irqrestore().
+ *
+ * Return:
+ *
+ *   1. The saved local interrupts state in case of a locking reader, to
+ *      be passed to done_seqretry_irqrestore().
+ *
+ *   2. The read critical section status, returned through @seq which is
+ *      overloaded as a return parameter. Check read_seqbegin_or_lock()
+ *      for more info.
+ */
 static inline unsigned long
 read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
 {
@@ -611,6 +906,19 @@ read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
 	return flags;
 }
 
+/**
+ * done_seqretry_irqrestore() - end a seqlock_t lockless reader, or a
+ *				non-interruptible locking reader section
+ * @lock:  Pointer to &typedef seqlock_t
+ * @seq:   Count, from read_seqbegin_or_lock_irqsave()
+ * @flags: Caller's saved local interrupt state in case of a locking
+ *	   reader, also from read_seqbegin_or_lock_irqsave()
+ *
+ * done_seqretry_irqrestore is a variant of done_seqretry() which
+ * restores the caller's saved local interrupts state in case of a
+ * locking reader. Check done_seqretry() for more information. The read
+ * section must've been opened with read_seqbegin_or_lock_irqsave().
+ */
 static inline void
 done_seqretry_irqrestore(seqlock_t *lock, int seq, unsigned long flags)
 {
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 12/25] seqlock: Extend seqcount API with associated locks
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (10 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 11/25] seqlock: Add missing kernel-doc annotations Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 13/25] dma-buf: Use sequence counter with associated wound/wait mutex Ahmed S. Darwish
                   ` (12 subsequent siblings)
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Jonathan Corbet,
	linux-doc

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. If the serialization primitive is
not disabling preemption implicitly, preemption has to be explicitly
disabled before entering the write side critical section.

There is no built-in debugging mechanism to verify that the lock used
for writer serialization is held and preemption is disabled. Some usage
sites like dma-buf have explicit lockdep checks for the writer-side
lock, but this covers only a small portion of the sequence counter usage
in the kernel.

Add new sequence counter types that allow associating a lock with the
sequence counter at initialization time. The seqcount API functions are
extended to provide appropriate lockdep assertions depending on the
seqcount/lock type.

For sequence counters with associated locks that do not implicitly
disable preemption, preemption protection is enforced in the sequence
counter write side functions. This removes the need to explicitly add
preempt_disable/enable() around the write side critical sections: the
write_begin/end() functions for these new sequence counter types
automatically do this.
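
For example, with a sequence counter associated to a mutex, a rough
write side sketch looks like the following (the foo_* names are
illustrative):

	mutex_lock(&foo_mutex);

	/* implicitly disables preemption */
	write_seqcount_begin(&foo_seqcount);

	/* ... update the protected data ... */

	/* re-enables preemption */
	write_seqcount_end(&foo_seqcount);

	mutex_unlock(&foo_mutex);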

Introduce the following seqcount types with associated locks:

     seqcount_spinlock_t
     seqcount_raw_spinlock_t
     seqcount_rwlock_t
     seqcount_mutex_t
     seqcount_ww_mutex_t

Extend the seqcount read and write functions to branch out to the
specific seqcount_LOCKTYPE_t implementation at compile-time. This avoids
a kernel API explosion with each new seqcount_LOCKTYPE_t added. Add such
compile-time type detection logic into a new, internal, seqlock header.

Document the proper seqcount_LOCKTYPE_t usage, and rationale, at
Documentation/locking/seqlock.rst.

If lockdep is disabled, this lock association is compiled out and has
neither storage size nor runtime overhead.
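
A minimal end-to-end usage sketch, using a spinlock as the associated
lock (the foo_* names are illustrative):

	static DEFINE_SPINLOCK(foo_lock);
	static seqcount_spinlock_t foo_seqcount =
		SEQCNT_SPINLOCK_ZERO(foo_seqcount, &foo_lock);

	/* Writer: lockdep asserts that foo_lock is held */
	spin_lock(&foo_lock);
	write_seqcount_begin(&foo_seqcount);
	/* ... update the protected data ... */
	write_seqcount_end(&foo_seqcount);
	spin_unlock(&foo_lock);

	/* Reader: identical to a plain seqcount_t read section */
	unsigned int seq;

	do {
		seq = read_seqcount_begin(&foo_seqcount);
		/* ... read the protected data ... */
	} while (read_seqcount_retry(&foo_seqcount, seq));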

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 Documentation/locking/seqlock.rst      |  64 ++++-
 MAINTAINERS                            |   2 +-
 include/linux/seqlock.h                | 355 +++++++++++++++++++++----
 include/linux/seqlock_types_internal.h | 187 +++++++++++++
 4 files changed, 549 insertions(+), 59 deletions(-)
 create mode 100644 include/linux/seqlock_types_internal.h

diff --git a/Documentation/locking/seqlock.rst b/Documentation/locking/seqlock.rst
index 2242ae00e7bf..e6f8e4be7db8 100644
--- a/Documentation/locking/seqlock.rst
+++ b/Documentation/locking/seqlock.rst
@@ -45,9 +45,11 @@ write side section. If the sequence counter read section can be invoked
 from hardirq or softirq contexts, interrupts or bottom halves must be
 respectively disabled before entering the write side section.
 
-If it's desired to automatically handle the sequence counter
-requirements of writer serialization and non-preemptibility, use a
-:ref:`sequential lock <seqlock_t>` instead.
+If the write serialization mechanism is one of the common kernel locking
+primitives, use :ref:`sequence counters with associated locks
+<seqcount_locktype_t>` instead. If it's desired to automatically handle
+the sequence counter writer serialization and non-preemptibility
+requirements, use a :ref:`sequential lock <seqlock_t>`.
 
 Initialization:
 
@@ -67,6 +69,7 @@ Initialization:
 
 Write path:
 
+.. _seqcount_write_ops:
 .. code-block:: c
 
 	/* Serialized context with disabled preemption */
@@ -79,6 +82,7 @@ Write path:
 
 Read path:
 
+.. _seqcount_read_ops:
 .. code-block:: c
 
 	do {
@@ -88,6 +92,60 @@ Read path:
 
 	} while (read_seqcount_retry(&foo_seqcount, seq));
 
+.. _seqcount_locktype_t:
+
+Sequence counters with associated locks (:c:type:`seqcount_LOCKTYPE_t`)
+-----------------------------------------------------------------------
+
+As :ref:`discussed earlier <seqcount_t>`, seqcount write side critical
+sections must be serialized and non-preemptible. This variant of
+sequence counters associates the lock used for writer serialization at
+seqcount initialization time. This enables lockdep to validate that
+the write side critical section is properly serialized.
+
+This lock association is a NOOP if lockdep is disabled and has neither
+storage nor runtime overhead. If lockdep is enabled, the lock pointer
+is stored in the seqcount_LOCKTYPE_t struct and lockdep's "lock is
+held" assertions are injected at the beginning of the write side
+critical section to validate that it is properly protected.
+
+For lock types which do not implicitly disable preemption, preemption
+protection is enforced in the write side functions.
+
+The following seqcounts with associated locks are defined:
+
+  - :c:type:`seqcount_spinlock_t`
+  - :c:type:`seqcount_raw_spinlock_t`
+  - :c:type:`seqcount_rwlock_t`
+  - :c:type:`seqcount_mutex_t`
+  - :c:type:`seqcount_ww_mutex_t`
+
+The plain seqcount read and write APIs branch out to the specific
+seqcount_LOCKTYPE_t implementation at compile-time. This avoids a
+kernel API explosion with each new seqcount LOCKTYPE.
+
+Initialization (replace "LOCKTYPE" with one of the supported locks):
+
+.. code-block:: c
+
+	/* dynamic */
+	seqcount_LOCKTYPE_t foo_seqcount;
+	seqcount_LOCKTYPE_init(&foo_seqcount, &lock);
+
+	/* static */
+	static seqcount_LOCKTYPE_t foo_seqcount =
+		SEQCNT_LOCKTYPE_ZERO(foo_seqcount, &lock);
+
+	/* C99 struct init */
+	struct {
+		seqcount_LOCKTYPE_t seq;
+	} foo = {
+		.seq = SEQCNT_LOCKTYPE_ZERO(foo.seq, &lock),
+	};
+
+Write path: same as in :ref:`plain seqcount_t <seqcount_write_ops>`,
+while running from a context with the associated LOCKTYPE lock acquired.
+
+Read path: same as in :ref:`plain seqcount_t <seqcount_read_ops>`.
+
 .. _seqlock_t:
 
 Sequential locks (:c:type:`seqlock_t`)
diff --git a/MAINTAINERS b/MAINTAINERS
index 091ec22c1a23..f3ae546009ee 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9925,7 +9925,7 @@ F:	include/linux/lockdep.h
 F:	include/linux/mutex*.h
 F:	include/linux/rwlock*.h
 F:	include/linux/rwsem*.h
-F:	include/linux/seqlock.h
+F:	include/linux/seqlock*.h
 F:	include/linux/spinlock*.h
 F:	kernel/locking/
 F:	lib/locking*.[ch]
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index dd55555ff607..eca464ecf012 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -90,11 +90,10 @@ static inline void seqcount_lockdep_reader_access(const seqcount_t *s)
  */
 #define SEQCNT_ZERO(name) { .sequence = 0, SEQCOUNT_DEP_MAP_INIT(name) }
 
-
 /**
  * __read_seqcount_begin() - begin a seq-read critical section (without barrier)
- * @s: Pointer to &typedef seqcount_t
- * Returns: count to be passed to read_seqcount_retry()
+ * @s: Pointer to &typedef seqcount_t or any of the seqcount_locktype_t variants
+ * Returns: count to be passed to read_seqcount_retry
  *
  * __read_seqcount_begin is like read_seqcount_begin, but has no smp_rmb()
  * barrier. Callers should ensure that smp_rmb() or equivalent ordering is
@@ -104,7 +103,9 @@ static inline void seqcount_lockdep_reader_access(const seqcount_t *s)
  * Use carefully, only in critical code, and comment how the barrier is
  * provided.
  */
-static inline unsigned __read_seqcount_begin(const seqcount_t *s)
+#define __read_seqcount_begin(s)	do___read_seqcount_begin(s)
+
+static inline unsigned __read_seqcount_t_begin(const seqcount_t *s)
 {
 	unsigned ret;
 
@@ -119,14 +120,16 @@ static inline unsigned __read_seqcount_begin(const seqcount_t *s)
 
 /**
  * raw_read_seqcount() - Read the raw seqcount
- * @s: Pointer to &typedef seqcount_t
- * Returns: count to be passed to read_seqcount_retry()
+ * @s: Pointer to &typedef seqcount_t or any of the seqcount_locktype_t variants
+ * Returns: count to be passed to read_seqcount_retry
  *
  * raw_read_seqcount opens a read critical section of the given
  * seqcount without any lockdep checking and without checking or
  * masking the LSB. Calling code is responsible for handling that.
  */
-static inline unsigned raw_read_seqcount(const seqcount_t *s)
+#define raw_read_seqcount(s)	do_raw_read_seqcount(s)
+
+static inline unsigned raw_read_seqcount_t(const seqcount_t *s)
 {
 	unsigned ret = READ_ONCE(s->sequence);
 	smp_rmb();
@@ -135,38 +138,42 @@ static inline unsigned raw_read_seqcount(const seqcount_t *s)
 
 /**
  * raw_read_seqcount_begin() - start seq-read critical section w/o lockdep
- * @s: Pointer to &typedef seqcount_t
- * Returns: count to be passed to read_seqcount_retry()
+ * @s: Pointer to &typedef seqcount_t or any of the seqcount_locktype_t variants
+ * Returns: count to be passed to read_seqcount_retry
  *
  * raw_read_seqcount_begin opens a read critical section of the given
  * seqcount, but without any lockdep checking. Validity of the critical
  * section is tested by calling read_seqcount_retry().
  */
-static inline unsigned raw_read_seqcount_begin(const seqcount_t *s)
+#define raw_read_seqcount_begin(s)	do_raw_read_seqcount_begin(s)
+
+static inline unsigned raw_read_seqcount_t_begin(const seqcount_t *s)
 {
-	unsigned ret = __read_seqcount_begin(s);
+	unsigned ret = __read_seqcount_t_begin(s);
 	smp_rmb();
 	return ret;
 }
 
 /**
  * read_seqcount_begin() - begin a seq-read critical section
- * @s: Pointer to &typedef seqcount_t
- * Returns: count to be passed to read_seqcount_retry()
+ * @s: pointer to &typedef seqcount_t or any of the seqcount_locktype_t variants
+ * Returns: count to be passed to read_seqcount_retry
  *
  * read_seqcount_begin opens a read critical section of the given
  * seqcount_t.  Validity of the critical section is tested by calling
  * read_seqcount_retry().
  */
-static inline unsigned read_seqcount_begin(const seqcount_t *s)
+#define read_seqcount_begin(s)	do_read_seqcount_begin(s)
+
+static inline unsigned read_seqcount_t_begin(const seqcount_t *s)
 {
 	seqcount_lockdep_reader_access(s);
-	return raw_read_seqcount_begin(s);
+	return raw_read_seqcount_t_begin(s);
 }
 
 /**
  * raw_seqcount_begin() - begin a seq-read critical section
- * @s: Pointer to &typedef seqcount_t
+ * @s: pointer to &typedef seqcount_t or any of the seqcount_locktype_t variants
  * Returns: count to be passed to read_seqcount_retry
  *
  * raw_seqcount_begin opens a read critical section of the given seqcount.
@@ -178,7 +185,9 @@ static inline unsigned read_seqcount_begin(const seqcount_t *s)
  * read_seqcount_retry() instead of stabilizing at the beginning of the
  * critical section.
  */
-static inline unsigned raw_seqcount_begin(const seqcount_t *s)
+#define raw_seqcount_begin(s)	do_raw_seqcount_begin(s)
+
+static inline unsigned raw_seqcount_t_begin(const seqcount_t *s)
 {
 	unsigned ret = READ_ONCE(s->sequence);
 	smp_rmb();
@@ -187,7 +196,7 @@ static inline unsigned raw_seqcount_begin(const seqcount_t *s)
 
 /**
  * __read_seqcount_retry() - end a seq-read critical section (without barrier)
- * @s: Pointer to &typedef seqcount_t
+ * @s: pointer to &typedef seqcount_t or any of the seqcount_locktype_t variants
  * @start: count, from read_seqcount_begin
  * Returns: 1 if retry is required, else 0
  *
@@ -199,14 +208,16 @@ static inline unsigned raw_seqcount_begin(const seqcount_t *s)
  * Use carefully, only in critical code, and comment how the barrier is
  * provided.
  */
-static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
+#define __read_seqcount_retry(s, start)	do___read_seqcount_retry(s, start)
+
+static inline int __read_seqcount_t_retry(const seqcount_t *s, unsigned start)
 {
 	return unlikely(s->sequence != start);
 }
 
 /**
  * read_seqcount_retry() - end a seq-read critical section
- * @s: Pointer to &typedef seqcount_t
+ * @s: pointer to &typedef seqcount_t or any of the seqcount_locktype_t variants
  * @start: count, from read_seqcount_begin
  * Returns: 1 if retry is required, else 0
  *
@@ -214,19 +225,25 @@ static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)
  * If the critical section was invalid, it must be ignored (and typically
  * retried).
  */
-static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
+#define read_seqcount_retry(s, start)	do_read_seqcount_retry(s, start)
+
+static inline int read_seqcount_t_retry(const seqcount_t *s, unsigned start)
 {
 	smp_rmb();
-	return __read_seqcount_retry(s, start);
+	return __read_seqcount_t_retry(s, start);
 }
 
-static inline void raw_write_seqcount_begin(seqcount_t *s)
+#define raw_write_seqcount_begin(s)	do_raw_write_seqcount_begin(s)
+
+static inline void raw_write_seqcount_t_begin(seqcount_t *s)
 {
 	s->sequence++;
 	smp_wmb();
 }
 
-static inline void raw_write_seqcount_end(seqcount_t *s)
+#define raw_write_seqcount_end(s)	do_raw_write_seqcount_end(s)
+
+static inline void raw_write_seqcount_t_end(seqcount_t *s)
 {
 	smp_wmb();
 	s->sequence++;
@@ -234,7 +251,7 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
 
 /**
  * raw_write_seqcount_barrier() - do a seq write barrier
- * @s: Pointer to &typedef seqcount_t
+ * @s: Pointer to &typedef seqcount_t or any of the seqcount_locktype_t variants
  *
  * This can be used to provide an ordering guarantee instead of the
  * usual consistency guarantee. It is one wmb cheaper, because we can
@@ -268,7 +285,9 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
  *              X = false;
  *      }
  */
-static inline void raw_write_seqcount_barrier(seqcount_t *s)
+#define raw_write_seqcount_barrier(s)	do_raw_write_seqcount_barrier(s)
+
+static inline void raw_write_seqcount_t_barrier(seqcount_t *s)
 {
 	s->sequence++;
 	smp_wmb();
@@ -277,7 +296,7 @@ static inline void raw_write_seqcount_barrier(seqcount_t *s)
 
 /**
  * raw_read_seqcount_latch() - pick even or odd seqcount latch data copy
- * @s: Pointer to &typedef seqcount_t
+ * @s: pointer to &typedef seqcount_t or any of the seqcount_locktype_t variants
  *
  * Use seqcount latching to switch between two storage places with
  * sequence protection to allow interruptible, preemptible, writer
@@ -290,7 +309,9 @@ static inline void raw_write_seqcount_barrier(seqcount_t *s)
  * which data copy to read. Full counter must then be passed to
  * read_seqcount_retry().
  */
-static inline int raw_read_seqcount_latch(seqcount_t *s)
+#define raw_read_seqcount_latch(s)	do_raw_read_seqcount_latch(s)
+
+static inline int raw_read_seqcount_t_latch(seqcount_t *s)
 {
 	/* Pairs with the first smp_wmb() in raw_write_seqcount_latch() */
 	int seq = READ_ONCE(s->sequence); /* ^^^ */
@@ -299,7 +320,7 @@ static inline int raw_read_seqcount_latch(seqcount_t *s)
 
 /**
  * raw_write_seqcount_latch() - redirect readers to even/odd copy
- * @s: Pointer to &typedef seqcount_t
+ * @s: pointer to &typedef seqcount_t or any of the seqcount_locktype_t variants
  *
  * The latch technique is a multiversion concurrency control method that allows
  * queries during non-atomic modifications. If you can guarantee queries never
@@ -384,34 +405,39 @@ static inline int raw_read_seqcount_latch(seqcount_t *s)
 *	When data is a dynamic data structure, one should use regular RCU
  *	patterns to manage the lifetimes of the objects within.
  */
-static inline void raw_write_seqcount_latch(seqcount_t *s)
+#define raw_write_seqcount_latch(s)	do_raw_write_seqcount_latch(s)
+
+static inline void raw_write_seqcount_t_latch(seqcount_t *s)
 {
        smp_wmb();      /* prior stores before incrementing "sequence" */
        s->sequence++;
        smp_wmb();      /* increment "sequence" before following stores */
 }
 
-static inline void __write_seqcount_begin_nested(seqcount_t *s, int subclass)
+#define write_seqcount_begin_nested(s, subclass)		\
+	do_write_seqcount_begin_nested(s, subclass)
+
+static inline void __write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
 {
-	raw_write_seqcount_begin(s);
+	raw_write_seqcount_t_begin(s);
 	seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_);
 }
 
-static inline void write_seqcount_begin_nested(seqcount_t *s, int subclass)
+static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
 {
 	lockdep_assert_preemption_disabled();
-	__write_seqcount_begin_nested(s, subclass);
+	__write_seqcount_t_begin_nested(s, subclass);
 }
 
 /*
- * write_seqcount_begin() without lockdep non-preemptibility checks.
+ * write_seqcount_t_begin() without lockdep non-preemptibility check.
  *
  * Use for internal seqlock.h code where it's known that preemption
- * is already disabled. For example, seqlock_t write functions.
+ * is already disabled. For example, seqlock_t write side functions.
  */
-static inline void __write_seqcount_begin(seqcount_t *s)
+static inline void __write_seqcount_t_begin(seqcount_t *s)
 {
-	__write_seqcount_begin_nested(s, 0);
+	__write_seqcount_t_begin_nested(s, 0);
 }
 
 /**
@@ -422,9 +448,11 @@ static inline void __write_seqcount_begin(seqcount_t *s)
  * seqcount. Seqcount write-side critical sections must be externally
  * serialized and non-preemptible.
  */
-static inline void write_seqcount_begin(seqcount_t *s)
+#define write_seqcount_begin(s)		do_write_seqcount_begin(s)
+
+static inline void write_seqcount_t_begin(seqcount_t *s)
 {
-	write_seqcount_begin_nested(s, 0);
+	write_seqcount_t_begin_nested(s, 0);
 }
 
 /**
@@ -434,25 +462,242 @@ static inline void write_seqcount_begin(seqcount_t *s)
  * write_seqcount_end closes a write-side critical section of the given
  * seqcount.
  */
-static inline void write_seqcount_end(seqcount_t *s)
+#define write_seqcount_end(s)		do_write_seqcount_end(s)
+
+static inline void write_seqcount_t_end(seqcount_t *s)
 {
 	seqcount_release(&s->dep_map, _RET_IP_);
-	raw_write_seqcount_end(s);
+	raw_write_seqcount_t_end(s);
 }
 
 /**
  * write_seqcount_invalidate() - invalidate in-progress read-side seq operations
- * @s: Pointer to &typedef seqcount_t
+ * @s: Pointer to &typedef seqcount_t or any of the seqcount_locktype_t variants
  *
  * After write_seqcount_invalidate, no read-side seq operations will complete
  * successfully and see data older than this.
  */
-static inline void write_seqcount_invalidate(seqcount_t *s)
+#define write_seqcount_invalidate(s)	do_write_seqcount_invalidate(s)
+
+static inline void write_seqcount_t_invalidate(seqcount_t *s)
 {
 	smp_wmb();
 	s->sequence+=2;
 }
 
+/*
+ * Sequence counters with associated locks (seqcount_LOCKTYPE_t)
+ *
+ * A sequence counter which associates the lock used for writer
+ * serialization at initialization time. This enables lockdep to validate
+ * that the write side critical section is properly serialized.
+ *
+ * For associated locks which do not implicitly disable preemption,
+ * preemption protection is enforced in the write side functions.
+ *
+ * See Documentation/locking/seqlock.rst
+ */
+
+/**
+ * typedef seqcount_spinlock_t - sequence count with spinlock associated
+ * @seqcount:		The real sequence counter
+ * @lock:		Pointer to the associated spinlock
+ *
+ * A plain sequence counter with external writer synchronization by a
+ * spinlock. The spinlock is associated to the sequence count in the
+ * static initializer or init function. This enables lockdep to validate
+ * that the write side critical section is properly serialized.
+ */
+typedef struct seqcount_spinlock {
+	seqcount_t      seqcount;
+#ifdef CONFIG_LOCKDEP
+	spinlock_t	*lock;
+#endif
+} seqcount_spinlock_t;
+
+#ifdef CONFIG_LOCKDEP
+
+#define SEQCOUNT_LOCKTYPE_ZERO(seq_name, assoc_lock) {		\
+	.seqcount	= SEQCNT_ZERO(seq_name.seqcount),	\
+	.lock		= (assoc_lock),				\
+}
+
+/* Define as macro due to static lockdep key @ seqcount_init() */
+#define seqcount_locktype_init(s, assoc_lock)			\
+do {								\
+	seqcount_init(&(s)->seqcount);				\
+	(s)->lock = (assoc_lock);				\
+} while (0)
+
+#else /* !CONFIG_LOCKDEP */
+
+#define SEQCOUNT_LOCKTYPE_ZERO(seq_name, assoc_lock) {		\
+	.seqcount	= SEQCNT_ZERO(seq_name.seqcount),	\
+}
+
+#define seqcount_locktype_init(s, assoc_lock)			\
+do {								\
+	seqcount_init(&(s)->seqcount);				\
+} while (0)
+
+#endif
+
+/**
+ * SEQCNT_SPINLOCK_ZERO - static initializer for seqcount_spinlock_t
+ * @name:	Name of the &typedef seqcount_spinlock_t instance
+ * @lock:	Pointer to the associated spinlock
+ */
+#define SEQCNT_SPINLOCK_ZERO(name, lock)	\
+	SEQCOUNT_LOCKTYPE_ZERO(name, lock)
+
+/**
+ * seqcount_spinlock_init - runtime initializer for seqcount_spinlock_t
+ * @s:		Pointer to the &typedef seqcount_spinlock_t instance
+ * @lock:	Pointer to the associated spinlock
+ */
+#define seqcount_spinlock_init(s, lock)		\
+	seqcount_locktype_init(s, lock)
+
+/**
+ * typedef seqcount_raw_spinlock_t - sequence count with raw spinlock associated
+ * @seqcount:		The real sequence counter
+ * @lock:		Pointer to the associated raw spinlock
+ *
+ * A plain sequence counter with external writer synchronization by a
+ * raw spinlock. The raw spinlock is associated to the sequence count in
+ * the static initializer or init function. This enables lockdep to
+ * validate that the write side critical section is properly serialized.
+ */
+typedef struct seqcount_raw_spinlock {
+	seqcount_t      seqcount;
+#ifdef CONFIG_LOCKDEP
+	raw_spinlock_t	*lock;
+#endif
+} seqcount_raw_spinlock_t;
+
+/**
+ * SEQCNT_RAW_SPINLOCK_ZERO - static initializer for seqcount_raw_spinlock_t
+ * @name:	Name of the &typedef seqcount_raw_spinlock_t instance
+ * @lock:	Pointer to the associated raw_spinlock
+ */
+#define SEQCNT_RAW_SPINLOCK_ZERO(name, lock)	\
+	SEQCOUNT_LOCKTYPE_ZERO(name, lock)
+
+/**
+ * seqcount_raw_spinlock_init - runtime initializer for seqcount_raw_spinlock_t
+ * @s:		Pointer to the &typedef seqcount_raw_spinlock_t instance
+ * @lock:	Pointer to the associated raw_spinlock
+ */
+#define seqcount_raw_spinlock_init(s, lock)	\
+	seqcount_locktype_init(s, lock)
+
+/**
+ * typedef seqcount_rwlock_t - sequence count with rwlock associated
+ * @seqcount:		The real sequence counter
+ * @lock:		Pointer to the associated rwlock
+ *
+ * A plain sequence counter with external writer synchronization by a
+ * rwlock. The rwlock is associated to the sequence count in the static
+ * initializer or init function. This enables lockdep to validate that
+ * the write side critical section is properly serialized.
+ */
+typedef struct seqcount_rwlock {
+	seqcount_t      seqcount;
+#ifdef CONFIG_LOCKDEP
+	rwlock_t	*lock;
+#endif
+} seqcount_rwlock_t;
+
+/**
+ * SEQCNT_RWLOCK_ZERO - static initializer for seqcount_rwlock_t
+ * @name:	Name of the &typedef seqcount_rwlock_t instance
+ * @lock:	Pointer to the associated rwlock
+ */
+#define SEQCNT_RWLOCK_ZERO(name, lock)		\
+	SEQCOUNT_LOCKTYPE_ZERO(name, lock)
+
+/**
+ * seqcount_rwlock_init - runtime initializer for seqcount_rwlock_t
+ * @s:		Pointer to the &typedef seqcount_rwlock_t instance
+ * @lock:	Pointer to the associated rwlock
+ */
+#define seqcount_rwlock_init(s, lock)		\
+	seqcount_locktype_init(s, lock)
+
+/**
+ * typedef seqcount_mutex_t - sequence count with mutex associated
+ * @seqcount:		The real sequence counter
+ * @lock:		Pointer to the associated mutex
+ *
+ * A plain sequence counter with external writer synchronization by a
+ * mutex. The mutex is associated to the sequence counter in the static
+ * initializer or init function. This enables lockdep to validate that
+ * the write side critical section is properly serialized.
+ *
+ * The write side API functions write_seqcount_begin()/end() automatically
+ * disable and enable preemption when used with seqcount_mutex_t.
+ */
+typedef struct seqcount_mutex {
+	seqcount_t      seqcount;
+#ifdef CONFIG_LOCKDEP
+	struct mutex	*lock;
+#endif
+} seqcount_mutex_t;
+
+/**
+ * SEQCNT_MUTEX_ZERO - static initializer for seqcount_mutex_t
+ * @name:	Name of the &typedef seqcount_mutex_t instance
+ * @lock:	Pointer to the associated mutex
+ */
+#define SEQCNT_MUTEX_ZERO(name, lock)		\
+	SEQCOUNT_LOCKTYPE_ZERO(name, lock)
+
+/**
+ * seqcount_mutex_init - runtime initializer for seqcount_mutex_t
+ * @s:		Pointer to the &typedef seqcount_mutex_t instance
+ * @lock:	Pointer to the associated mutex
+ */
+#define seqcount_mutex_init(s, lock)		\
+	seqcount_locktype_init(s, lock)
+
+/**
+ * typedef seqcount_ww_mutex_t - sequence count with ww_mutex associated
+ * @seqcount:		The real sequence counter
+ * @lock:		Pointer to the associated ww_mutex
+ *
+ * A plain sequence counter with external writer synchronization by a
+ * ww_mutex. The ww_mutex is associated to the sequence counter in the static
+ * initializer or init function. This enables lockdep to validate that
+ * the write side critical section is properly serialized.
+ *
+ * The write side API functions write_seqcount_begin()/end() automatically
+ * disable and enable preemption when used with seqcount_ww_mutex_t.
+ */
+typedef struct seqcount_ww_mutex {
+	seqcount_t      seqcount;
+#ifdef CONFIG_LOCKDEP
+	struct ww_mutex	*lock;
+#endif
+} seqcount_ww_mutex_t;
+
+/**
+ * SEQCNT_WW_MUTEX_ZERO - static initializer for seqcount_ww_mutex_t
+ * @name:	Name of the &typedef seqcount_ww_mutex_t instance
+ * @lock:	Pointer to the associated ww_mutex
+ */
+#define SEQCNT_WW_MUTEX_ZERO(name, lock)	\
+	SEQCOUNT_LOCKTYPE_ZERO(name, lock)
+
+/**
+ * seqcount_ww_mutex_init - runtime initializer for seqcount_ww_mutex_t
+ * @s:		Pointer to the &typedef seqcount_ww_mutex_t instance
+ * @lock:	Pointer to the associated ww_mutex
+ */
+#define seqcount_ww_mutex_init(s, lock)		\
+	seqcount_locktype_init(s, lock)
+
+#include <linux/seqlock_types_internal.h>
+
 /*
  * Sequential locks (seqlock_t)
  *
@@ -475,7 +720,7 @@ typedef struct {
 
 /**
  * seqlock_init() - dynamic initializer for seqlock_t
- * @sl: Pointer to the seqlock_t instance
+ * @sl: Pointer to the &typedef seqlock_t instance
  */
 #define seqlock_init(sl)				\
 	do {						\
@@ -502,7 +747,7 @@ typedef struct {
  */
 static inline unsigned read_seqbegin(const seqlock_t *sl)
 {
-	return read_seqcount_begin(&sl->seqcount);
+	return read_seqcount_t_begin(&sl->seqcount);
 }
 
 /**
@@ -518,7 +763,7 @@ static inline unsigned read_seqbegin(const seqlock_t *sl)
  */
 static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
 {
-	return read_seqcount_retry(&sl->seqcount, start);
+	return read_seqcount_t_retry(&sl->seqcount, start);
 }
 
 /**
@@ -539,7 +784,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
 static inline void write_seqlock(seqlock_t *sl)
 {
 	spin_lock(&sl->lock);
-	__write_seqcount_begin(&sl->seqcount);
+	__write_seqcount_t_begin(&sl->seqcount);
 }
 
 /**
@@ -551,7 +796,7 @@ static inline void write_seqlock(seqlock_t *sl)
  */
 static inline void write_sequnlock(seqlock_t *sl)
 {
-	write_seqcount_end(&sl->seqcount);
+	write_seqcount_t_end(&sl->seqcount);
 	spin_unlock(&sl->lock);
 }
 
@@ -569,7 +814,7 @@ static inline void write_sequnlock(seqlock_t *sl)
 static inline void write_seqlock_bh(seqlock_t *sl)
 {
 	spin_lock_bh(&sl->lock);
-	__write_seqcount_begin(&sl->seqcount);
+	__write_seqcount_t_begin(&sl->seqcount);
 }
 
 /**
@@ -582,7 +827,7 @@ static inline void write_seqlock_bh(seqlock_t *sl)
  */
 static inline void write_sequnlock_bh(seqlock_t *sl)
 {
-	write_seqcount_end(&sl->seqcount);
+	write_seqcount_t_end(&sl->seqcount);
 	spin_unlock_bh(&sl->lock);
 }
 
@@ -597,7 +842,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl)
 static inline void write_seqlock_irq(seqlock_t *sl)
 {
 	spin_lock_irq(&sl->lock);
-	__write_seqcount_begin(&sl->seqcount);
+	__write_seqcount_t_begin(&sl->seqcount);
 }
 
 /**
@@ -612,7 +857,7 @@ static inline void write_seqlock_irq(seqlock_t *sl)
  */
 static inline void write_sequnlock_irq(seqlock_t *sl)
 {
-	write_seqcount_end(&sl->seqcount);
+	write_seqcount_t_end(&sl->seqcount);
 	spin_unlock_irq(&sl->lock);
 }
 
@@ -621,7 +866,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
 	unsigned long flags;
 
 	spin_lock_irqsave(&sl->lock, flags);
-	__write_seqcount_begin(&sl->seqcount);
+	__write_seqcount_t_begin(&sl->seqcount);
 	return flags;
 }
 
@@ -658,7 +903,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
 static inline void
 write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
 {
-	write_seqcount_end(&sl->seqcount);
+	write_seqcount_t_end(&sl->seqcount);
 	spin_unlock_irqrestore(&sl->lock, flags);
 }
 
diff --git a/include/linux/seqlock_types_internal.h b/include/linux/seqlock_types_internal.h
new file mode 100644
index 000000000000..de635f4c7297
--- /dev/null
+++ b/include/linux/seqlock_types_internal.h
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_SEQLOCK_TYPES_INTERNAL_H
+#define __LINUX_SEQLOCK_TYPES_INTERNAL_H
+
+/*
+ * Sequence counters with associated locks
+ *
+ * Copyright (C) 2020 Linutronix GmbH
+ */
+
+#ifndef __LINUX_SEQLOCK_H
+#error This is an INTERNAL header; it must only be included by seqlock.h
+#endif
+
+#include <linux/mutex.h>
+#include <linux/rwlock.h>
+#include <linux/spinlock.h>
+#include <linux/ww_mutex.h>
+
+/*
+ * @s: pointer to seqcount_t or any of the seqcount_locktype_t variants
+ */
+#define __to_seqcount_t(s)						\
+({									\
+	seqcount_t *seq;						\
+									\
+	if (__same_type(*(s), seqcount_t))				\
+		seq = (seqcount_t *)(s);				\
+	else if (__same_type(*(s), seqcount_spinlock_t))		\
+		seq = &((seqcount_spinlock_t *)(s))->seqcount;		\
+	else if (__same_type(*(s), seqcount_raw_spinlock_t))		\
+		seq = &((seqcount_raw_spinlock_t *)(s))->seqcount;	\
+	else if (__same_type(*(s), seqcount_rwlock_t))			\
+		seq = &((seqcount_rwlock_t *)(s))->seqcount;		\
+	else if (__same_type(*(s), seqcount_mutex_t))			\
+		seq = &((seqcount_mutex_t *)(s))->seqcount;		\
+	else if (__same_type(*(s), seqcount_ww_mutex_t))		\
+		seq = &((seqcount_ww_mutex_t *)(s))->seqcount;		\
+	else								\
+		BUILD_BUG_ON_MSG(1, "Unknown seqcount type");		\
+									\
+	seq;								\
+})
+
+/*
+ *	seqcount_LOCKTYPE_t -- write APIs
+ *
+ * For associated lock types which do not implicitly disable preemption,
+ * enforce preemption protection in the write side functions.
+ *
+ * Never use lockdep for the raw write variants.
+ */
+
+#define __associated_lock_is_preemptible(s)				\
+({									\
+	bool ret;							\
+									\
+	if (__same_type(*(s), seqcount_t) ||				\
+	    __same_type(*(s), seqcount_spinlock_t) ||			\
+	    __same_type(*(s), seqcount_raw_spinlock_t) ||		\
+	    __same_type(*(s), seqcount_rwlock_t)) {			\
+		ret = false;						\
+	} else if (__same_type(*(s), seqcount_mutex_t) ||		\
+		   __same_type(*(s), seqcount_ww_mutex_t)) {		\
+		ret = true;						\
+	} else								\
+		BUILD_BUG_ON_MSG(1, "Unknown seqcount type");		\
+									\
+	ret;								\
+})
+
+#ifdef CONFIG_LOCKDEP
+
+#define __assert_associated_lock_held(s)				\
+do {									\
+	if (__same_type(*(s), seqcount_t))				\
+		break;							\
+									\
+	if (__same_type(*(s), seqcount_spinlock_t))			\
+		lockdep_assert_held(((seqcount_spinlock_t *)(s))->lock);\
+	else if (__same_type(*(s), seqcount_raw_spinlock_t))		\
+		lockdep_assert_held(((seqcount_raw_spinlock_t *)(s))->lock);	\
+	else if (__same_type(*(s), seqcount_rwlock_t))			\
+		lockdep_assert_held_write(((seqcount_rwlock_t *)(s))->lock);	\
+	else if (__same_type(*(s), seqcount_mutex_t))			\
+		lockdep_assert_held(((seqcount_mutex_t *)(s))->lock);	\
+	else if (__same_type(*(s), seqcount_ww_mutex_t))		\
+		lockdep_assert_held(&((seqcount_ww_mutex_t *)(s))->lock->base);	\
+	else								\
+		BUILD_BUG_ON_MSG(1, "Unknown seqcount type");		\
+} while (0)
+
+#else
+
+#define __assert_associated_lock_held(s)				\
+do {									\
+	(void) __to_seqcount_t(s);					\
+} while (0)
+
+#endif /* CONFIG_LOCKDEP */
+
+#define do_raw_write_seqcount_begin(s)					\
+do {									\
+	if (__associated_lock_is_preemptible(s))			\
+		preempt_disable();					\
+									\
+	raw_write_seqcount_t_begin(__to_seqcount_t(s));			\
+} while (0)
+
+#define do_raw_write_seqcount_end(s)					\
+do {									\
+	raw_write_seqcount_t_end(__to_seqcount_t(s));			\
+									\
+	if (__associated_lock_is_preemptible(s))			\
+		preempt_enable();					\
+} while (0)
+
+#define do_write_seqcount_begin_nested(s, subclass)			\
+do {									\
+	__assert_associated_lock_held(s);				\
+									\
+	if (__associated_lock_is_preemptible(s))			\
+		preempt_disable();					\
+									\
+	write_seqcount_t_begin_nested(__to_seqcount_t(s), subclass);	\
+} while (0)
+
+#define do_write_seqcount_begin(s)					\
+do {									\
+	__assert_associated_lock_held(s);				\
+									\
+	if (__associated_lock_is_preemptible(s))			\
+		preempt_disable();					\
+									\
+	write_seqcount_t_begin(__to_seqcount_t(s));			\
+} while (0)
+
+#define do_write_seqcount_end(s)					\
+do {									\
+	write_seqcount_t_end(__to_seqcount_t(s));			\
+									\
+	if (__associated_lock_is_preemptible(s))			\
+		preempt_enable();					\
+} while (0)
+
+#define do_write_seqcount_invalidate(s)					\
+	write_seqcount_t_invalidate(__to_seqcount_t(s))
+
+#define do_raw_write_seqcount_barrier(s)				\
+	raw_write_seqcount_t_barrier(__to_seqcount_t(s))
+
+/*
+ * Latch sequence counters write side critical sections don't need to
+ * run with preemption disabled. Check @raw_write_seqcount_latch().
+ */
+#define do_raw_write_seqcount_latch(s)					\
+	raw_write_seqcount_t_latch(__to_seqcount_t(s))
+
+/*
+ *	seqcount_LOCKTYPE_t -- read APIs
+ */
+
+#define do___read_seqcount_begin(s)					\
+	__read_seqcount_t_begin(__to_seqcount_t(s))
+
+#define do_raw_read_seqcount(s)						\
+	raw_read_seqcount_t(__to_seqcount_t(s))
+
+#define do_raw_seqcount_begin(s)					\
+	raw_seqcount_t_begin(__to_seqcount_t(s))
+
+#define do_raw_read_seqcount_begin(s)					\
+	raw_read_seqcount_t_begin(__to_seqcount_t(s))
+
+#define do_read_seqcount_begin(s)					\
+	read_seqcount_t_begin(__to_seqcount_t(s))
+
+#define do_raw_read_seqcount_latch(s)					\
+	raw_read_seqcount_t_latch(__to_seqcount_t(s))
+
+#define do___read_seqcount_retry(s, start)				\
+	__read_seqcount_t_retry(__to_seqcount_t(s), start)
+
+#define do_read_seqcount_retry(s, start)				\
+	read_seqcount_t_retry(__to_seqcount_t(s), start)
+
+#endif /* __LINUX_SEQLOCK_TYPES_INTERNAL_H */
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 13/25] dma-buf: Use sequence counter with associated wound/wait mutex
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (11 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 12/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-20 10:48   ` Christian König
  2020-05-19 21:45 ` [PATCH v1 14/25] sched: tasks: Use sequence counter with associated spinlock Ahmed S. Darwish
                   ` (11 subsequent siblings)
  24 siblings, 1 reply; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Sumit Semwal,
	Felix Kuehling, Alex Deucher, Christian König,
	David (ChunMing) Zhou, David Airlie, Daniel Vetter, linux-media,
	dri-devel, amd-gfx

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. If the serialization primitive
does not implicitly disable preemption, preemption has to be explicitly
disabled before entering the sequence counter write side critical
section.

The dma-buf reservation subsystem uses plain sequence counters to manage
updates to reservations. Writer serialization is accomplished through a
wound/wait mutex.

Acquiring a wound/wait mutex does not disable preemption, so this needs
to be done manually before and after the write side critical section.

Use the newly-added seqcount_ww_mutex_t instead:

  - It associates the ww_mutex with the sequence count, which enables
    lockdep to validate that the write side critical section is properly
    serialized.

  - It removes the need to explicitly add preempt_disable/enable()
    around the write side critical section because the write_begin/end()
    functions for this new data type automatically do this.

If lockdep is disabled this ww_mutex lock association is compiled out
and has neither storage size nor runtime overhead.
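
For illustration, here is a minimal before/after sketch of the write
side pattern this converts (simplified from the dma-resv hunks below):

	/* Before: plain seqcount_t, manual preemption control */
	preempt_disable();
	write_seqcount_begin(&obj->seq);
	RCU_INIT_POINTER(obj->fence_excl, fence);
	write_seqcount_end(&obj->seq);
	preempt_enable();

	/* After: seqcount_ww_mutex_t. write_seqcount_begin() disables
	 * preemption itself and lockdep-asserts that the associated
	 * ww_mutex, obj->lock, is held.
	 */
	write_seqcount_begin(&obj->seq);
	RCU_INIT_POINTER(obj->fence_excl, fence);
	write_seqcount_end(&obj->seq);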

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 drivers/dma-buf/dma-resv.c                       | 8 +-------
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 2 --
 include/linux/dma-resv.h                         | 2 +-
 3 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 590ce7ad60a0..3aba2b2bfc48 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -128,7 +128,7 @@ subsys_initcall(dma_resv_lockdep);
 void dma_resv_init(struct dma_resv *obj)
 {
 	ww_mutex_init(&obj->lock, &reservation_ww_class);
-	seqcount_init(&obj->seq);
+	seqcount_ww_mutex_init(&obj->seq, &obj->lock);
 
 	RCU_INIT_POINTER(obj->fence, NULL);
 	RCU_INIT_POINTER(obj->fence_excl, NULL);
@@ -259,7 +259,6 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
 	fobj = dma_resv_get_list(obj);
 	count = fobj->shared_count;
 
-	preempt_disable();
 	write_seqcount_begin(&obj->seq);
 
 	for (i = 0; i < count; ++i) {
@@ -281,7 +280,6 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
 	smp_store_mb(fobj->shared_count, count);
 
 	write_seqcount_end(&obj->seq);
-	preempt_enable();
 	dma_fence_put(old);
 }
 EXPORT_SYMBOL(dma_resv_add_shared_fence);
@@ -308,14 +306,12 @@ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
 	if (fence)
 		dma_fence_get(fence);
 
-	preempt_disable();
 	write_seqcount_begin(&obj->seq);
 	/* write_seqcount_begin provides the necessary memory barrier */
 	RCU_INIT_POINTER(obj->fence_excl, fence);
 	if (old)
 		old->shared_count = 0;
 	write_seqcount_end(&obj->seq);
-	preempt_enable();
 
 	/* inplace update, no shared fences */
 	while (i--)
@@ -393,13 +389,11 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
 	src_list = dma_resv_get_list(dst);
 	old = dma_resv_get_excl(dst);
 
-	preempt_disable();
 	write_seqcount_begin(&dst->seq);
 	/* write_seqcount_begin provides the necessary memory barrier */
 	RCU_INIT_POINTER(dst->fence_excl, new);
 	RCU_INIT_POINTER(dst->fence, dst_list);
 	write_seqcount_end(&dst->seq);
-	preempt_enable();
 
 	dma_resv_list_free(src_list);
 	dma_fence_put(old);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 9dff792c9290..87fd32aae8f9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -258,11 +258,9 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
 	new->shared_count = k;
 
 	/* Install the new fence list, seqcount provides the barriers */
-	preempt_disable();
 	write_seqcount_begin(&resv->seq);
 	RCU_INIT_POINTER(resv->fence, new);
 	write_seqcount_end(&resv->seq);
-	preempt_enable();
 
 	/* Drop the references to the removed fences or move them to ef_list */
 	for (i = j, k = 0; i < old->shared_count; ++i) {
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index a6538ae7d93f..d44a77e8a7e3 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -69,7 +69,7 @@ struct dma_resv_list {
  */
 struct dma_resv {
 	struct ww_mutex lock;
-	seqcount_t seq;
+	seqcount_ww_mutex_t seq;
 
 	struct dma_fence __rcu *fence_excl;
 	struct dma_resv_list __rcu *fence;
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 14/25] sched: tasks: Use sequence counter with associated spinlock
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (12 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 13/25] dma-buf: Use sequence counter with associated wound/wait mutex Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 15/25] netfilter: conntrack: " Ahmed S. Darwish
                   ` (10 subsequent siblings)
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Ben Segall, Mel Gorman

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not record
which lock must be held when entering a write side critical section.

Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.

If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
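
As a minimal sketch of the resulting pattern (new_mask, mask and seq
are illustrative locals), the write side now looks like this, while the
lockless read side is unchanged:

	/* Writer: ->alloc_lock must be held; lockdep now checks this */
	spin_lock(&p->alloc_lock);
	write_seqcount_begin(&p->mems_allowed_seq);
	p->mems_allowed = new_mask;
	write_seqcount_end(&p->mems_allowed_seq);
	spin_unlock(&p->alloc_lock);

	/* Reader: retry loop, no lock taken */
	do {
		seq = read_seqcount_begin(&p->mems_allowed_seq);
		mask = p->mems_allowed;
	} while (read_seqcount_retry(&p->mems_allowed_seq, seq));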

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 include/linux/sched.h | 2 +-
 init/init_task.c      | 3 ++-
 kernel/fork.c         | 2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4418f5cb8324..a9ce6fbeb735 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1046,7 +1046,7 @@ struct task_struct {
 	/* Protected by ->alloc_lock: */
 	nodemask_t			mems_allowed;
 	/* Seqence number to catch updates: */
-	seqcount_t			mems_allowed_seq;
+	seqcount_spinlock_t		mems_allowed_seq;
 	int				cpuset_mem_spread_rotor;
 	int				cpuset_slab_spread_rotor;
 #endif
diff --git a/init/init_task.c b/init/init_task.c
index bd403ed3e418..94bf4aea8293 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -142,7 +142,8 @@ struct task_struct init_task
 	.rcu_tasks_idle_cpu = -1,
 #endif
 #ifdef CONFIG_CPUSETS
-	.mems_allowed_seq = SEQCNT_ZERO(init_task.mems_allowed_seq),
+	.mems_allowed_seq = SEQCNT_SPINLOCK_ZERO(init_task.mems_allowed_seq,
+						 &init_task.alloc_lock),
 #endif
 #ifdef CONFIG_RT_MUTEXES
 	.pi_waiters	= RB_ROOT_CACHED,
diff --git a/kernel/fork.c b/kernel/fork.c
index 8c700f881d92..a0fde1f17e0a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2019,7 +2019,7 @@ static __latent_entropy struct task_struct *copy_process(
 #ifdef CONFIG_CPUSETS
 	p->cpuset_mem_spread_rotor = NUMA_NO_NODE;
 	p->cpuset_slab_spread_rotor = NUMA_NO_NODE;
-	seqcount_init(&p->mems_allowed_seq);
+	seqcount_spinlock_init(&p->mems_allowed_seq, &p->alloc_lock);
 #endif
 #ifdef CONFIG_TRACE_IRQFLAGS
 	p->irq_events = 0;
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 15/25] netfilter: conntrack: Use sequence counter with associated spinlock
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (13 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 14/25] sched: tasks: Use sequence counter with associated spinlock Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 16/25] netfilter: nft_set_rbtree: Use sequence counter with associated rwlock Ahmed S. Darwish
                   ` (9 subsequent siblings)
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Pablo Neira Ayuso,
	Jozsef Kadlecsik, Florian Westphal, David S. Miller,
	Jakub Kicinski, netfilter-devel, coreteam, netdev

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not record
which lock must be held when entering a write side critical section.

Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.

If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 include/net/netfilter/nf_conntrack.h | 2 +-
 net/netfilter/nf_conntrack_core.c    | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h
index 9f551f3b69c6..333fd54aec30 100644
--- a/include/net/netfilter/nf_conntrack.h
+++ b/include/net/netfilter/nf_conntrack.h
@@ -286,7 +286,7 @@ int nf_conntrack_hash_resize(unsigned int hashsize);
 
 extern struct hlist_nulls_head *nf_conntrack_hash;
 extern unsigned int nf_conntrack_htable_size;
-extern seqcount_t nf_conntrack_generation;
+extern seqcount_spinlock_t nf_conntrack_generation;
 extern unsigned int nf_conntrack_max;
 
 /* must be called with rcu read lock held */
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index c4582eb71766..48a839377da2 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -180,7 +180,7 @@ EXPORT_SYMBOL_GPL(nf_conntrack_htable_size);
 
 unsigned int nf_conntrack_max __read_mostly;
 EXPORT_SYMBOL_GPL(nf_conntrack_max);
-seqcount_t nf_conntrack_generation __read_mostly;
+seqcount_spinlock_t nf_conntrack_generation __read_mostly;
 static unsigned int nf_conntrack_hash_rnd __read_mostly;
 
 static u32 hash_conntrack_raw(const struct nf_conntrack_tuple *tuple,
@@ -2512,7 +2512,8 @@ int nf_conntrack_init_start(void)
 	/* struct nf_ct_ext uses u8 to store offsets/size */
 	BUILD_BUG_ON(total_extension_size() > 255u);
 
-	seqcount_init(&nf_conntrack_generation);
+	seqcount_spinlock_init(&nf_conntrack_generation,
+			       &nf_conntrack_locks_all_lock);
 
 	for (i = 0; i < CONNTRACK_LOCKS; i++)
 		spin_lock_init(&nf_conntrack_locks[i]);
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 16/25] netfilter: nft_set_rbtree: Use sequence counter with associated rwlock
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (14 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 15/25] netfilter: conntrack: " Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 17/25] xfrm: policy: Use sequence counters with associated lock Ahmed S. Darwish
                   ` (8 subsequent siblings)
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Pablo Neira Ayuso,
	Jozsef Kadlecsik, Florian Westphal, David S. Miller,
	Jakub Kicinski, netfilter-devel, coreteam, netdev

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not record
which lock must be held when entering a write side critical section.

Use the new seqcount_rwlock_t data type, which allows an rwlock to be
associated with the sequence counter. This enables lockdep to verify that
the rwlock used for writer serialization is held when the write side
critical section is entered.

If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
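
As a minimal sketch, the rbtree set's write side now looks like this
(the actual update logic is elided; lockdep verifies that priv->lock is
write-held):

	write_lock_bh(&priv->lock);
	write_seqcount_begin(&priv->count);
	/* insert into / erase from priv->root */
	write_seqcount_end(&priv->count);
	write_unlock_bh(&priv->lock);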

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 net/netfilter/nft_set_rbtree.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
index 3ffef454d469..f50d986d43c5 100644
--- a/net/netfilter/nft_set_rbtree.c
+++ b/net/netfilter/nft_set_rbtree.c
@@ -18,7 +18,7 @@
 struct nft_rbtree {
 	struct rb_root		root;
 	rwlock_t		lock;
-	seqcount_t		count;
+	seqcount_rwlock_t	count;
 	struct delayed_work	gc_work;
 };
 
@@ -505,7 +505,7 @@ static int nft_rbtree_init(const struct nft_set *set,
 	struct nft_rbtree *priv = nft_set_priv(set);
 
 	rwlock_init(&priv->lock);
-	seqcount_init(&priv->count);
+	seqcount_rwlock_init(&priv->count, &priv->lock);
 	priv->root = RB_ROOT;
 
 	INIT_DEFERRABLE_WORK(&priv->gc_work, nft_rbtree_gc);
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 17/25] xfrm: policy: Use sequence counters with associated lock
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (15 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 16/25] netfilter: nft_set_rbtree: Use sequence counter with associated rwlock Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 18/25] timekeeping: Use sequence counter with associated raw spinlock Ahmed S. Darwish
                   ` (7 subsequent siblings)
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Steffen Klassert,
	Herbert Xu, David S. Miller, Jakub Kicinski, netdev

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. If the serialization primitive
does not implicitly disable preemption, preemption has to be explicitly
disabled before entering the sequence counter write side critical
section.

A plain seqcount_t does not record which lock must be held when
entering a write side critical section.

Use the new seqcount_spinlock_t and seqcount_mutex_t data types instead,
which allow a lock to be associated with the sequence counter. This enables
lockdep to verify that the lock used for writer serialization is held
when the write side critical section is entered.

If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
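
seqcount_mutex_t is the interesting case here: a mutex does not disable
preemption, so for this type the write_begin/end() functions disable
and re-enable preemption themselves. A minimal sketch of the hash
resize writer (rehash logic elided):

	mutex_lock(&hash_resize_mutex);

	/* Asserts hash_resize_mutex is held and disables preemption,
	 * since a mutex does not do so implicitly. */
	write_seqcount_begin(&xfrm_policy_hash_generation);

	/* ... rehash the policy tables ... */

	write_seqcount_end(&xfrm_policy_hash_generation);
	mutex_unlock(&hash_resize_mutex);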

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 net/xfrm/xfrm_policy.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index 297b2fdb3c29..aae78a7aecd7 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -122,7 +122,7 @@ struct xfrm_pol_inexact_bin {
 	/* list containing '*:*' policies */
 	struct hlist_head hhead;
 
-	seqcount_t count;
+	seqcount_spinlock_t count;
 	/* tree sorted by daddr/prefix */
 	struct rb_root root_d;
 
@@ -155,7 +155,7 @@ static struct xfrm_policy_afinfo const __rcu *xfrm_policy_afinfo[AF_INET6 + 1]
 						__read_mostly;
 
 static struct kmem_cache *xfrm_dst_cache __ro_after_init;
-static __read_mostly seqcount_t xfrm_policy_hash_generation;
+static __read_mostly seqcount_mutex_t xfrm_policy_hash_generation;
 
 static struct rhashtable xfrm_policy_inexact_table;
 static const struct rhashtable_params xfrm_pol_inexact_params;
@@ -719,7 +719,7 @@ xfrm_policy_inexact_alloc_bin(const struct xfrm_policy *pol, u8 dir)
 	INIT_HLIST_HEAD(&bin->hhead);
 	bin->root_d = RB_ROOT;
 	bin->root_s = RB_ROOT;
-	seqcount_init(&bin->count);
+	seqcount_spinlock_init(&bin->count, &net->xfrm.xfrm_policy_lock);
 
 	prev = rhashtable_lookup_get_insert_key(&xfrm_policy_inexact_table,
 						&bin->k, &bin->head,
@@ -1911,7 +1911,7 @@ static int xfrm_policy_match(const struct xfrm_policy *pol,
 
 static struct xfrm_pol_inexact_node *
 xfrm_policy_lookup_inexact_addr(const struct rb_root *r,
-				seqcount_t *count,
+				seqcount_spinlock_t *count,
 				const xfrm_address_t *addr, u16 family)
 {
 	const struct rb_node *parent;
@@ -4158,7 +4158,7 @@ void __init xfrm_init(void)
 {
 	register_pernet_subsys(&xfrm_net_ops);
 	xfrm_dev_init();
-	seqcount_init(&xfrm_policy_hash_generation);
+	seqcount_mutex_init(&xfrm_policy_hash_generation, &hash_resize_mutex);
 	xfrm_input_init();
 
 #ifdef CONFIG_INET_ESPINTCP
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 18/25] timekeeping: Use sequence counter with associated raw spinlock
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (16 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 17/25] xfrm: policy: Use sequence counters with associated lock Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 19/25] vfs: Use sequence counter with associated spinlock Ahmed S. Darwish
                   ` (6 subsequent siblings)
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, John Stultz,
	Stephen Boyd

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not record
which lock must be held when entering a write side critical section.

Use the new seqcount_raw_spinlock_t data type, which allows a raw
spinlock to be associated with the sequence counter. This enables
lockdep to verify that the raw spinlock used for writer serialization
is held when the write side critical section is entered.

If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
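
Note the build-time initializers in the hunks below: tk_core and the
two tk_fast structures are statically allocated, so they use
SEQCNT_RAW_SPINLOCK_ZERO() instead of the runtime init function. As a
minimal sketch, with my_seq and my_lock as hypothetical names:

	static DEFINE_RAW_SPINLOCK(my_lock);

	/* Build-time initialization */
	static seqcount_raw_spinlock_t my_seq =
		SEQCNT_RAW_SPINLOCK_ZERO(my_seq, &my_lock);

	/* Equivalent runtime initialization */
	seqcount_raw_spinlock_init(&my_seq, &my_lock);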

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 kernel/time/timekeeping.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 9ebaab13339d..24e91a1e2acd 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -39,18 +39,19 @@ enum timekeeping_adv_mode {
 	TK_ADV_FREQ
 };
 
+static DEFINE_RAW_SPINLOCK(timekeeper_lock);
+
 /*
  * The most important data for readout fits into a single 64 byte
  * cache line.
  */
 static struct {
-	seqcount_t		seq;
+	seqcount_raw_spinlock_t	seq;
 	struct timekeeper	timekeeper;
 } tk_core ____cacheline_aligned = {
-	.seq = SEQCNT_ZERO(tk_core.seq),
+	.seq = SEQCNT_RAW_SPINLOCK_ZERO(tk_core.seq, &timekeeper_lock),
 };
 
-static DEFINE_RAW_SPINLOCK(timekeeper_lock);
 static struct timekeeper shadow_timekeeper;
 
 /**
@@ -63,7 +64,7 @@ static struct timekeeper shadow_timekeeper;
  * See @update_fast_timekeeper() below.
  */
 struct tk_fast {
-	seqcount_t		seq;
+	seqcount_raw_spinlock_t	seq;
 	struct tk_read_base	base[2];
 };
 
@@ -80,11 +81,13 @@ static struct clocksource dummy_clock = {
 };
 
 static struct tk_fast tk_fast_mono ____cacheline_aligned = {
+	.seq     = SEQCNT_RAW_SPINLOCK_ZERO(tk_fast_mono.seq, &timekeeper_lock),
 	.base[0] = { .clock = &dummy_clock, },
 	.base[1] = { .clock = &dummy_clock, },
 };
 
 static struct tk_fast tk_fast_raw  ____cacheline_aligned = {
+	.seq     = SEQCNT_RAW_SPINLOCK_ZERO(tk_fast_raw.seq, &timekeeper_lock),
 	.base[0] = { .clock = &dummy_clock, },
 	.base[1] = { .clock = &dummy_clock, },
 };
@@ -157,7 +160,7 @@ static inline void tk_update_sleep_time(struct timekeeper *tk, ktime_t delta)
  * tk_clock_read - atomic clocksource read() helper
  *
  * This helper is necessary to use in the read paths because, while the
- * seqlock ensures we don't return a bad value while structures are updated,
+ * seqcount ensures we don't return a bad value while structures are updated,
  * it doesn't protect from potential crashes. There is the possibility that
  * the tkr's clocksource may change between the read reference, and the
  * clock reference passed to the read function.  This can cause crashes if
@@ -222,10 +225,10 @@ static inline u64 timekeeping_get_delta(const struct tk_read_base *tkr)
 	unsigned int seq;
 
 	/*
-	 * Since we're called holding a seqlock, the data may shift
+	 * Since we're called holding a seqcount, the data may shift
 	 * under us while we're doing the calculation. This can cause
 	 * false positives, since we'd note a problem but throw the
-	 * results away. So nest another seqlock here to atomically
+	 * results away. So nest another seqcount here to atomically
 	 * grab the points we are checking with.
 	 */
 	do {
@@ -486,7 +489,7 @@ EXPORT_SYMBOL_GPL(ktime_get_raw_fast_ns);
  *
  * To keep it NMI safe since we're accessing from tracing, we're not using a
  * separate timekeeper with updates to monotonic clock and boot offset
- * protected with seqlocks. This has the following minor side effects:
+ * protected with seqcounts. This has the following minor side effects:
  *
  * (1) Its possible that a timestamp be taken after the boot offset is updated
  * but before the timekeeper is updated. If this happens, the new boot offset
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 19/25] vfs: Use sequence counter with associated spinlock
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (17 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 18/25] timekeeping: Use sequence counter with associated raw spinlock Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 20/25] raid5: " Ahmed S. Darwish
                   ` (5 subsequent siblings)
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Alexander Viro,
	linux-fsdevel

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not record
which lock must be held when entering a write side critical section.

Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.

If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
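
The read side is the interesting part for dentries: RCU path walk
samples d_seq locklessly and is not affected by this conversion. A
rough sketch of that pattern (simplified from __d_lookup_rcu()):

	unsigned seq;

	seq = raw_seqcount_begin(&dentry->d_seq);
	/* speculatively read d_name, d_inode, ... */
	if (read_seqcount_retry(&dentry->d_seq, seq))
		goto retry;	/* dentry changed under us, try again */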

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 fs/dcache.c               | 2 +-
 fs/fs_struct.c            | 4 ++--
 include/linux/dcache.h    | 2 +-
 include/linux/fs_struct.h | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/dcache.c b/fs/dcache.c
index b280e07e162b..e5f365d8fd67 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -1727,7 +1727,7 @@ static struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
 	dentry->d_lockref.count = 1;
 	dentry->d_flags = 0;
 	spin_lock_init(&dentry->d_lock);
-	seqcount_init(&dentry->d_seq);
+	seqcount_spinlock_init(&dentry->d_seq, &dentry->d_lock);
 	dentry->d_inode = NULL;
 	dentry->d_parent = dentry;
 	dentry->d_sb = sb;
diff --git a/fs/fs_struct.c b/fs/fs_struct.c
index ca639ed967b7..04b3f5b9c629 100644
--- a/fs/fs_struct.c
+++ b/fs/fs_struct.c
@@ -117,7 +117,7 @@ struct fs_struct *copy_fs_struct(struct fs_struct *old)
 		fs->users = 1;
 		fs->in_exec = 0;
 		spin_lock_init(&fs->lock);
-		seqcount_init(&fs->seq);
+		seqcount_spinlock_init(&fs->seq, &fs->lock);
 		fs->umask = old->umask;
 
 		spin_lock(&old->lock);
@@ -163,6 +163,6 @@ EXPORT_SYMBOL(current_umask);
 struct fs_struct init_fs = {
 	.users		= 1,
 	.lock		= __SPIN_LOCK_UNLOCKED(init_fs.lock),
-	.seq		= SEQCNT_ZERO(init_fs.seq),
+	.seq		= SEQCNT_SPINLOCK_ZERO(init_fs.seq, &init_fs.lock),
 	.umask		= 0022,
 };
diff --git a/include/linux/dcache.h b/include/linux/dcache.h
index c1488cc84fd9..235563da356d 100644
--- a/include/linux/dcache.h
+++ b/include/linux/dcache.h
@@ -89,7 +89,7 @@ extern struct dentry_stat_t dentry_stat;
 struct dentry {
 	/* RCU lookup touched fields */
 	unsigned int d_flags;		/* protected by d_lock */
-	seqcount_t d_seq;		/* per dentry seqlock */
+	seqcount_spinlock_t d_seq;	/* per dentry seqlock */
 	struct hlist_bl_node d_hash;	/* lookup hash list */
 	struct dentry *d_parent;	/* parent directory */
 	struct qstr d_name;
diff --git a/include/linux/fs_struct.h b/include/linux/fs_struct.h
index cf1015abfbf2..783b48dedb72 100644
--- a/include/linux/fs_struct.h
+++ b/include/linux/fs_struct.h
@@ -9,7 +9,7 @@
 struct fs_struct {
 	int users;
 	spinlock_t lock;
-	seqcount_t seq;
+	seqcount_spinlock_t seq;
 	int umask;
 	int in_exec;
 	struct path root, pwd;
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 20/25] raid5: Use sequence counter with associated spinlock
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (18 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 19/25] vfs: Use sequence counter with associated spinlock Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 21/25] iocost: " Ahmed S. Darwish
                   ` (4 subsequent siblings)
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Song Liu, linux-raid

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not record
which lock must be held when entering a write side critical section.

Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.

If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 drivers/md/raid5.c | 2 +-
 drivers/md/raid5.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index ba00e9877f02..69f31c675b58 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -6929,7 +6929,7 @@ static struct r5conf *setup_conf(struct mddev *mddev)
 	} else
 		goto abort;
 	spin_lock_init(&conf->device_lock);
-	seqcount_init(&conf->gen_lock);
+	seqcount_spinlock_init(&conf->gen_lock, &conf->device_lock);
 	mutex_init(&conf->cache_size_mutex);
 	init_waitqueue_head(&conf->wait_for_quiescent);
 	init_waitqueue_head(&conf->wait_for_stripe);
diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
index f90e0704bed9..a2c9e9e9f5ac 100644
--- a/drivers/md/raid5.h
+++ b/drivers/md/raid5.h
@@ -589,7 +589,7 @@ struct r5conf {
 	int			prev_chunk_sectors;
 	int			prev_algo;
 	short			generation; /* increments with every reshape */
-	seqcount_t		gen_lock;	/* lock against generation changes */
+	seqcount_spinlock_t	gen_lock;	/* lock against generation changes */
 	unsigned long		reshape_checkpoint; /* Time we last updated
 						     * metadata */
 	long long		min_offset_diff; /* minimum difference between
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 21/25] iocost: Use sequence counter with associated spinlock
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (19 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 20/25] raid5: " Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 22/25] NFSv4: " Ahmed S. Darwish
                   ` (3 subsequent siblings)
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Jens Axboe, linux-block

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not record
which lock must be held when entering a write side critical section.

Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.

If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.
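
Note that the hunk below also drops an explicit
lockdep_assert_held(&ioc->lock): with seqcount_spinlock_t, the
write_seqcount_begin() call performs that assertion itself. Roughly,
simplified from ioc_start_period():

	spin_lock_irq(&ioc->lock);

	/* Already lockdep-asserts ioc->lock, making the old explicit
	 * assertion redundant. */
	write_seqcount_begin(&ioc->period_seqcount);
	ioc->period_at = ktime_to_us(ktime_get());
	write_seqcount_end(&ioc->period_seqcount);

	spin_unlock_irq(&ioc->lock);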

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 block/blk-iocost.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 7c1fe605d0d6..8029a9e8fa55 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -405,7 +405,7 @@ struct ioc {
 	enum ioc_running		running;
 	atomic64_t			vtime_rate;
 
-	seqcount_t			period_seqcount;
+	seqcount_spinlock_t		period_seqcount;
 	u32				period_at;	/* wallclock starttime */
 	u64				period_at_vtime; /* vtime starttime */
 
@@ -872,7 +872,6 @@ static void ioc_now(struct ioc *ioc, struct ioc_now *now)
 
 static void ioc_start_period(struct ioc *ioc, struct ioc_now *now)
 {
-	lockdep_assert_held(&ioc->lock);
 	WARN_ON_ONCE(ioc->running != IOC_RUNNING);
 
 	write_seqcount_begin(&ioc->period_seqcount);
@@ -1958,7 +1957,7 @@ static int blk_iocost_init(struct request_queue *q)
 
 	ioc->running = IOC_IDLE;
 	atomic64_set(&ioc->vtime_rate, VTIME_PER_USEC);
-	seqcount_init(&ioc->period_seqcount);
+	seqcount_spinlock_init(&ioc->period_seqcount, &ioc->lock);
 	ioc->period_at = ktime_to_us(ktime_get());
 	atomic64_set(&ioc->cur_period, 0);
 	atomic_set(&ioc->hweight_gen, 0);
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 22/25] NFSv4: Use sequence counter with associated spinlock
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (20 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 21/25] iocost: " Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 23/25] userfaultfd: " Ahmed S. Darwish
                   ` (2 subsequent siblings)
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Trond Myklebust,
	Anna Schumaker, linux-nfs

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not record
which lock must be held when entering a write side critical section.

Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.

If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 fs/nfs/nfs4_fs.h   | 2 +-
 fs/nfs/nfs4state.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
index 2b7f6dcd2eb8..210e590e1f71 100644
--- a/fs/nfs/nfs4_fs.h
+++ b/fs/nfs/nfs4_fs.h
@@ -117,7 +117,7 @@ struct nfs4_state_owner {
 	unsigned long	     so_flags;
 	struct list_head     so_states;
 	struct nfs_seqid_counter so_seqid;
-	seqcount_t	     so_reclaim_seqcount;
+	seqcount_spinlock_t  so_reclaim_seqcount;
 	struct mutex	     so_delegreturn_mutex;
 };
 
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index ac93715c05a4..9b2bad35ad24 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -509,7 +509,7 @@ nfs4_alloc_state_owner(struct nfs_server *server,
 	nfs4_init_seqid_counter(&sp->so_seqid);
 	atomic_set(&sp->so_count, 1);
 	INIT_LIST_HEAD(&sp->so_lru);
-	seqcount_init(&sp->so_reclaim_seqcount);
+	seqcount_spinlock_init(&sp->so_reclaim_seqcount, &sp->so_lock);
 	mutex_init(&sp->so_delegreturn_mutex);
 	return sp;
 }
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 23/25] userfaultfd: Use sequence counter with associated spinlock
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (21 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 22/25] NFSv4: " Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 24/25] kvm/eventfd: " Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 25/25] hrtimer: Use sequence counter with associated raw spinlock Ahmed S. Darwish
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Alexander Viro,
	linux-fsdevel

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not record
which lock must be held when entering a write side critical section.

Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.

If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 fs/userfaultfd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index e39fdec8a0b0..dd3aab31c50f 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -61,7 +61,7 @@ struct userfaultfd_ctx {
 	/* waitqueue head for events */
 	wait_queue_head_t event_wqh;
 	/* a refile sequence protected by fault_pending_wqh lock */
-	struct seqcount refile_seq;
+	seqcount_spinlock_t refile_seq;
 	/* pseudo fd refcounting */
 	refcount_t refcount;
 	/* userfaultfd syscall flags */
@@ -1998,7 +1998,7 @@ static void init_once_userfaultfd_ctx(void *mem)
 	init_waitqueue_head(&ctx->fault_wqh);
 	init_waitqueue_head(&ctx->event_wqh);
 	init_waitqueue_head(&ctx->fd_wqh);
-	seqcount_init(&ctx->refile_seq);
+	seqcount_spinlock_init(&ctx->refile_seq, &ctx->fault_pending_wqh.lock);
 }
 
 SYSCALL_DEFINE1(userfaultfd, int, flags)
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 24/25] kvm/eventfd: Use sequence counter with associated spinlock
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (22 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 23/25] userfaultfd: " Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  2020-05-19 21:45 ` [PATCH v1 25/25] hrtimer: Use sequence counter with associated raw spinlock Ahmed S. Darwish
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish, Paolo Bonzini, kvm

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not record
which lock must be held when entering a write side critical section.

Use the new seqcount_spinlock_t data type, which allows a spinlock to be
associated with the sequence counter. This enables lockdep to verify that
the spinlock used for writer serialization is held when the write side
critical section is entered.

If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 include/linux/kvm_irqfd.h | 2 +-
 virt/kvm/eventfd.c        | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvm_irqfd.h b/include/linux/kvm_irqfd.h
index dc1da020305b..dac047abdba7 100644
--- a/include/linux/kvm_irqfd.h
+++ b/include/linux/kvm_irqfd.h
@@ -42,7 +42,7 @@ struct kvm_kernel_irqfd {
 	wait_queue_entry_t wait;
 	/* Update side is protected by irqfds.lock */
 	struct kvm_kernel_irq_routing_entry irq_entry;
-	seqcount_t irq_entry_sc;
+	seqcount_spinlock_t irq_entry_sc;
 	/* Used for level IRQ fast-path */
 	int gsi;
 	struct work_struct inject;
diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
index 67b6fc153e9c..8694a2920ea9 100644
--- a/virt/kvm/eventfd.c
+++ b/virt/kvm/eventfd.c
@@ -303,7 +303,7 @@ kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
 	INIT_LIST_HEAD(&irqfd->list);
 	INIT_WORK(&irqfd->inject, irqfd_inject);
 	INIT_WORK(&irqfd->shutdown, irqfd_shutdown);
-	seqcount_init(&irqfd->irq_entry_sc);
+	seqcount_spinlock_init(&irqfd->irq_entry_sc, &kvm->irqfds.lock);
 
 	f = fdget(args->fd);
 	if (!f.file) {
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v1 25/25] hrtimer: Use sequence counter with associated raw spinlock
  2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
                   ` (23 preceding siblings ...)
  2020-05-19 21:45 ` [PATCH v1 24/25] kvm/eventfd: " Ahmed S. Darwish
@ 2020-05-19 21:45 ` Ahmed S. Darwish
  24 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-19 21:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Ahmed S. Darwish

A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not record
which lock must be held when entering a write side critical section.

Use the new seqcount_raw_spinlock_t data type, which allows a raw
spinlock to be associated with the sequence counter. This enables
lockdep to verify that the raw spinlock used for writer serialization
is held when the write side critical section is entered.

If lockdep is disabled this lock association is compiled out and has
neither storage size nor runtime overhead.

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 include/linux/hrtimer.h |  2 +-
 kernel/time/hrtimer.c   | 13 ++++++++++---
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
index 15c8ac313678..25993b86ac5c 100644
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
@@ -159,7 +159,7 @@ struct hrtimer_clock_base {
 	struct hrtimer_cpu_base	*cpu_base;
 	unsigned int		index;
 	clockid_t		clockid;
-	seqcount_t		seq;
+	seqcount_raw_spinlock_t	seq;
 	struct hrtimer		*running;
 	struct timerqueue_head	active;
 	ktime_t			(*get_time)(void);
diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index d89da1c7e005..c4038511d5c9 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -135,7 +135,11 @@ static const int hrtimer_clock_to_base_table[MAX_CLOCKS] = {
  * timer->base->cpu_base
  */
 static struct hrtimer_cpu_base migration_cpu_base = {
-	.clock_base = { { .cpu_base = &migration_cpu_base, }, },
+	.clock_base = { {
+		.cpu_base = &migration_cpu_base,
+		.seq      = SEQCNT_RAW_SPINLOCK_ZERO(migration_cpu_base.seq,
+						     &migration_cpu_base.lock),
+	}, },
 };
 
 #define migration_base	migration_cpu_base.clock_base[0]
@@ -1998,8 +2002,11 @@ int hrtimers_prepare_cpu(unsigned int cpu)
 	int i;
 
 	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
-		cpu_base->clock_base[i].cpu_base = cpu_base;
-		timerqueue_init_head(&cpu_base->clock_base[i].active);
+		struct hrtimer_clock_base *clock_b = &cpu_base->clock_base[i];
+
+		clock_b->cpu_base = cpu_base;
+		seqcount_raw_spinlock_init(&clock_b->seq, &cpu_base->lock);
+		timerqueue_init_head(&clock_b->active);
 	}
 
 	cpu_base->cpu = cpu;
-- 
2.20.1


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-19 21:45 ` [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount Ahmed S. Darwish
@ 2020-05-19 22:01   ` Stephen Hemminger
  2020-05-19 22:23     ` Thomas Gleixner
  2020-05-20  2:01   ` Eric Dumazet
  2020-05-20 14:37   ` Dan Carpenter
  2 siblings, 1 reply; 77+ messages in thread
From: Stephen Hemminger @ 2020-05-19 22:01 UTC (permalink / raw)
  To: Ahmed S. Darwish
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Sebastian A. Siewior, Steven Rostedt, LKML,
	David S. Miller, Jakub Kicinski, netdev

On Tue, 19 May 2020 23:45:23 +0200
"Ahmed S. Darwish" <a.darwish@linutronix.de> wrote:

> Sequence counters write paths are critical sections that must never be
> preempted, and blocking, even for CONFIG_PREEMPTION=n, is not allowed.
> 
> Commit 5dbe7c178d3f ("net: fix kernel deadlock with interface rename and
> netdev name retrieval.") handled a deadlock, observed with
> CONFIG_PREEMPTION=n, where the devnet_rename seqcount read side was
> infinitely spinning: it got scheduled after the seqcount write side
> blocked inside its own critical section.
> 
> To fix that deadlock, among other issues, the commit added a
> cond_resched() inside the read side section. While this will get the
> non-preemptible kernel eventually unstuck, the seqcount reader is fully
> exhausting its slice just spinning -- until TIF_NEED_RESCHED is set.
> 
> The fix is also still broken: if the seqcount reader belongs to a
> real-time scheduling policy, it can spin forever and the kernel will
> livelock.
> 
> Disabling preemption over the seqcount write side critical section will
> not work: inside it are a number of GFP_KERNEL allocations and mutex
> locking through the drivers/base/ :: device_rename() call chain.
> 
> From all the above, replace the seqcount with a rwsem.
> 
> Fixes: 5dbe7c178d3f (net: fix kernel deadlock with interface rename and netdev name retrieval.)
> Fixes: 30e6c9fa93cf (net: devnet_rename_seq should be a seqcount)
> Fixes: c91f6df2db49 (sockopt: Change getsockopt() of SO_BINDTODEVICE to return an interface name)
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Have you performance tested this with 1000s of network devices?

The reason the seqcount logic was used here was to achieve scalability,
and a semaphore does not scale as well.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-19 22:01   ` Stephen Hemminger
@ 2020-05-19 22:23     ` Thomas Gleixner
  2020-05-19 23:11       ` Stephen Hemminger
  0 siblings, 1 reply; 77+ messages in thread
From: Thomas Gleixner @ 2020-05-19 22:23 UTC (permalink / raw)
  To: Stephen Hemminger, Ahmed S. Darwish
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, David S. Miller,
	Jakub Kicinski, netdev

Stephen Hemminger <stephen@networkplumber.org> writes:
> On Tue, 19 May 2020 23:45:23 +0200
> "Ahmed S. Darwish" <a.darwish@linutronix.de> wrote:
>
>> Sequence counters write paths are critical sections that must never be
>> preempted, and blocking, even for CONFIG_PREEMPTION=n, is not allowed.
>> 
>> Commit 5dbe7c178d3f ("net: fix kernel deadlock with interface rename and
>> netdev name retrieval.") handled a deadlock, observed with
>> CONFIG_PREEMPTION=n, where the devnet_rename seqcount read side was
>> infinitely spinning: it got scheduled after the seqcount write side
>> blocked inside its own critical section.
>> 
>> To fix that deadlock, among other issues, the commit added a
>> cond_resched() inside the read side section. While this will get the
>> non-preemptible kernel eventually unstuck, the seqcount reader is fully
>> exhausting its slice just spinning -- until TIF_NEED_RESCHED is set.
>> 
>> The fix is also still broken: if the seqcount reader belongs to a
>> real-time scheduling policy, it can spin forever and the kernel will
>> livelock.
>> 
>> Disabling preemption over the seqcount write side critical section will
>> not work: inside it are a number of GFP_KERNEL allocations and mutex
>> locking through the drivers/base/ :: device_rename() call chain.
>> 
>> From all the above, replace the seqcount with a rwsem.
>> 
>> Fixes: 5dbe7c178d3f (net: fix kernel deadlock with interface rename and netdev name retrieval.)
>> Fixes: 30e6c9fa93cf (net: devnet_rename_seq should be a seqcount)
>> Fixes: c91f6df2db49 (sockopt: Change getsockopt() of SO_BINDTODEVICE to return an interface name)
>> Cc: <stable@vger.kernel.org>
>> Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
>> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>
> Have you performance tested this with 1000s of network devices?

No. We did not. -ENOTESTCASE

> The reason the seqcount logic was used here was to achieve scalability,
> and a semaphore does not scale as well.

That still does not make the livelock magically go away. Just make a
reader with real-time priority preempt the writer and the system stops
dead. The net result is performance <= 0.

This was observed on RT kernels without a special 1000's of network
devices test case.

Just for the record: This is not an RT-specific problem. You can
reproduce it w/o an RT kernel as well. Just run the reader with a
real-time scheduling policy.
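
For instance, a hypothetical reproducer only needs to give the reading
task an RT policy via the standard sched_setscheduler() interface
(reader_pid being the reader's PID):

	#include <sched.h>
	#include <stdio.h>

	struct sched_param param = { .sched_priority = 50 };

	/* A SCHED_FIFO reader that hits the seqcount retry loop while
	 * the writer is preempted inside its critical section will
	 * spin forever: livelock. */
	if (sched_setscheduler(reader_pid, SCHED_FIFO, &param))
		perror("sched_setscheduler");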

As much as you hate it from a performance POV, the only sane rule of
programming is: Correctness first.

And this code clearly violates that rule.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-19 22:23     ` Thomas Gleixner
@ 2020-05-19 23:11       ` Stephen Hemminger
  2020-05-19 23:42         ` Thomas Gleixner
  0 siblings, 1 reply; 77+ messages in thread
From: Stephen Hemminger @ 2020-05-19 23:11 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Ahmed S. Darwish, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Paul E. McKenney, Sebastian A. Siewior, Steven Rostedt, LKML,
	David S. Miller, Jakub Kicinski, netdev

On Wed, 20 May 2020 00:23:48 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:

> Stephen Hemminger <stephen@networkplumber.org> writes:
> > On Tue, 19 May 2020 23:45:23 +0200
> > "Ahmed S. Darwish" <a.darwish@linutronix.de> wrote:
> >  
> >> Sequence counters write paths are critical sections that must never be
> >> preempted, and blocking, even for CONFIG_PREEMPTION=n, is not allowed.
> >> 
> >> Commit 5dbe7c178d3f ("net: fix kernel deadlock with interface rename and
> >> netdev name retrieval.") handled a deadlock, observed with
> >> CONFIG_PREEMPTION=n, where the devnet_rename seqcount read side was
> >> infinitely spinning: it got scheduled after the seqcount write side
> >> blocked inside its own critical section.
> >> 
> >> To fix that deadlock, among other issues, the commit added a
> >> cond_resched() inside the read side section. While this will get the
> >> non-preemptible kernel eventually unstuck, the seqcount reader is fully
> >> exhausting its slice just spinning -- until TIF_NEED_RESCHED is set.
> >> 
> >> The fix is also still broken: if the seqcount reader belongs to a
> >> real-time scheduling policy, it can spin forever and the kernel will
> >> livelock.
> >> 
> >> Disabling preemption over the seqcount write side critical section will
> >> not work: inside it are a number of GFP_KERNEL allocations and mutex
> >> locking through the drivers/base/ :: device_rename() call chain.
> >> 
> >> From all the above, replace the seqcount with a rwsem.
> >> 
> >> Fixes: 5dbe7c178d3f (net: fix kernel deadlock with interface rename and netdev name retrieval.)
> >> Fixes: 30e6c9fa93cf (net: devnet_rename_seq should be a seqcount)
> >> Fixes: c91f6df2db49 (sockopt: Change getsockopt() of SO_BINDTODEVICE to return an interface name)
> >> Cc: <stable@vger.kernel.org>
> >> Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
> >> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>  
> >
> > Have you performance tested this with 1000s of network devices?  
> 
> No. We did not. -ENOTESTCASE

Please try, it isn't that hard..

# time for ((i=0;i<1000;i++)); do ip li add dev dummy$i type dummy; done

real	0m17.002s
user	0m1.064s
sys	0m0.375s

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-19 23:11       ` Stephen Hemminger
@ 2020-05-19 23:42         ` Thomas Gleixner
  2020-05-20  0:06           ` Stephen Hemminger
  2020-05-20  2:57           ` David Miller
  0 siblings, 2 replies; 77+ messages in thread
From: Thomas Gleixner @ 2020-05-19 23:42 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Ahmed S. Darwish, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Paul E. McKenney, Sebastian A. Siewior, Steven Rostedt, LKML,
	David S. Miller, Jakub Kicinski, netdev

Stephen Hemminger <stephen@networkplumber.org> writes:
> On Wed, 20 May 2020 00:23:48 +0200
> Thomas Gleixner <tglx@linutronix.de> wrote:
>> No. We did not. -ENOTESTCASE
>
> Please try, it isn't that hard..
>
> # time for ((i=0;i<1000;i++)); do ip li add dev dummy$i type dummy; done
>
> real	0m17.002s
> user	0m1.064s
> sys	0m0.375s

And that solves the incorrectness of the current code in which way?

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-19 23:42         ` Thomas Gleixner
@ 2020-05-20  0:06           ` Stephen Hemminger
  2020-05-20  1:55             ` Thomas Gleixner
  2020-05-20  2:57           ` David Miller
  1 sibling, 1 reply; 77+ messages in thread
From: Stephen Hemminger @ 2020-05-20  0:06 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Ahmed S. Darwish, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Paul E. McKenney, Sebastian A. Siewior, Steven Rostedt, LKML,
	David S. Miller, Jakub Kicinski, netdev

On Wed, 20 May 2020 01:42:30 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:

> Stephen Hemminger <stephen@networkplumber.org> writes:
> > On Wed, 20 May 2020 00:23:48 +0200
> > Thomas Gleixner <tglx@linutronix.de> wrote:  
> >> No. We did not. -ENOTESTCASE  
> >
> > Please try, it isn't that hard..
> >
> > # time for ((i=0;i<1000;i++)); do ip li add dev dummy$i type dummy; done
> >
> > real	0m17.002s
> > user	0m1.064s
> > sys	0m0.375s  
> 
> And that solves the incorrectness of the current code in which way?

Agree that the current code has evolved over time to a state where it is not
correct in the case of Preempt-RT. The motivation for the changes to seqcount
goes back many years, to when there were ISPs concerned about the scaling of tunnels, vlans, etc.

Is it too much to ask for a simple before/after test of your patch as part
of the submission? You probably measure latency changes to the nanosecond.

The goal is getting it correct without causing user complaints.



^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-20  0:06           ` Stephen Hemminger
@ 2020-05-20  1:55             ` Thomas Gleixner
  0 siblings, 0 replies; 77+ messages in thread
From: Thomas Gleixner @ 2020-05-20  1:55 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Ahmed S. Darwish, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Paul E. McKenney, Sebastian A. Siewior, Steven Rostedt, LKML,
	David S. Miller, Jakub Kicinski, netdev

Stephen Hemminger <stephen@networkplumber.org> writes:
> On Wed, 20 May 2020 01:42:30 +0200
> Thomas Gleixner <tglx@linutronix.de> wrote:
>
>> Stephen Hemminger <stephen@networkplumber.org> writes:
>> > On Wed, 20 May 2020 00:23:48 +0200
>> > Thomas Gleixner <tglx@linutronix.de> wrote:  
>> >> No. We did not. -ENOTESTCASE  
>> >
>> > Please try, it isn't that hard..
>> >
>> > # time for ((i=0;i<1000;i++)); do ip li add dev dummy$i type dummy; done
>> >
>> > real	0m17.002s
>> > user	0m1.064s
>> > sys	0m0.375s  
>> 
>> And that solves the incorrectness of the current code in which way?
>
> Agree that the current code has evolved over time to a state where it is not
> correct in the case of Preempt-RT.

That's not a RT problem as explained in great length in the changelog
and as I pointed out in my previous reply.

 Realtime scheduling classes are available on stock kernels and all
 those attempts to "fix" the livelock problem are ignoring that fact.

Just because you or whoever involved are not using them or do not care
is not making the code more correct.

> The motivation for the changes to seqcount goes back many years, to when
> there were ISPs concerned about the scaling of tunnels, vlans,
> etc.

I completely understand where this comes from, but that is not a
justification for incorrect code at all.

> Is it too much to ask for a simple before/after test of your patch as part
> of the submission? You probably measure latency changes to the
> nanosecond.

It's not too much to ask and I'm happy to provide the numbers.

But before I waste my time and produce them, can you please explain how
any numbers provided are going to change the fact that the code is
incorrect?

  A bug is a bug, no matter what the numbers are.

I don't have an insta reproducer at hand for the problem which made that
code go belly up, but the net result is simply:

      Before:			After:
	real	INFINITE        0mxx.yyys

And the 'Before' comes with the extra benefit of stall warnings (if
enabled in the config).

If you insist I surely can go the extra mile and write up the insta
reproducer and stick it into a bugzilla for you.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-19 21:45 ` [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount Ahmed S. Darwish
  2020-05-19 22:01   ` Stephen Hemminger
@ 2020-05-20  2:01   ` Eric Dumazet
  2020-05-20  6:42     ` Ahmed S. Darwish
  2020-05-20 14:37   ` Dan Carpenter
  2 siblings, 1 reply; 77+ messages in thread
From: Eric Dumazet @ 2020-05-20  2:01 UTC (permalink / raw)
  To: Ahmed S. Darwish, Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, David S. Miller, Jakub Kicinski, netdev



On 5/19/20 2:45 PM, Ahmed S. Darwish wrote:
> Sequence counters write paths are critical sections that must never be
> preempted, and blocking, even for CONFIG_PREEMPTION=n, is not allowed.
> 
> Commit 5dbe7c178d3f ("net: fix kernel deadlock with interface rename and
> netdev name retrieval.") handled a deadlock, observed with
> CONFIG_PREEMPTION=n, where the devnet_rename seqcount read side was
> infinitely spinning: it got scheduled after the seqcount write side
> blocked inside its own critical section.
> 
> To fix that deadlock, among other issues, the commit added a
> cond_resched() inside the read side section. While this will get the
> non-preemptible kernel eventually unstuck, the seqcount reader is fully
> exhausting its slice just spinning -- until TIF_NEED_RESCHED is set.
> 
> The fix is also still broken: if the seqcount reader belongs to a
> real-time scheduling policy, it can spin forever and the kernel will
> livelock.
> 
> Disabling preemption over the seqcount write side critical section will
> not work: inside it are a number of GFP_KERNEL allocations and mutex
> locking through the drivers/base/ :: device_rename() call chain.
> 
> From all the above, replace the seqcount with a rwsem.
> 
> Fixes: 5dbe7c178d3f (net: fix kernel deadlock with interface rename and netdev name retrieval.)
> Fixes: 30e6c9fa93cf (net: devnet_rename_seq should be a seqcount)
> Fixes: c91f6df2db49 (sockopt: Change getsockopt() of SO_BINDTODEVICE to return an interface name)
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  net/core/dev.c | 30 ++++++++++++------------------
>  1 file changed, 12 insertions(+), 18 deletions(-)
>

Seems fine to me, assuming the rwsem prevents starvation of the writer.

(Presumably this could be a per-netdevice rwsem, or per-netns, to provide some isolation.)

An alternative would be to convert ndev->name from a char array to an
RCU-protected pointer, but that looks like quite an invasive change,
certainly not for stable branches.
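
A rough sketch of that direction, with hypothetical field and helper
names (not a tested patch):

	struct net_device {
		/* ... */
		const char __rcu *name_ptr;	/* hypothetical replacement
						 * for name[IFNAMSIZ] */
		/* ... */
	};

	/* Writer side, serialized by RTNL: publish the new name. */
	static int netdev_set_name_rcu(struct net_device *dev,
				       const char *newname)
	{
		const char *new, *old;

		new = kstrdup(newname, GFP_KERNEL);
		if (!new)
			return -ENOMEM;

		old = rcu_dereference_protected(dev->name_ptr,
						lockdep_rtnl_is_held());
		rcu_assign_pointer(dev->name_ptr, new);
		synchronize_rcu();
		kfree(old);
		return 0;
	}

	/* Reader side: lockless, no retry loop, never spins. */
	static void netdev_copy_name_rcu(struct net_device *dev, char *buf)
	{
		rcu_read_lock();
		strscpy(buf, rcu_dereference(dev->name_ptr), IFNAMSIZ);
		rcu_read_unlock();
	}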

Reviewed-by: Eric Dumazet <edumazet@google.com>



^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-19 23:42         ` Thomas Gleixner
  2020-05-20  0:06           ` Stephen Hemminger
@ 2020-05-20  2:57           ` David Miller
  2020-05-20  3:18             ` Eric Dumazet
  2020-05-20 19:37             ` Thomas Gleixner
  1 sibling, 2 replies; 77+ messages in thread
From: David Miller @ 2020-05-20  2:57 UTC (permalink / raw)
  To: tglx
  Cc: stephen, a.darwish, peterz, mingo, will, paulmck, bigeasy,
	rostedt, linux-kernel, kuba, netdev

From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 20 May 2020 01:42:30 +0200

> Stephen Hemminger <stephen@networkplumber.org> writes:
>> On Wed, 20 May 2020 00:23:48 +0200
>> Thomas Gleixner <tglx@linutronix.de> wrote:
>>> No. We did not. -ENOTESTCASE
>>
>> Please try, it isn't that hard..
>>
>> # time for ((i=0;i<1000;i++)); do ip li add dev dummy$i type dummy; done
>>
>> real	0m17.002s
>> user	0m1.064s
>> sys	0m0.375s
> 
> And that solves the incorrectness of the current code in which way?

You mentioned that there wasn't a test case; he gave you one to try.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-20  2:57           ` David Miller
@ 2020-05-20  3:18             ` Eric Dumazet
  2020-05-20  4:36               ` Stephen Hemminger
  2020-05-20 19:37             ` Thomas Gleixner
  1 sibling, 1 reply; 77+ messages in thread
From: Eric Dumazet @ 2020-05-20  3:18 UTC (permalink / raw)
  To: David Miller, tglx
  Cc: stephen, a.darwish, peterz, mingo, will, paulmck, bigeasy,
	rostedt, linux-kernel, kuba, netdev



On 5/19/20 7:57 PM, David Miller wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
> Date: Wed, 20 May 2020 01:42:30 +0200
> 
>> Stephen Hemminger <stephen@networkplumber.org> writes:
>>> On Wed, 20 May 2020 00:23:48 +0200
>>> Thomas Gleixner <tglx@linutronix.de> wrote:
>>>> No. We did not. -ENOTESTCASE
>>>
>>> Please try, it isn't that hard..
>>>
>>> # time for ((i=0;i<1000;i++)); do ip li add dev dummy$i type dummy; done
>>>
>>> real	0m17.002s
>>> user	0m1.064s
>>> sys	0m0.375s
>>
>> And that solves the incorrectness of the current code in which way?
> 
> You mentioned that there wasn't a test case; he gave you one to try.
> 

I do not think this would ever use device rename, nor netdev_get_name().

None of this stuff is fast path really.

# time for ((i=1;i<1000;i++)); do ip li add dev dummy$i type dummy; done

real	0m1.127s
user	0m0.270s
sys	0m1.039s

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-20  3:18             ` Eric Dumazet
@ 2020-05-20  4:36               ` Stephen Hemminger
  0 siblings, 0 replies; 77+ messages in thread
From: Stephen Hemminger @ 2020-05-20  4:36 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: David Miller, tglx, a.darwish, peterz, mingo, will, paulmck,
	bigeasy, rostedt, linux-kernel, kuba, netdev

On Tue, 19 May 2020 20:18:19 -0700
Eric Dumazet <eric.dumazet@gmail.com> wrote:

> On 5/19/20 7:57 PM, David Miller wrote:
> > From: Thomas Gleixner <tglx@linutronix.de>
> > Date: Wed, 20 May 2020 01:42:30 +0200
> >   
> >> Stephen Hemminger <stephen@networkplumber.org> writes:  
> >>> On Wed, 20 May 2020 00:23:48 +0200
> >>> Thomas Gleixner <tglx@linutronix.de> wrote:  
> >>>> No. We did not. -ENOTESTCASE  
> >>>
> >>> Please try, it isn't that hard..
> >>>
> >>> # time for ((i=0;i<1000;i++)); do ip li add dev dummy$i type dummy; done
> >>>
> >>> real	0m17.002s
> >>> user	0m1.064s
> >>> sys	0m0.375s  
> >>
> >> And that solves the incorrectness of the current code in which way?  
> > 
> > You mentioned that there wasn't a test case; he gave you one to try.
> >   
> 
> I do not think this would ever use device rename, nor netdev_get_name().
> 
> None of this stuff is fast path really.
> 
> # time for ((i=1;i<1000;i++)); do ip li add dev dummy$i type dummy; done
> 
> real	0m1.127s
> user	0m0.270s
> sys	0m1.039s

You're right, it is a weak test, and most of the overhead is in the syscall
and all the netlink events that happen.

It does end up looking up the new name, so it would exercise that.
A better test is to use the %d syntax, or to create 1000 dummies and then rename every one.

This is more of a stress test:
# for ((i=0;i<1000;i++)); do echo link add dev dummy%d type dummy; done | time ip -batch -
0.00user 0.29system 0:02.11elapsed 13%CPU (0avgtext+0avgdata 2544maxresident)k
0inputs+0outputs (0major+148minor)pagefaults 0swaps

# for ((i=999;i>=0;i--)); do echo link set dummy$i name dummy$((i+1)); done | time ip -batch -
0.00user 0.26system 0:54.98elapsed 0%CPU (0avgtext+0avgdata 2508maxresident)k
0inputs+0outputs (0major+145minor)pagefaults 0swaps


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-20  2:01   ` Eric Dumazet
@ 2020-05-20  6:42     ` Ahmed S. Darwish
  2020-05-20 12:51       ` Eric Dumazet
  0 siblings, 1 reply; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-20  6:42 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Waiman Long, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, David S. Miller, Jakub Kicinski, netdev

Hello Eric,

On Tue, May 19, 2020 at 07:01:38PM -0700, Eric Dumazet wrote:
>
> On 5/19/20 2:45 PM, Ahmed S. Darwish wrote:
> > Sequence counters write paths are critical sections that must never be
> > preempted, and blocking, even for CONFIG_PREEMPTION=n, is not allowed.
> >
> > Commit 5dbe7c178d3f ("net: fix kernel deadlock with interface rename and
> > netdev name retrieval.") handled a deadlock, observed with
> > CONFIG_PREEMPTION=n, where the devnet_rename seqcount read side was
> > infinitely spinning: it got scheduled after the seqcount write side
> > blocked inside its own critical section.
> >
> > To fix that deadlock, among other issues, the commit added a
> > cond_resched() inside the read side section. While this will get the
> > non-preemptible kernel eventually unstuck, the seqcount reader is fully
> > exhausting its slice just spinning -- until TIF_NEED_RESCHED is set.
> >
> > The fix is also still broken: if the seqcount reader belongs to a
> > real-time scheduling policy, it can spin forever and the kernel will
> > livelock.
> >
> > Disabling preemption over the seqcount write side critical section will
> > not work: inside it are a number of GFP_KERNEL allocations and mutex
> > locking through the drivers/base/ :: device_rename() call chain.
> >
> > From all the above, replace the seqcount with a rwsem.
> >
> > Fixes: 5dbe7c178d3f (net: fix kernel deadlock with interface rename and netdev name retrieval.)
> > Fixes: 30e6c9fa93cf (net: devnet_rename_seq should be a seqcount)
> > Fixes: c91f6df2db49 (sockopt: Change getsockopt() of SO_BINDTODEVICE to return an interface name)
> > Cc: <stable@vger.kernel.org>
> > Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
> > Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > ---
> >  net/core/dev.c | 30 ++++++++++++------------------
> >  1 file changed, 12 insertions(+), 18 deletions(-)
> >
>
> Seems fine to me, assuming the rwsem prevents starvation of the writer.
>

Thanks for the review.

AFAIK, due to 5cfd92e12e13 ("locking/rwsem: Adaptive disabling of reader
optimistic spinning"), using a rwsem shouldn't lead to writer starvation
in the contended case.

--
Ahmed S. Darwish
Linutronix GmbH

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 13/25] dma-buf: Use sequence counter with associated wound/wait mutex
  2020-05-19 21:45 ` [PATCH v1 13/25] dma-buf: Use sequence counter with associated wound/wait mutex Ahmed S. Darwish
@ 2020-05-20 10:48   ` Christian König
  2020-05-21  0:09     ` Ahmed S. Darwish
  0 siblings, 1 reply; 77+ messages in thread
From: Christian König @ 2020-05-20 10:48 UTC (permalink / raw)
  To: Ahmed S. Darwish, Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: David (ChunMing) Zhou, amd-gfx, Paul E. McKenney, David Airlie,
	Sebastian A. Siewior, LKML, Steven Rostedt, Christian König,
	dri-devel, Daniel Vetter, Alex Deucher, Felix Kuehling,
	Thomas Gleixner, Sumit Semwal, linux-media

Am 19.05.20 um 23:45 schrieb Ahmed S. Darwish:
> A sequence counter write side critical section must be protected by some
> form of locking to serialize writers. If the serialization primitive is
> not disabling preemption implicitly, preemption has to be explicitly
> disabled before entering the sequence counter write side critical
> section.
>
> The dma-buf reservation subsystem uses plain sequence counters to manage
> updates to reservations. Writer serialization is accomplished through a
> wound/wait mutex.
>
> Acquiring a wound/wait mutex does not disable preemption, so this needs
> to be done manually before and after the write side critical section.
>
> Use the newly-added seqcount_ww_mutex_t instead:
>
>    - It associates the ww_mutex with the sequence count, which enables
>      lockdep to validate that the write side critical section is properly
>      serialized.
>
>    - It removes the need to explicitly add preempt_disable/enable()
>      around the write side critical section because the write_begin/end()
>      functions for this new data type automatically do this.
>
> If lockdep is disabled this ww_mutex lock association is compiled out
> and has neither storage size nor runtime overhead.

Mhm, is the dma_resv object the only user of this new seqcount_ww_mutex
variant?

If yes, we have been trying to get rid of this sequence counter for quite
some time, so I would rather invest the additional time to finish this.

Regards,
Christian.

>
> Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
> ---
>   drivers/dma-buf/dma-resv.c                       | 8 +-------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 2 --
>   include/linux/dma-resv.h                         | 2 +-
>   3 files changed, 2 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index 590ce7ad60a0..3aba2b2bfc48 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -128,7 +128,7 @@ subsys_initcall(dma_resv_lockdep);
>   void dma_resv_init(struct dma_resv *obj)
>   {
>   	ww_mutex_init(&obj->lock, &reservation_ww_class);
> -	seqcount_init(&obj->seq);
> +	seqcount_ww_mutex_init(&obj->seq, &obj->lock);
>   
>   	RCU_INIT_POINTER(obj->fence, NULL);
>   	RCU_INIT_POINTER(obj->fence_excl, NULL);
> @@ -259,7 +259,6 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
>   	fobj = dma_resv_get_list(obj);
>   	count = fobj->shared_count;
>   
> -	preempt_disable();
>   	write_seqcount_begin(&obj->seq);
>   
>   	for (i = 0; i < count; ++i) {
> @@ -281,7 +280,6 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
>   	smp_store_mb(fobj->shared_count, count);
>   
>   	write_seqcount_end(&obj->seq);
> -	preempt_enable();
>   	dma_fence_put(old);
>   }
>   EXPORT_SYMBOL(dma_resv_add_shared_fence);
> @@ -308,14 +306,12 @@ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
>   	if (fence)
>   		dma_fence_get(fence);
>   
> -	preempt_disable();
>   	write_seqcount_begin(&obj->seq);
>   	/* write_seqcount_begin provides the necessary memory barrier */
>   	RCU_INIT_POINTER(obj->fence_excl, fence);
>   	if (old)
>   		old->shared_count = 0;
>   	write_seqcount_end(&obj->seq);
> -	preempt_enable();
>   
>   	/* inplace update, no shared fences */
>   	while (i--)
> @@ -393,13 +389,11 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
>   	src_list = dma_resv_get_list(dst);
>   	old = dma_resv_get_excl(dst);
>   
> -	preempt_disable();
>   	write_seqcount_begin(&dst->seq);
>   	/* write_seqcount_begin provides the necessary memory barrier */
>   	RCU_INIT_POINTER(dst->fence_excl, new);
>   	RCU_INIT_POINTER(dst->fence, dst_list);
>   	write_seqcount_end(&dst->seq);
> -	preempt_enable();
>   
>   	dma_resv_list_free(src_list);
>   	dma_fence_put(old);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> index 9dff792c9290..87fd32aae8f9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> @@ -258,11 +258,9 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
>   	new->shared_count = k;
>   
>   	/* Install the new fence list, seqcount provides the barriers */
> -	preempt_disable();
>   	write_seqcount_begin(&resv->seq);
>   	RCU_INIT_POINTER(resv->fence, new);
>   	write_seqcount_end(&resv->seq);
> -	preempt_enable();
>   
>   	/* Drop the references to the removed fences or move them to ef_list */
>   	for (i = j, k = 0; i < old->shared_count; ++i) {
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index a6538ae7d93f..d44a77e8a7e3 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -69,7 +69,7 @@ struct dma_resv_list {
>    */
>   struct dma_resv {
>   	struct ww_mutex lock;
> -	seqcount_t seq;
> +	seqcount_ww_mutex_t seq;
>   
>   	struct dma_fence __rcu *fence_excl;
>   	struct dma_resv_list __rcu *fence;


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 02/25] mm/swap: Don't abuse the seqcount latching API
  2020-05-19 21:45 ` [PATCH v1 02/25] mm/swap: Don't abuse the seqcount latching API Ahmed S. Darwish
@ 2020-05-20 12:22   ` Konstantin Khlebnikov
  2020-05-20 13:05     ` Peter Zijlstra
  2020-05-22 14:57   ` Peter Zijlstra
  1 sibling, 1 reply; 77+ messages in thread
From: Konstantin Khlebnikov @ 2020-05-20 12:22 UTC (permalink / raw)
  To: Ahmed S. Darwish, Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Andrew Morton, linux-mm

On 20/05/2020 00.45, Ahmed S. Darwish wrote:
> Commit eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all() calls")
> implemented an optimization mechanism to exit the to-be-started LRU
> drain operation (name it A) if another drain operation *started and
> finished* while (A) was blocked on the LRU draining mutex.
> 
> This was done through a seqcount latch, which is an abuse of its
> semantics:
> 
>    1. Seqcount latching should be used for the purpose of switching
>       between two storage places with sequence protection to allow
>       interruptible, preemptible writer sections. The optimization
>       mechanism has absolutely nothing to do with that.
> 
>    2. The used raw_write_seqcount_latch() has two smp write memory
>       barriers to always insure one consistent storage place out of the
>       two storage places available. This extra smp_wmb() is redundant for
>       the optimization use case.
> 
> Beside the API abuse, the semantics of a latch sequence counter was
> force fitted into the optimization. What was actually meant is to track
> generations of LRU draining operations, where "current lru draining
> generation = x" implies that all generations 0 < n <= x are already
> *scheduled* for draining.
> 
> Remove the conceptually-inappropriate seqcount latch usage and manually
> implement the optimization using a counter and SMP memory barriers.

Well, I thought it fits perfectly =)

Maybe it's worth adding helpers with appropriate semantics?
This is a pretty common pattern.
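
Something like this, as a rough sketch (names invented, untested):

	/* Generation counter for "piggyback on a concurrent flush". */
	struct gen_sync {
		unsigned int gen;
	};

	/* Snapshot the generation *before* taking the serializing lock. */
	static inline unsigned int gen_sync_snapshot(struct gen_sync *s)
	{
		unsigned int snap = READ_ONCE(s->gen);

		smp_rmb();	/* order the load before the lock acquisition */
		return snap;
	}

	/*
	 * With the serializing lock held: true if another generation was
	 * already scheduled while we waited, i.e. our work is already done.
	 */
	static inline bool gen_sync_piggybacked(struct gen_sync *s,
						unsigned int snap)
	{
		return snap != s->gen;
	}

	/* With the lock held: open a new generation. */
	static inline void gen_sync_next(struct gen_sync *s)
	{
		WRITE_ONCE(s->gen, s->gen + 1);
		smp_wmb();	/* order the store before the draining stores */
	}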

> 
> Link: https://lkml.kernel.org/r/CALYGNiPSr-cxV9MX9czaVh6Wz_gzSv3H_8KPvgjBTGbJywUJpA@mail.gmail.com
> Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
> ---
>   mm/swap.c | 57 +++++++++++++++++++++++++++++++++++++++++++++----------
>   1 file changed, 47 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/swap.c b/mm/swap.c
> index bf9a79fed62d..d6910eeed43d 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -713,10 +713,20 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
>    */
>   void lru_add_drain_all(void)
>   {
> -	static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
> -	static DEFINE_MUTEX(lock);
> +	/*
> +	 * lru_drain_gen - Current generation of pages that could be in vectors
> +	 *
> +	 * (A) Definition: lru_drain_gen = x implies that all generations
> +	 *     0 < n <= x are already scheduled for draining.
> +	 *
> +	 * This is an optimization for the highly-contended use case where a
> +	 * user space workload keeps constantly generating a flow of pages
> +	 * for each CPU.
> +	 */
> +	static unsigned int lru_drain_gen;
>   	static struct cpumask has_work;
> -	int cpu, seq;
> +	static DEFINE_MUTEX(lock);
> +	int cpu, this_gen;
>   
>   	/*
>   	 * Make sure nobody triggers this path before mm_percpu_wq is fully
> @@ -725,21 +735,48 @@ void lru_add_drain_all(void)
>   	if (WARN_ON(!mm_percpu_wq))
>   		return;
>   
> -	seq = raw_read_seqcount_latch(&seqcount);
> +	/*
> +	 * (B) Cache the LRU draining generation number
> +	 *
> +	 * smp_rmb() ensures that the counter is loaded before the mutex is
> +	 * taken. It pairs with the smp_wmb() inside the mutex critical section
> +	 * at (D).
> +	 */
> +	this_gen = READ_ONCE(lru_drain_gen);
> +	smp_rmb();
>   
>   	mutex_lock(&lock);
>   
>   	/*
> -	 * Piggyback on drain started and finished while we waited for lock:
> -	 * all pages pended at the time of our enter were drained from vectors.
> +	 * (C) Exit the draining operation if a newer generation, from another
> +	 * lru_add_drain_all(), was already scheduled for draining. Check (A).
>   	 */
> -	if (__read_seqcount_retry(&seqcount, seq))
> +	if (unlikely(this_gen != lru_drain_gen))
>   		goto done;
>   
> -	raw_write_seqcount_latch(&seqcount);
> +	/*
> +	 * (D) Increment generation number
> +	 *
> +	 * Pairs with READ_ONCE() and smp_rmb() at (B), outside of the critical
> +	 * section.
> +	 *
> +	 * This pairing must be done here, before the for_each_online_cpu loop
> +	 * below which drains the page vectors.
> +	 *
> +	 * Let x, y, and z represent some system CPU numbers, where x < y < z.
> > +	 * Assume CPU #z is in the middle of the for_each_online_cpu loop
> +	 * below and has already reached CPU #y's per-cpu data. CPU #x comes
> +	 * along, adds some pages to its per-cpu vectors, then calls
> +	 * lru_add_drain_all().
> +	 *
> +	 * If the paired smp_wmb() below is done at any later step, e.g. after
> +	 * the loop, CPU #x will just exit at (C) and miss flushing out all of
> +	 * its added pages.
> +	 */
> +	WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
> +	smp_wmb();
>   
>   	cpumask_clear(&has_work);
> -
>   	for_each_online_cpu(cpu) {
>   		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
>   
> @@ -766,7 +803,7 @@ void lru_add_drain_all(void)
>   {
>   	lru_add_drain();
>   }
> -#endif
> +#endif /* CONFIG_SMP */
>   
>   /**
>    * release_pages - batched put_page()
> 

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-20  6:42     ` Ahmed S. Darwish
@ 2020-05-20 12:51       ` Eric Dumazet
  2020-06-03 14:33         ` Ahmed S. Darwish
  0 siblings, 1 reply; 77+ messages in thread
From: Eric Dumazet @ 2020-05-20 12:51 UTC (permalink / raw)
  To: Ahmed S. Darwish, Eric Dumazet
  Cc: Waiman Long, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, David S. Miller, Jakub Kicinski, netdev



On 5/19/20 11:42 PM, Ahmed S. Darwish wrote:
> Hello Eric,
> 
> On Tue, May 19, 2020 at 07:01:38PM -0700, Eric Dumazet wrote:
>>
>> On 5/19/20 2:45 PM, Ahmed S. Darwish wrote:
>>> Sequence counters write paths are critical sections that must never be
>>> preempted, and blocking, even for CONFIG_PREEMPTION=n, is not allowed.
>>>
>>> Commit 5dbe7c178d3f ("net: fix kernel deadlock with interface rename and
>>> netdev name retrieval.") handled a deadlock, observed with
>>> CONFIG_PREEMPTION=n, where the devnet_rename seqcount read side was
>>> infinitely spinning: it got scheduled after the seqcount write side
>>> blocked inside its own critical section.
>>>
>>> To fix that deadlock, among other issues, the commit added a
>>> cond_resched() inside the read side section. While this will get the
>>> non-preemptible kernel eventually unstuck, the seqcount reader is fully
>>> exhausting its slice just spinning -- until TIF_NEED_RESCHED is set.
>>>
>>> The fix is also still broken: if the seqcount reader belongs to a
>>> real-time scheduling policy, it can spin forever and the kernel will
>>> livelock.
>>>
>>> Disabling preemption over the seqcount write side critical section will
>>> not work: inside it are a number of GFP_KERNEL allocations and mutex
>>> locking through the drivers/base/ :: device_rename() call chain.
>>>
>>> From all the above, replace the seqcount with a rwsem.
>>>
>>> Fixes: 5dbe7c178d3f (net: fix kernel deadlock with interface rename and netdev name retrieval.)
>>> Fixes: 30e6c9fa93cf (net: devnet_rename_seq should be a seqcount)
>>> Fixes: c91f6df2db49 (sockopt: Change getsockopt() of SO_BINDTODEVICE to return an interface name)
>>> Cc: <stable@vger.kernel.org>
>>> Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
>>> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>>> ---
>>>  net/core/dev.c | 30 ++++++++++++------------------
>>>  1 file changed, 12 insertions(+), 18 deletions(-)
>>>
>>
>> Seems fine to me, assuming the rwsem prevents starvation of the writer.
>>
> 
> Thanks for the review.
> 
> AFAIK, due to 5cfd92e12e13 ("locking/rwsem: Adaptive disabling of reader
> optimistic spinning"), using a rwsem shouldn't lead to writer starvation
> in the contended case.

Hmm, this was in linux-5.3, so very recent stuff.

Has this patch been backported to stable releases?

With all the Fixes: tags you added, stable teams will backport this networking patch to
all stable versions.

Do we have a way to tune a dedicated rwsem to 'give preference to the (unique in this case) writer' over
a myriad of potential readers?

Thanks.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 02/25] mm/swap: Don't abuse the seqcount latching API
  2020-05-20 12:22   ` Konstantin Khlebnikov
@ 2020-05-20 13:05     ` Peter Zijlstra
  0 siblings, 0 replies; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-20 13:05 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: Ahmed S. Darwish, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Sebastian A. Siewior, Steven Rostedt, LKML,
	Andrew Morton, linux-mm

On Wed, May 20, 2020 at 03:22:15PM +0300, Konstantin Khlebnikov wrote:
> On 20/05/2020 00.45, Ahmed S. Darwish wrote:
> > Commit eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all() calls")
> > implemented an optimization mechanism to exit the to-be-started LRU
> > drain operation (name it A) if another drain operation *started and
> > finished* while (A) was blocked on the LRU draining mutex.

That commit is horrible...

> Well, I thought it fits perfectly =)
> 
> Maybe it's worth adding helpers with appropriate semantics?
> This is a pretty common pattern.

Where's more sites?

> > @@ -725,21 +735,48 @@ void lru_add_drain_all(void)
> >   	if (WARN_ON(!mm_percpu_wq))
> >   		return;
> > -	seq = raw_read_seqcount_latch(&seqcount);
> >   	mutex_lock(&lock);
> >   	/*
> > -	 * Piggyback on drain started and finished while we waited for lock:
> > -	 * all pages pended at the time of our enter were drained from vectors.
> >   	 */
> > -	if (__read_seqcount_retry(&seqcount, seq))
> >   		goto done;

Since there is no ordering in raw_read_seqcount_latch(), and
mutex_lock() is an ACQUIRE, there's no guarantee the read actually
happens before the mutex is acquired.
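
I.e., roughly (illustration only, not code from the patch):

	seq = raw_read_seqcount_latch(&seqcount);	/* plain load */
	mutex_lock(&lock);				/* ACQUIRE */

	/*
	 * ACQUIRE only forbids later accesses from moving up before the
	 * lock; it does not stop the earlier plain load from being
	 * performed after the lock is taken, so 'seq' can effectively be
	 * read inside the critical section.
	 */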

> > -	raw_write_seqcount_latch(&seqcount);
> >   	cpumask_clear(&has_work);

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-19 21:45 ` [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount Ahmed S. Darwish
  2020-05-19 22:01   ` Stephen Hemminger
  2020-05-20  2:01   ` Eric Dumazet
@ 2020-05-20 14:37   ` Dan Carpenter
  2020-05-25 16:22     ` Ahmed S. Darwish
  2 siblings, 1 reply; 77+ messages in thread
From: Dan Carpenter @ 2020-05-20 14:37 UTC (permalink / raw)
  To: kbuild, Ahmed S. Darwish, Peter Zijlstra, Ingo Molnar, Will Deacon
  Cc: lkp, kbuild-all, Thomas Gleixner, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Ahmed S. Darwish,
	Jakub Kicinski, netdev


[-- Attachment #1: Type: text/plain, Size: 2944 bytes --]

Hi "Ahmed,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on tip/locking/core]
[also build test WARNING on nf-next/master nf/master tip/timers/core linus/master v5.7-rc6 next-20200519]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Ahmed-S-Darwish/seqlock-Extend-seqcount-API-with-associated-locks/20200520-055145
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 23b5ae2e8e1326c91b5dfdbb6ebcd5a6820074ae
config: x86_64-defconfig (attached as .config)

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>

smatch warnings:
net/core/dev.c:953 netdev_get_name() warn: inconsistent returns 'devnet_rename_sem'.

# https://github.com/0day-ci/linux/commit/2354e271ada778bbb935d7b20113693710905cff
git remote add linux-review https://github.com/0day-ci/linux
git remote update linux-review
git checkout 2354e271ada778bbb935d7b20113693710905cff
vim +/devnet_rename_sem +953 net/core/dev.c

5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  935  int netdev_get_name(struct net *net, char *name, int ifindex)
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  936  {
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  937  	struct net_device *dev;
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  938  
2354e271ada778b Ahmed S. Darwish 2020-05-19  939  	down_read(&devnet_rename_sem);
                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

2354e271ada778b Ahmed S. Darwish 2020-05-19  940  
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  941  	rcu_read_lock();
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  942  	dev = dev_get_by_index_rcu(net, ifindex);
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  943  	if (!dev) {
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  944  		rcu_read_unlock();
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  945  		return -ENODEV;
                                                                ^^^^^^^^^^^^^^
We need to drop the new semaphore on error.

5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  946  	}
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  947  
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  948  	strcpy(name, dev->name);
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  949  	rcu_read_unlock();
2354e271ada778b Ahmed S. Darwish 2020-05-19  950  
2354e271ada778b Ahmed S. Darwish 2020-05-19  951  	up_read(&devnet_rename_sem);
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  952  
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26 @953  	return 0;
5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  954  }

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 29111 bytes --]

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-20  2:57           ` David Miller
  2020-05-20  3:18             ` Eric Dumazet
@ 2020-05-20 19:37             ` Thomas Gleixner
  2020-05-20 21:36               ` Stephen Hemminger
  1 sibling, 1 reply; 77+ messages in thread
From: Thomas Gleixner @ 2020-05-20 19:37 UTC (permalink / raw)
  To: David Miller
  Cc: stephen, a.darwish, peterz, mingo, will, paulmck, bigeasy,
	rostedt, linux-kernel, kuba, netdev

David Miller <davem@davemloft.net> writes:
> From: Thomas Gleixner <tglx@linutronix.de>
> Date: Wed, 20 May 2020 01:42:30 +0200
>>> Please try, it isn't that hard..
>>>
>>> # time for ((i=0;i<1000;i++)); do ip li add dev dummy$i type dummy; done
>>>
>>> real	0m17.002s
>>> user	0m1.064s
>>> sys	0m0.375s
>> 
>> And that solves the incorrectness of the current code in which way?
>
> You mentioned that there wasn't a test case; he gave you one to try.

If it makes you happy to compare incorrect code with correct code, here
you go:

5 runs of 1000 device add, 1000 device rename and 1000 device del

CONFIG_PREEMPT_NONE=y

         Base      rwsem
 add     0:05.01   0:05.28
	 0:05.93   0:06.11
	 0:06.52   0:06.26
	 0:06.06   0:05.74
	 0:05.71   0:06.07

 rename  0:32.57   0:33.04
	 0:32.91   0:32.45
	 0:32.72   0:32.53
	 0:39.65   0:34.18
	 0:34.52   0:32.50

 delete  3:48.65   3:48.91
	 3:49.66   3:49.13
	 3:45.29   3:48.26
	 3:47.56   3:46.60
	 3:50.01   3:48.06

 -------------------------

CONFIG_PREEMPT_VOLUNTARY=y

         Base      rwsem
 add     0:06.80   0:06.42
	 0:04.77   0:05.03
	 0:05.74   0:04.62
	 0:05.87   0:04.34
	 0:04.20   0:04.12

 rename  0:33.33   0:42.02
	 0:42.36   0:32.55
	 0:39.58   0:31.60
	 0:33.69   0:35.08
	 0:34.24   0:33.97

 delete  3:47.82   3:44.00
	 3:47.42   3:51.00
	 3:48.52   3:48.88
	 3:48.50   3:48.09
	 3:50.03   3:46.56

 -------------------------

CONFIG_PREEMPT=y

         Base      rwsem

 add     0:07.89   0:07.72
	 0:07.25   0:06.72
	 0:07.42   0:06.51
	 0:06.92   0:06.38
	 0:06.20   0:06.72

 rename  0:41.77   0:32.39
	 0:44.29   0:33.29
	 0:36.19   0:34.86
	 0:33.19   0:35.06
	 0:37.00   0:34.78

 delete  2:36.96   2:39.97
	 2:37.80   2:42.19
	 2:44.66   2:48.40
	 2:39.75   2:41.02
	 2:40.77   2:38.36

The runtime variation is rather large, and when running the same in a VM
I got completely random numbers for both base and rwsem. The most amazing
was delete, where the time varied from 30s to 6m20s.

Btw, Sebastian noticed that rename spams dmesg:

  netdev_info(dev, "renamed from %s\n", oldname);

which eats about 50% of the Rename run time.

         Base      netdev_info() removed

Rename   0:34.84   0:17.48

That number at least makes tons of sense.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-20 19:37             ` Thomas Gleixner
@ 2020-05-20 21:36               ` Stephen Hemminger
  0 siblings, 0 replies; 77+ messages in thread
From: Stephen Hemminger @ 2020-05-20 21:36 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: David Miller, a.darwish, peterz, mingo, will, paulmck, bigeasy,
	rostedt, linux-kernel, kuba, netdev

On Wed, 20 May 2020 21:37:11 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:

> David Miller <davem@davemloft.net> writes:
> > From: Thomas Gleixner <tglx@linutronix.de>
> > Date: Wed, 20 May 2020 01:42:30 +0200  
> >>> Please try, it isn't that hard..
> >>>
> >>> # time for ((i=0;i<1000;i++)); do ip li add dev dummy$i type dummy; done
> >>>
> >>> real	0m17.002s
> >>> user	0m1.064s
> >>> sys	0m0.375s  
> >> 
> >> And that solves the incorrectness of the current code in which way?  
> >
> > You mentioned that there wasn't a test case; he gave you one to try.
> 
> If it makes you happy to compare incorrect code with correct code, here
> you go:
> 
> 5 runs of 1000 device add, 1000 device rename and 1000 device del
> 
> CONFIG_PREEMPT_NONE=y
> 
>          Base      rwsem
>  add     0:05.01   0:05.28
> 	 0:05.93   0:06.11
> 	 0:06.52   0:06.26
> 	 0:06.06   0:05.74
> 	 0:05.71   0:06.07
> 
>  rename  0:32.57   0:33.04
> 	 0:32.91   0:32.45
> 	 0:32.72   0:32.53
> 	 0:39.65   0:34.18
> 	 0:34.52   0:32.50
> 
>  delete  3:48.65   3:48.91
> 	 3:49.66   3:49.13
> 	 3:45.29   3:48.26
> 	 3:47.56   3:46.60
> 	 3:50.01   3:48.06
> 
>  -------------------------
> 
> CONFIG_PREEMPT_VOLUNTARY=y
> 
>          Base      rwsem
>  add     0:06.80   0:06.42
> 	 0:04.77   0:05.03
> 	 0:05.74   0:04.62
> 	 0:05.87   0:04.34
> 	 0:04.20   0:04.12
> 
>  rename  0:33.33   0:42.02
> 	 0:42.36   0:32.55
> 	 0:39.58   0:31.60
> 	 0:33.69   0:35.08
> 	 0:34.24   0:33.97
> 
>  delete  3:47.82   3:44.00
> 	 3:47.42   3:51.00
> 	 3:48.52   3:48.88
> 	 3:48.50   3:48.09
> 	 3:50.03   3:46.56
> 
>  -------------------------
> 
> CONFIG_PREEMPT=y
> 
>          Base      rwsem
> 
>  add     0:07.89   0:07.72
> 	 0:07.25   0:06.72
> 	 0:07.42   0:06.51
> 	 0:06.92   0:06.38
> 	 0:06.20   0:06.72
> 
>  rename  0:41.77   0:32.39
> 	 0:44.29   0:33.29
> 	 0:36.19   0:34.86
> 	 0:33.19   0:35.06
> 	 0:37.00   0:34.78
> 
>  delete  2:36.96   2:39.97
> 	 2:37.80   2:42.19
> 	 2:44.66   2:48.40
> 	 2:39.75   2:41.02
> 	 2:40.77   2:38.36
> 
> The runtime variation is rather large, and when running the same in a VM
> I got completely random numbers for both base and rwsem. The most amazing
> was delete, where the time varied from 30s to 6m20s.
> 
> Btw, Sebastian noticed that rename spams dmesg:
> 
>   netdev_info(dev, "renamed from %s\n", oldname);
> 
> which eats about 50% of the Rename run time.
> 
>          Base      netdev_info() removed
> 
> Rename   0:34.84   0:17.48
> 
> That number at least makes tons of sense.
> 
> Thanks,
> 
>         tglx

Looks good, thanks for following through.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 13/25] dma-buf: Use sequence counter with associated wound/wait mutex
  2020-05-20 10:48   ` Christian König
@ 2020-05-21  0:09     ` Ahmed S. Darwish
  2020-05-21 13:20       ` Christian König
  0 siblings, 1 reply; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-21  0:09 UTC (permalink / raw)
  To: Christian König
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, David (ChunMing) Zhou,
	amd-gfx, Paul E. McKenney, David Airlie, Sebastian A. Siewior,
	LKML, Steven Rostedt, Christian König, dri-devel,
	Daniel Vetter, Alex Deucher, Felix Kuehling, Thomas Gleixner,
	Sumit Semwal, linux-media

On Wed, May 20, 2020, Christian König wrote:
> Am 19.05.20 um 23:45 schrieb Ahmed S. Darwish:
> > A sequence counter write side critical section must be protected by some
> > form of locking to serialize writers. If the serialization primitive is
> > not disabling preemption implicitly, preemption has to be explicitly
> > disabled before entering the sequence counter write side critical
> > section.
> >
> > The dma-buf reservation subsystem uses plain sequence counters to manage
> > updates to reservations. Writer serialization is accomplished through a
> > wound/wait mutex.
> >
> > Acquiring a wound/wait mutex does not disable preemption, so this needs
> > to be done manually before and after the write side critical section.
> >
> > Use the newly-added seqcount_ww_mutex_t instead:
> >
> >    - It associates the ww_mutex with the sequence count, which enables
> >      lockdep to validate that the write side critical section is properly
> >      serialized.
> >
> >    - It removes the need to explicitly add preempt_disable/enable()
> >      around the write side critical section because the write_begin/end()
> >      functions for this new data type automatically do this.
> >
> > If lockdep is disabled this ww_mutex lock association is compiled out
> > and has neither storage size nor runtime overhead.
>
> Mhm, is the dma_resv object the only user of this new seqcount_ww_mutex
> variant?
>
> If yes, we have been trying to get rid of this sequence counter for quite
> some time, so I would rather invest the additional time to finish this.
>

In this patch series, each extra "seqcount with associated lock" data
type costs us, exactly:

  - 1 typedef definition, seqcount_ww_mutex_t
  - 1 static initializer, SEQCNT_WW_MUTEX_ZERO()
  - 1 runtime initializer, seqcount_ww_mutex_init()

Definitions for the typedef and the 2 initializers above are
template-code one-liners.
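
Roughly, what those one-liners amount to (simplified sketch, not the
exact macros from the series):

	typedef struct {
		seqcount_t	seqcount;
	#ifdef CONFIG_LOCKDEP
		struct ww_mutex	*lock;	/* compiled out without lockdep */
	#endif
	} seqcount_ww_mutex_t;

	#ifdef CONFIG_LOCKDEP
	# define seqcount_ww_mutex_init(s, _lock) do {		\
		seqcount_init(&(s)->seqcount);			\
		(s)->lock = (_lock);				\
	} while (0)
	#else
	# define seqcount_ww_mutex_init(s, _lock)		\
		seqcount_init(&(s)->seqcount)
	#endif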

The logic which automatically disables preemption upon entering a
seqcount_ww_mutex_t write side critical section is also already shared
with seqcount_mutex_t and any future, preemptible, associated lock.

So, yes, dma-resv is the only user of seqcount_ww_mutex.

But even in that case, given the one-liner, template-code nature of the
seqcount_ww_mutex_t logic, it does not make sense to block the dma_resv
and amdgpu changes until, at some point in the future, the sequence
counter is completely removed.

**If and when** the sequence counter gets removed, please just remove
the seqcount_ww_mutex_t data type with it. It will be extremely simple.

> Regards,
> Christian.
>

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 13/25] dma-buf: Use sequence counter with associated wound/wait mutex
  2020-05-21  0:09     ` Ahmed S. Darwish
@ 2020-05-21 13:20       ` Christian König
  0 siblings, 0 replies; 77+ messages in thread
From: Christian König @ 2020-05-21 13:20 UTC (permalink / raw)
  To: Ahmed S. Darwish, Christian König
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, David (ChunMing) Zhou,
	amd-gfx, Paul E. McKenney, David Airlie, Sebastian A. Siewior,
	LKML, Steven Rostedt, dri-devel, Daniel Vetter, Alex Deucher,
	Felix Kuehling, Thomas Gleixner, Sumit Semwal, linux-media

Am 21.05.20 um 02:09 schrieb Ahmed S. Darwish:
> On Wed, May 20, 2020, Christian König wrote:
>> Am 19.05.20 um 23:45 schrieb Ahmed S. Darwish:
>>> A sequence counter write side critical section must be protected by some
>>> form of locking to serialize writers. If the serialization primitive is
>>> not disabling preemption implicitly, preemption has to be explicitly
>>> disabled before entering the sequence counter write side critical
>>> section.
>>>
>>> The dma-buf reservation subsystem uses plain sequence counters to manage
>>> updates to reservations. Writer serialization is accomplished through a
>>> wound/wait mutex.
>>>
>>> Acquiring a wound/wait mutex does not disable preemption, so this needs
>>> to be done manually before and after the write side critical section.
>>>
>>> Use the newly-added seqcount_ww_mutex_t instead:
>>>
>>>     - It associates the ww_mutex with the sequence count, which enables
>>>       lockdep to validate that the write side critical section is properly
>>>       serialized.
>>>
>>>     - It removes the need to explicitly add preempt_disable/enable()
>>>       around the write side critical section because the write_begin/end()
>>>       functions for this new data type automatically do this.
>>>
>>> If lockdep is disabled this ww_mutex lock association is compiled out
>>> and has neither storage size nor runtime overhead.
>> Mhm, is the dma_resv object the only user of this new seqcount_ww_mutex
>> variant?
>>
>> If yes, we have been trying to get rid of this sequence counter for quite
>> some time, so I would rather invest the additional time to finish this.
>>
> In this patch series, each extra "seqcount with associated lock" data
> type costs us, exactly:
>
>    - 1 typedef definition, seqcount_ww_mutex_t
>    - 1 static initializer, SEQCNT_WW_MUTEX_ZERO()
>    - 1 runtime initializer, seqcount_ww_mutex_init()
>
> Definitions for the typedef and the 2 initializers above are
> template-code one-liners.

In this case I'm perfectly fine with this.

>
> The logic which automatically disables preemption upon entering a
> seqcount_ww_mutex_t write side critical section is also already shared
> with seqcount_mutex_t and any future, preemptible, associated lock.
>
> So, yes, dma-resv is the only user of seqcount_ww_mutex.
>
> But even in that case, given the one-liner, template-code nature of the
> seqcount_ww_mutex_t logic, it does not make sense to block the dma_resv
> and amdgpu changes until, at some point in the future, the sequence
> counter is completely removed.
>
> **If and when** the sequence counter gets removed, please just remove
> the seqcount_ww_mutex_t data type with it. It will be extremely simple.

Completely agree, I just wanted to avoid adding a lot of code now
which gets removed again ~3 months from now.

Regards,
Christian.

>
>> Regards,
>> Christian.
>>
> Thanks,
>
> --
> Ahmed S. Darwish
> Linutronix GmbH


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 02/25] mm/swap: Don't abuse the seqcount latching API
  2020-05-19 21:45 ` [PATCH v1 02/25] mm/swap: Don't abuse the seqcount latching API Ahmed S. Darwish
  2020-05-20 12:22   ` Konstantin Khlebnikov
@ 2020-05-22 14:57   ` Peter Zijlstra
  2020-05-22 15:17     ` Sebastian A. Siewior
                       ` (2 more replies)
  1 sibling, 3 replies; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-22 14:57 UTC (permalink / raw)
  To: Ahmed S. Darwish
  Cc: Ingo Molnar, Will Deacon, Thomas Gleixner, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Andrew Morton,
	Konstantin Khlebnikov, linux-mm

On Tue, May 19, 2020 at 11:45:24PM +0200, Ahmed S. Darwish wrote:
> @@ -713,10 +713,20 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
>   */
>  void lru_add_drain_all(void)
>  {

> +	static unsigned int lru_drain_gen;
>  	static struct cpumask has_work;
> +	static DEFINE_MUTEX(lock);
> +	int cpu, this_gen;
>  
>  	/*
>  	 * Make sure nobody triggers this path before mm_percpu_wq is fully
> @@ -725,21 +735,48 @@ void lru_add_drain_all(void)
>  	if (WARN_ON(!mm_percpu_wq))
>  		return;
>  

> +	this_gen = READ_ONCE(lru_drain_gen);
> +	smp_rmb();

	this_gen = smp_load_acquire(&lru_drain_gen);
>  
>  	mutex_lock(&lock);
>  
>  	/*
> +	 * (C) Exit the draining operation if a newer generation, from another
> +	 * lru_add_drain_all(), was already scheduled for draining. Check (A).
>  	 */
> +	if (unlikely(this_gen != lru_drain_gen))
>  		goto done;
>  

> +	WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
> +	smp_wmb();

You can leave this smp_wmb() out and rely on the smp_mb() implied by
queue_work_on()'s test_and_set_bit().
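
For reference, queue_work_on() starts with (roughly):

	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT,
			      work_data_bits(work))) {
		/* actually queue the work */
	}

test_and_set_bit() is a value-returning atomic and therefore implies a
full smp_mb(), which already orders the lru_drain_gen store against the
queueing.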

>  	cpumask_clear(&has_work);
> -
>  	for_each_online_cpu(cpu) {
>  		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
>  

While you're here, do:

	s/cpumask_set_cpu/__&/

> @@ -766,7 +803,7 @@ void lru_add_drain_all(void)
>  {
>  	lru_add_drain();
>  }
> -#endif
> +#endif /* CONFIG_SMP */
>  
>  /**
>   * release_pages - batched put_page()

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 02/25] mm/swap: Don't abuse the seqcount latching API
  2020-05-22 14:57   ` Peter Zijlstra
@ 2020-05-22 15:17     ` Sebastian A. Siewior
  2020-05-22 16:23       ` Peter Zijlstra
  2020-05-25 15:24     ` Ahmed S. Darwish
  2020-05-25 16:10     ` John Ogness
  2 siblings, 1 reply; 77+ messages in thread
From: Sebastian A. Siewior @ 2020-05-22 15:17 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ahmed S. Darwish, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Steven Rostedt, LKML, Andrew Morton,
	Konstantin Khlebnikov, linux-mm

On 2020-05-22 16:57:07 [+0200], Peter Zijlstra wrote:
> > @@ -725,21 +735,48 @@ void lru_add_drain_all(void)
> >  	if (WARN_ON(!mm_percpu_wq))
> >  		return;
> >  
> 
> > +	this_gen = READ_ONCE(lru_drain_gen);
> > +	smp_rmb();
> 
> 	this_gen = smp_load_acquire(&lru_drain_gen);
> >  
> >  	mutex_lock(&lock);
> >  
> >  	/*
> > +	 * (C) Exit the draining operation if a newer generation, from another
> > +	 * lru_add_drain_all(), was already scheduled for draining. Check (A).
> >  	 */
> > +	if (unlikely(this_gen != lru_drain_gen))
> >  		goto done;
> >  
> 
> > +	WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
> > +	smp_wmb();
> 
> You can leave this smp_wmb() out and rely on the smp_mb() implied by
> queue_work_on()'s test_and_set_bit().

This is to avoid smp_store_release()?

Sebastian

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 02/25] mm/swap: Don't abuse the seqcount latching API
  2020-05-22 15:17     ` Sebastian A. Siewior
@ 2020-05-22 16:23       ` Peter Zijlstra
  0 siblings, 0 replies; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-22 16:23 UTC (permalink / raw)
  To: Sebastian A. Siewior
  Cc: Ahmed S. Darwish, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Steven Rostedt, LKML, Andrew Morton,
	Konstantin Khlebnikov, linux-mm

On Fri, May 22, 2020 at 05:17:05PM +0200, Sebastian A. Siewior wrote:
> On 2020-05-22 16:57:07 [+0200], Peter Zijlstra wrote:
> > > @@ -725,21 +735,48 @@ void lru_add_drain_all(void)
> > >  	if (WARN_ON(!mm_percpu_wq))
> > >  		return;
> > >  
> > 
> > > +	this_gen = READ_ONCE(lru_drain_gen);
> > > +	smp_rmb();
> > 
> > 	this_gen = smp_load_acquire(&lru_drain_gen);
> > >  
> > >  	mutex_lock(&lock);
> > >  
> > >  	/*
> > > +	 * (C) Exit the draining operation if a newer generation, from another
> > > +	 * lru_add_drain_all(), was already scheduled for draining. Check (A).
> > >  	 */
> > > +	if (unlikely(this_gen != lru_drain_gen))
> > >  		goto done;
> > >  
> > 
> > > +	WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
> > > +	smp_wmb();
> > 
> > You can leave this smp_wmb() out and rely on the smp_mb() implied by
> > queue_work_on()'s test_and_set_bit().
> 
> This is to avoid smp_store_release()?

store_release would have the barrier on the other end. If you read the
comments (which I so helpfully cut out) you'll see it wants to order
against later stores, not earlier ones.
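
Roughly, the distinction (sketch):

	/*
	 * Release: the implied barrier sits *before* the store and
	 * orders EARLIER accesses against it.
	 */
	smp_store_release(&lru_drain_gen, lru_drain_gen + 1);

	/*
	 * What is wanted here: a barrier *after* the store, ordering it
	 * before the LATER per-CPU queue_work_on() stores.
	 */
	WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
	smp_wmb();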

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 04/25] block: nr_sects_write(): Disable preemption on seqcount write
  2020-05-19 21:45 ` [PATCH v1 04/25] block: nr_sects_write(): Disable preemption on seqcount write Ahmed S. Darwish
@ 2020-05-22 16:39   ` Peter Zijlstra
  2020-05-25  9:56     ` Ahmed S. Darwish
       [not found]   ` <20200522001237.A00E8206BE@mail.kernel.org>
  1 sibling, 1 reply; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-22 16:39 UTC (permalink / raw)
  To: Ahmed S. Darwish
  Cc: Ingo Molnar, Will Deacon, Thomas Gleixner, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Jens Axboe,
	Phillip Susi, Vivek Goyal, linux-block

On Tue, May 19, 2020 at 11:45:26PM +0200, Ahmed S. Darwish wrote:
> For optimized block readers not holding a mutex, the "number of sectors"
> 64-bit value is protected from tearing on 32-bit architectures by a
> sequence counter.
> 
> Disable preemption before entering that sequence counter's write side
> critical section. Otherwise, the read side can preempt the write side
> section and spin for the entire scheduler tick. If the reader belongs to
> a real-time scheduling class, it can spin forever and the kernel will
> livelock.
> 
> Fixes: c83f6bf98dc1 ("block: add partition resize function to blkpg ioctl")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  block/blk.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/block/blk.h b/block/blk.h
> index 0a94ec68af32..151f86932547 100644
> --- a/block/blk.h
> +++ b/block/blk.h
> @@ -470,9 +470,11 @@ static inline sector_t part_nr_sects_read(struct hd_struct *part)
>  static inline void part_nr_sects_write(struct hd_struct *part, sector_t size)
>  {
>  #if BITS_PER_LONG==32 && defined(CONFIG_SMP)
> +	preempt_disable();
>  	write_seqcount_begin(&part->nr_sects_seq);
>  	part->nr_sects = size;
>  	write_seqcount_end(&part->nr_sects_seq);
> +	preempt_enable();
>  #elif BITS_PER_LONG==32 && defined(CONFIG_PREEMPTION)
>  	preempt_disable();
>  	part->nr_sects = size;

This does look like something that include/linux/u64_stats_sync.h could
help with.
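
Roughly, an untested sketch of that approach (assuming a new
nr_sects_sync member in struct hd_struct):

	#include <linux/u64_stats_sync.h>

	static inline void part_nr_sects_write(struct hd_struct *part,
					       sector_t size)
	{
		u64_stats_update_begin(&part->nr_sects_sync);
		part->nr_sects = size;
		u64_stats_update_end(&part->nr_sects_sync);
	}

	static inline sector_t part_nr_sects_read(struct hd_struct *part)
	{
		sector_t size;
		unsigned int seq;

		do {
			seq = u64_stats_fetch_begin(&part->nr_sects_sync);
			size = part->nr_sects;
		} while (u64_stats_fetch_retry(&part->nr_sects_sync, seq));

		return size;
	}

The BITS_PER_LONG games then collapse into the u64_stats helpers, which
compile away on 64-bit.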

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API
  2020-05-19 21:45 ` [PATCH v1 07/25] lockdep: Add preemption disabled assertion API Ahmed S. Darwish
@ 2020-05-22 17:55   ` Peter Zijlstra
  2020-05-23 14:59     ` Sebastian A. Siewior
  0 siblings, 1 reply; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-22 17:55 UTC (permalink / raw)
  To: Ahmed S. Darwish
  Cc: Ingo Molnar, Will Deacon, Thomas Gleixner, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML

On Tue, May 19, 2020 at 11:45:29PM +0200, Ahmed S. Darwish wrote:
> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> index 206774ac6946..54c929ea5b98 100644
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -702,6 +702,14 @@ do {									\
>  			  "Not in hardirq as expected\n");		\
>  	} while (0)
>  
> +/*
> + * Don't define this assertion here to avoid a call-site's header file
> + * dependency on sched.h task_struct current. This is needed by call
> + * sites that are inline defined at header files already included by
> + * sched.h.
> + */
> +void lockdep_assert_preemption_disabled(void);

So how about:

#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_TRACE_IRQFLAGS)
#define lockdep_assert_preemption_disabled() do { \
		WARN_ON(debug_locks && !preempt_count() && \
			current->hardirqs_enabled); \
	} while (0)
#else
#define lockdep_assert_preemption_disabled() do { } while (0)
#endif

That is both more consistent with the things you claim it's modelled
after and also completely avoids that header dependency.
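
A call site would then look like, e.g. (sketch, placement hypothetical):

	static inline void write_seqcount_begin(seqcount_t *s)
	{
		lockdep_assert_preemption_disabled();
		raw_write_seqcount_begin(s);
	}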

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 09/25] Documentation: locking: Describe seqlock design and usage
  2020-05-19 21:45 ` [PATCH v1 09/25] Documentation: locking: Describe seqlock design and usage Ahmed S. Darwish
@ 2020-05-22 18:01   ` Peter Zijlstra
  2020-05-22 22:24     ` Steven Rostedt
  0 siblings, 1 reply; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-22 18:01 UTC (permalink / raw)
  To: Ahmed S. Darwish
  Cc: Ingo Molnar, Will Deacon, Thomas Gleixner, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Jonathan Corbet,
	linux-doc

On Tue, May 19, 2020 at 11:45:31PM +0200, Ahmed S. Darwish wrote:
> diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
> index d35be7709403..2a4af746b1da 100644
> --- a/include/linux/seqlock.h
> +++ b/include/linux/seqlock.h
> @@ -1,36 +1,15 @@
>  /* SPDX-License-Identifier: GPL-2.0 */
>  #ifndef __LINUX_SEQLOCK_H
>  #define __LINUX_SEQLOCK_H
> +
>  /*
> - * Reader/writer consistent mechanism without starving writers. This type of
> - * lock for data where the reader wants a consistent set of information
> - * and is willing to retry if the information changes. There are two types
> - * of readers:
> - * 1. Sequence readers which never block a writer but they may have to retry
> - *    if a writer is in progress by detecting change in sequence number.
> - *    Writers do not wait for a sequence reader.
> - * 2. Locking readers which will wait if a writer or another locking reader
> - *    is in progress. A locking reader in progress will also block a writer
> - *    from going forward. Unlike the regular rwlock, the read lock here is
> - *    exclusive so that only one locking reader can get it.
> + * seqcount_t / seqlock_t - a reader-writer consistency mechanism with
> + * lockless readers (read-only retry loops), and no writer starvation.
>   *
> - * This is not as cache friendly as brlock. Also, this may not work well
> - * for data that contains pointers, because any writer could
> - * invalidate a pointer that a reader was following.
> + * See Documentation/locking/seqlock.rst for full description.

So I really really hate that... I _much_ prefer code comments to crappy
documents.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 10/25] seqlock: Add RST directives to kernel-doc code samples and notes
  2020-05-19 21:45 ` [PATCH v1 10/25] seqlock: Add RST directives to kernel-doc code samples and notes Ahmed S. Darwish
@ 2020-05-22 18:02   ` Peter Zijlstra
  2020-05-22 18:03     ` Peter Zijlstra
  0 siblings, 1 reply; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-22 18:02 UTC (permalink / raw)
  To: Ahmed S. Darwish
  Cc: Ingo Molnar, Will Deacon, Thomas Gleixner, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Jonathan Corbet,
	linux-doc

On Tue, May 19, 2020 at 11:45:32PM +0200, Ahmed S. Darwish wrote:
> Mark all C code samples inside seqlock.h kernel-doc text with the RST
> 'code-block: c' directive. Sphinx won't properly format the example code
> and will produce noisy text indentation warnings otherwise.

I so bloody hate RST.. and now it's infecting perfectly sane comments
and turning them into unreadable junk :-(

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 10/25] seqlock: Add RST directives to kernel-doc code samples and notes
  2020-05-22 18:02   ` Peter Zijlstra
@ 2020-05-22 18:03     ` Peter Zijlstra
  2020-05-22 18:26       ` Thomas Gleixner
  0 siblings, 1 reply; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-22 18:03 UTC (permalink / raw)
  To: Ahmed S. Darwish
  Cc: Ingo Molnar, Will Deacon, Thomas Gleixner, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Jonathan Corbet,
	linux-doc

On Fri, May 22, 2020 at 08:02:54PM +0200, Peter Zijlstra wrote:
> On Tue, May 19, 2020 at 11:45:32PM +0200, Ahmed S. Darwish wrote:
> > Mark all C code samples inside seqlock.h kernel-doc text with the RST
> > 'code-block: c' directive. Sphinx won't properly format the example code
> > and will produce noisy text indentation warnings otherwise.
> 
> I so bloody hate RST.. and now it's infecting perfectly sane comments
> and turning them into unreadable junk :-(

The correct fix is, as always, to remove the kernel-doc marker.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 10/25] seqlock: Add RST directives to kernel-doc code samples and notes
  2020-05-22 18:03     ` Peter Zijlstra
@ 2020-05-22 18:26       ` Thomas Gleixner
  2020-05-22 18:32         ` Peter Zijlstra
  0 siblings, 1 reply; 77+ messages in thread
From: Thomas Gleixner @ 2020-05-22 18:26 UTC (permalink / raw)
  To: Peter Zijlstra, Ahmed S. Darwish
  Cc: Ingo Molnar, Will Deacon, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, Jonathan Corbet, linux-doc

Peter Zijlstra <peterz@infradead.org> writes:
> On Fri, May 22, 2020 at 08:02:54PM +0200, Peter Zijlstra wrote:
>> On Tue, May 19, 2020 at 11:45:32PM +0200, Ahmed S. Darwish wrote:
>> > Mark all C code samples inside seqlock.h kernel-doc text with the RST
>> > 'code-block: c' directive. Sphinx won't properly format the example code
>> > and will produce noisy text indentation warnings otherwise.
>> 
>> I so bloody hate RST.. and now it's infecting perfectly sane comments
>> and turning them into unreadable junk :-(
>
> The correct fix is, as always, to remove the kernel-doc marker.

Get over it already.


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 10/25] seqlock: Add RST directives to kernel-doc code samples and notes
  2020-05-22 18:26       ` Thomas Gleixner
@ 2020-05-22 18:32         ` Peter Zijlstra
  2020-05-25  9:36           ` Ahmed S. Darwish
  0 siblings, 1 reply; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-22 18:32 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Ahmed S. Darwish, Ingo Molnar, Will Deacon, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Jonathan Corbet,
	linux-doc

On Fri, May 22, 2020 at 08:26:44PM +0200, Thomas Gleixner wrote:
> Peter Zijlstra <peterz@infradead.org> writes:
> > On Fri, May 22, 2020 at 08:02:54PM +0200, Peter Zijlstra wrote:
> >> On Tue, May 19, 2020 at 11:45:32PM +0200, Ahmed S. Darwish wrote:
> >> > Mark all C code samples inside seqlock.h kernel-doc text with the RST
> >> > 'code-block: c' directive. Sphinx won't properly format the example code
> >> > and will produce noisy text indentation warnings otherwise.
> >> 
> >> I so bloody hate RST.. and now it's infecting perfectly sane comments
> >> and turning them into unreadable junk :-(
> >
> > The correct fix is, as always, to remove the kernel-doc marker.
> 
> Get over it already.

I will not let sensible code comments deteriorate to the benefit of some
external piece of crap.

As a programmer the primary interface to all this is a text editor, not
a web browser or a pdf file or whatever other bullshit.

If comments are unreadable in your text editor, they're useless.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 09/25] Documentation: locking: Describe seqlock design and usage
  2020-05-22 18:01   ` Peter Zijlstra
@ 2020-05-22 22:24     ` Steven Rostedt
  2020-05-25 10:50       ` Ahmed S. Darwish
  0 siblings, 1 reply; 77+ messages in thread
From: Steven Rostedt @ 2020-05-22 22:24 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ahmed S. Darwish, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Sebastian A. Siewior, LKML, Jonathan Corbet,
	linux-doc

On Fri, 22 May 2020 20:01:45 +0200
Peter Zijlstra <peterz@infradead.org> wrote:

> On Tue, May 19, 2020 at 11:45:31PM +0200, Ahmed S. Darwish wrote:
> > diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
> > index d35be7709403..2a4af746b1da 100644
> > --- a/include/linux/seqlock.h
> > +++ b/include/linux/seqlock.h
> > @@ -1,36 +1,15 @@
> >  /* SPDX-License-Identifier: GPL-2.0 */
> >  #ifndef __LINUX_SEQLOCK_H
> >  #define __LINUX_SEQLOCK_H
> > +
> >  /*
> > - * Reader/writer consistent mechanism without starving writers. This type of
> > - * lock for data where the reader wants a consistent set of information
> > - * and is willing to retry if the information changes. There are two types
> > - * of readers:
> > - * 1. Sequence readers which never block a writer but they may have to retry
> > - *    if a writer is in progress by detecting change in sequence number.
> > - *    Writers do not wait for a sequence reader.
> > - * 2. Locking readers which will wait if a writer or another locking reader
> > - *    is in progress. A locking reader in progress will also block a writer
> > - *    from going forward. Unlike the regular rwlock, the read lock here is
> > - *    exclusive so that only one locking reader can get it.
> > + * seqcount_t / seqlock_t - a reader-writer consistency mechanism with
> > + * lockless readers (read-only retry loops), and no writer starvation.
> >   *
> > - * This is not as cache friendly as brlock. Also, this may not work well
> > - * for data that contains pointers, because any writer could
> > - * invalidate a pointer that a reader was following.
> > + * See Documentation/locking/seqlock.rst for full description.  
> 
> So I really really hate that... I _much_ prefer code comments to crappy
> documents.

Agreed. Comments are much less likely to bitrot than documents. The
farther away the documentation is from the code, the quicker it becomes
stale.

It's fine to add "See Documentation/..." but please don't *ever* remove
comments that's next to the actual code.

-- Steve

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API
  2020-05-22 17:55   ` Peter Zijlstra
@ 2020-05-23 14:59     ` Sebastian A. Siewior
  2020-05-23 22:41       ` Peter Zijlstra
  0 siblings, 1 reply; 77+ messages in thread
From: Sebastian A. Siewior @ 2020-05-23 14:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ahmed S. Darwish, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Steven Rostedt, LKML

On 2020-05-22 19:55:03 [+0200], Peter Zijlstra wrote:
> On Tue, May 19, 2020 at 11:45:29PM +0200, Ahmed S. Darwish wrote:
> > diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> > index 206774ac6946..54c929ea5b98 100644
> > --- a/include/linux/lockdep.h
> > +++ b/include/linux/lockdep.h
> > @@ -702,6 +702,14 @@ do {									\
> >  			  "Not in hardirq as expected\n");		\
> >  	} while (0)
> >  
> > +/*
> > + * Don't define this assertion here to avoid a call-site's header file
> > + * dependency on sched.h task_struct current. This is needed by call
> > + * sites that are inline defined at header files already included by
> > + * sched.h.
> > + */
> > +void lockdep_assert_preemption_disabled(void);
> 
> So how about:
> 
> #if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_TRACE_IRQFLAGS)
> #define lockdep_assert_preemption_disabled() do { \
> 		WARN_ON(debug_locks && !preempt_count() && \
> 			current->hardirqs_enabled); \
> 	} while (0)
> #else
> #define lockdep_assert_preemption_disabled() do { } while (0)
> #endif
> 
> That is both more consistent with the things you claim it's modelled
> after and also completely avoids that header dependency.

So we need additionally: 

- #include <linux/sched.h> in include/linux/flex_proportions.h
  and I think in another file as well.

- write_seqcount_t_begin_nested() as a define

- write_seqcount_t_begin() as a define

Any "static inline" in the header file using
lockdep_assert_preemption_disabled() will tro to complain about missing
current-> define. But yes, it will work otherwise.
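
To illustrate with a made-up header (not from the series):

	/* foo.h -- already pulled in by sched.h */
	static inline void foo_write_begin(seqcount_t *s)
	{
		/* With the macro above, the next line expands to a
		 * current->hardirqs_enabled access, but task_struct
		 * and current are not declared at this point yet. */
		lockdep_assert_preemption_disabled();
		__write_seqcount_t_begin(s);
	}

Turning the write_seqcount_t_begin*() helpers into defines postpones
the expansion of current-> to the call sites, where sched.h is already
in scope.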

Sebastian

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API
  2020-05-23 14:59     ` Sebastian A. Siewior
@ 2020-05-23 22:41       ` Peter Zijlstra
  2020-05-24 10:50         ` Sebastian A. Siewior
  2020-05-25 10:22         ` Peter Zijlstra
  0 siblings, 2 replies; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-23 22:41 UTC (permalink / raw)
  To: Sebastian A. Siewior
  Cc: Ahmed S. Darwish, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Steven Rostedt, LKML

On Sat, May 23, 2020 at 04:59:42PM +0200, Sebastian A. Siewior wrote:
> On 2020-05-22 19:55:03 [+0200], Peter Zijlstra wrote:

> > That is both more consistent with the things you claim it's modelled
> > after and also completely avoids that header dependency.
> 
> So we need additionally: 
> 
> - #include <linux/sched.h> in include/linux/flex_proportions.h
>   and I think in another file as well.
> 
> - write_seqcount_t_begin_nested() as a define
> 
> - write_seqcount_t_begin() as a define
> 
> Any "static inline" in the header file using
> lockdep_assert_preemption_disabled() will tro to complain about missing
> current-> define. But yes, it will work otherwise.

Because...? /me rummages around.. Ah you're proposing sticking this in
seqcount itself and then header hell.

Moo.. ok I'll go have another look on Monday.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API
  2020-05-23 22:41       ` Peter Zijlstra
@ 2020-05-24 10:50         ` Sebastian A. Siewior
  2020-05-25 10:22         ` Peter Zijlstra
  1 sibling, 0 replies; 77+ messages in thread
From: Sebastian A. Siewior @ 2020-05-24 10:50 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ahmed S. Darwish, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Steven Rostedt, LKML

On 2020-05-24 00:41:32 [+0200], Peter Zijlstra wrote:
> Because...? /me rummages around.. Ah you're proposing sticking this in
> seqcount itself and then header hell.
> 
> Moo.. ok I'll go have another look on Monday.

If you have a look on Monday, you might want to start with the patch at
the bottom, applied on top of the series. Both sched.h includes are
needed, and write_seqcount_t_begin_nested() must also not be a static
inline in header files.

diff --git a/include/linux/flex_proportions.h b/include/linux/flex_proportions.h
index c12df59d3f5fc..c0f88f08371d7 100644
--- a/include/linux/flex_proportions.h
+++ b/include/linux/flex_proportions.h
@@ -12,6 +12,7 @@
 #include <linux/spinlock.h>
 #include <linux/seqlock.h>
 #include <linux/gfp.h>
+#include <linux/sched.h>
 
 /*
  * When maximum proportion of some event type is specified, this is the
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 54c929ea5b982..76385e599a9cb 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -702,13 +702,13 @@ do {									\
 			  "Not in hardirq as expected\n");		\
 	} while (0)
 
-/*
- * Don't define this assertion here to avoid a call-site's header file
- * dependency on sched.h task_struct current. This is needed by call
- * sites that are inline defined at header files already included by
- * sched.h.
- */
-void lockdep_assert_preemption_disabled(void);
+#define lockdep_assert_preemption_disabled() do {			\
+	WARN_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)      &&		\
+		  debug_locks                           &&		\
+		  !current->lockdep_recursion           &&		\
+		  (preempt_count() == 0 && current->hardirqs_enabled),	\
+		  "preemption not disabled as expected\n");		\
+       } while (0)
 
 #else
 # define might_lock(lock) do { } while (0)
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index eca464ecf012f..7f1261376110a 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -423,11 +423,11 @@ static inline void __write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
 	seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_);
 }
 
-static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass)
-{
-	lockdep_assert_preemption_disabled();
-	__write_seqcount_t_begin_nested(s, subclass);
-}
+#define write_seqcount_t_begin_nested(__s, __subclass)		\
+	do {							\
+		lockdep_assert_preemption_disabled();		\
+		__write_seqcount_t_begin_nested(__s, __subclass);\
+	} while (0)
 
 /*
  * write_seqcount_t_begin() without lockdep non-preemptibility check.
@@ -450,10 +450,7 @@ static inline void __write_seqcount_t_begin(seqcount_t *s)
  */
 #define write_seqcount_begin(s)		do_write_seqcount_begin(s)
 
-static inline void write_seqcount_t_begin(seqcount_t *s)
-{
-	write_seqcount_t_begin_nested(s, 0);
-}
+#define write_seqcount_t_begin(_s)	write_seqcount_t_begin_nested(_s, 0)
 
 /**
  * write_seqcount_end() - end a seqcount write-side critical section
diff --git a/include/linux/u64_stats_sync.h b/include/linux/u64_stats_sync.h
index 30358ce3d8fe1..d0fd3edcdc50b 100644
--- a/include/linux/u64_stats_sync.h
+++ b/include/linux/u64_stats_sync.h
@@ -62,6 +62,7 @@
  * Example of use in drivers/net/loopback.c, using per_cpu containers,
  * in BH disabled context.
  */
+#include <linux/sched.h>
 #include <linux/seqlock.h>
 
 struct u64_stats_sync {
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 4dae65bc65c24..ac10db66cc63f 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -5857,18 +5857,3 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
 	dump_stack();
 }
 EXPORT_SYMBOL_GPL(lockdep_rcu_suspicious);
-
-#ifdef CONFIG_PROVE_LOCKING
-
-void lockdep_assert_preemption_disabled(void)
-{
-	WARN_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)	&&
-		  debug_locks				&&
-		  !current->lockdep_recursion		&&
-		  (preempt_count() == 0 && current->hardirqs_enabled),
-		  "preemption not disabled as expected\n");
-}
-EXPORT_SYMBOL_GPL(lockdep_assert_preemption_disabled);
-NOKPROBE_SYMBOL(lockdep_assert_preemption_disabled);
-
-#endif

Sebastian

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 10/25] seqlock: Add RST directives to kernel-doc code samples and notes
  2020-05-22 18:32         ` Peter Zijlstra
@ 2020-05-25  9:36           ` Ahmed S. Darwish
  2020-05-25 13:44             ` Peter Zijlstra
  0 siblings, 1 reply; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-25  9:36 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, Ingo Molnar, Will Deacon, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Jonathan Corbet,
	linux-doc

Peter Zijlstra <peterz@infradead.org> wrote:
> On Fri, May 22, 2020 at 08:26:44PM +0200, Thomas Gleixner wrote:
> > Peter Zijlstra <peterz@infradead.org> writes:
> > > On Fri, May 22, 2020 at 08:02:54PM +0200, Peter Zijlstra wrote:
> > >> On Tue, May 19, 2020 at 11:45:32PM +0200, Ahmed S. Darwish wrote:
> > >> > Mark all C code samples inside seqlock.h kernel-doc text with the RST
> > >> > 'code-block: c' directive. Sphinx won't properly format the example code
> > >> > and will produce noisy text indentation warnings otherwise.
> > >>
> > >> I so bloody hate RST.. and now it's infecting perfectly sane comments
> > >> and turning them into unreadable junk :-(
> > >
> > > The correct fix is, as always, to remove the kernel-doc marker.
> >
> > Get over it already.
>
> I will not let sensible code comments deteriorate to the benefit of some
> external piece of crap.
>
> As a programmer the primary interface to all this is a text editor, not
> a web browser or a pdf file or whatever other bullshit.
>
> If comments are unreadable in your text editor, they're useless.

Wait.

Most of the patch in question is just converting the code snippets'
leading whitespace to tabs. For illustration purposes, if we remove
these whitespace hunks from the diff, it becomes:

  --- a/include/linux/seqlock.h
  +++ b/include/linux/seqlock.h
  @@ -232,6 +232,8 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
  + * .. code-block:: c
  ...
  + * .. code-block:: c
  ...
  - * NOTE: The non-requirement for atomic modifications does _NOT_ include
  - *       the publishing of new entries in the case where data is a dynamic
  - *       data structure.
  + * .. attention::
  + *
  + *     The non-requirement for atomic modifications does _NOT_ include
  + *     the publishing of new entries in the case where data is a dynamic
  + *     data structure.
  ...

Are you trying to tell me that, good heavens, these directives are
really hurting your eyes so much?

Putting kernel-doc aside... That huge raw_write_seqcount_latch() comment
is actually *way more readable in any text editor* after applying this
patch. Go figure.

>>> The correct fix is, as always, to remove the kernel-doc marker.

Sorry, that's not the correct fix.

In the following patches, kernel-doc for the entire seqlock.h API is
added. Singling out raw_write_seqcount_latch() doesn't make any sense.

If you look at the top of this patch series, a lot of seqlock.h
seqcount_t call sites were badly broken. The 0day kernel test bot sent
me even more erroneous call sites due to the added lockdep checks. This
is an extra argument for the added documentation: the existing one is
horrible.

So, please, don't claim that the current situation is fine. It is not.

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 04/25] block: nr_sects_write(): Disable preemption on seqcount write
  2020-05-22 16:39   ` Peter Zijlstra
@ 2020-05-25  9:56     ` Ahmed S. Darwish
  0 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-25  9:56 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Will Deacon, Thomas Gleixner, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Jens Axboe,
	Phillip Susi, Vivek Goyal, linux-block

Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, May 19, 2020 at 11:45:26PM +0200, Ahmed S. Darwish wrote:
> > For optimized block readers not holding a mutex, the "number of sectors"
> > 64-bit value is protected from tearing on 32-bit architectures by a
> > sequence counter.
> >
> > Disable preemption before entering that sequence counter's write side
> > critical section. Otherwise, the read side can preempt the write side
> > section and spin for the entire scheduler tick. If the reader belongs to
> > a real-time scheduling class, it can spin forever and the kernel will
> > livelock.
> >
> > Fixes: c83f6bf98dc1 ("block: add partition resize function to blkpg ioctl")
> > Cc: <stable@vger.kernel.org>
> > Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
> > Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> > ---
> >  block/blk.h | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/block/blk.h b/block/blk.h
> > index 0a94ec68af32..151f86932547 100644
> > --- a/block/blk.h
> > +++ b/block/blk.h
> > @@ -470,9 +470,11 @@ static inline sector_t part_nr_sects_read(struct hd_struct *part)
> >  static inline void part_nr_sects_write(struct hd_struct *part, sector_t size)
> >  {
> >  #if BITS_PER_LONG==32 && defined(CONFIG_SMP)
> > +	preempt_disable();
> >  	write_seqcount_begin(&part->nr_sects_seq);
> >  	part->nr_sects = size;
> >  	write_seqcount_end(&part->nr_sects_seq);
> > +	preempt_enable();
> >  #elif BITS_PER_LONG==32 && defined(CONFIG_PREEMPTION)
> >  	preempt_disable();
> >  	part->nr_sects = size;
>
> This does look like something that include/linux/u64_stats_sync.h could
> help with.

Correct.

I just felt though that this would be too much for a 'Cc: stable' patch.

In another (in-progress) seqlock.h patch series, all of the seqcount_t
call sites that protect 64-bit values from tearing on 32-bit kernels
are converted to the u64_stats_sync.h API.

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 04/25] block: nr_sects_write(): Disable preemption on seqcount write
       [not found]   ` <20200522001237.A00E8206BE@mail.kernel.org>
@ 2020-05-25 10:12     ` Ahmed S. Darwish
  0 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-25 10:12 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Peter Zijlstra, Thomas Gleixner, Sebastian A. Siewior, stable,
	Jens Axboe, Christoph Hellwig, linux-block, LKML

Sasha Levin <sashal@kernel.org> wrote:
> Hi
>
> [This is an automated email]
>
> This commit has been processed because it contains a "Fixes:" tag
> fixing commit: c83f6bf98dc1 ("block: add partition resize function to blkpg ioctl").
>
> The bot has tested the following trees: v5.6.13, v5.4.41, v4.19.123, v4.14.180, v4.9.223, v4.4.223.
>
> v5.6.13: Failed to apply! Possible dependencies:
...
> v5.4.41: Failed to apply! Possible dependencies:
...
> v4.19.123: Failed to apply! Possible dependencies:
...
> v4.14.180: Failed to apply! Possible dependencies:
...
> v4.9.223: Failed to apply! Possible dependencies:
...
> v4.4.223: Failed to apply! Possible dependencies:
...
>
> NOTE: The patch will not be queued to stable trees until it is upstream.
>
> How should we proceed with this patch?
>

The v5.7-rc1 commit 581e26004a09 ("block: move block layer internals out
of include/linux/genhd.h") moved the part_nr_sects_write() static inline
function from include/linux/genhd.h to block/blk.h.

After review, I'll send a rebased patch to stable.

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API
  2020-05-23 22:41       ` Peter Zijlstra
  2020-05-24 10:50         ` Sebastian A. Siewior
@ 2020-05-25 10:22         ` Peter Zijlstra
  2020-05-26  0:52           ` Ahmed S. Darwish
  1 sibling, 1 reply; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-25 10:22 UTC (permalink / raw)
  To: Sebastian A. Siewior
  Cc: Ahmed S. Darwish, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Steven Rostedt, LKML

On Sun, May 24, 2020 at 12:41:32AM +0200, Peter Zijlstra wrote:
> On Sat, May 23, 2020 at 04:59:42PM +0200, Sebastian A. Siewior wrote:
> > On 2020-05-22 19:55:03 [+0200], Peter Zijlstra wrote:
> 
> > > That is both more consistent with the things you claim it's modelled
> > > after and also completely avoids that header dependency.
> > 
> > So we need additionally: 
> > 
> > - #include <linux/sched.h> in include/linux/flex_proportions.h
> >   and I think in another file as well.
> > 
> > - write_seqcount_t_begin_nested() as a define
> > 
> > - write_seqcount_t_begin() as a define
> > 
> > Any "static inline" in the header file using
> > lockdep_assert_preemption_disabled() will tro to complain about missing
> > current-> define. But yes, it will work otherwise.
> 
> Because...? /me rummages around.. Ah you're proposing sticking this in
> seqcount itself and then header hell.
> 
> Moo.. ok I'll go have another look on Monday.

How's this?

---

diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h
index d7f7e436c3af..459ae7a6c207 100644
--- a/include/linux/irqflags.h
+++ b/include/linux/irqflags.h
@@ -14,6 +14,7 @@
 
 #include <linux/typecheck.h>
 #include <asm/irqflags.h>
+#include <asm/percpu.h>
 
 /* Currently lockdep_softirqs_on/off is used only by lockdep */
 #ifdef CONFIG_PROVE_LOCKING
@@ -31,18 +32,22 @@
 #endif
 
 #ifdef CONFIG_TRACE_IRQFLAGS
+
+DECLARE_PER_CPU(int, hardirqs_enabled);
+DECLARE_PER_CPU(int, hardirq_context);
+
   extern void trace_hardirqs_on_prepare(void);
   extern void trace_hardirqs_off_prepare(void);
   extern void trace_hardirqs_on(void);
   extern void trace_hardirqs_off(void);
-# define lockdep_hardirq_context(p)	((p)->hardirq_context)
+# define lockdep_hardirq_context(p)	(this_cpu_read(hardirq_context))
 # define lockdep_softirq_context(p)	((p)->softirq_context)
-# define lockdep_hardirqs_enabled(p)	((p)->hardirqs_enabled)
+# define lockdep_hardirqs_enabled(p)	(this_cpu_read(hardirqs_enabled))
 # define lockdep_softirqs_enabled(p)	((p)->softirqs_enabled)
-# define lockdep_hardirq_enter()		\
-do {						\
-	if (!current->hardirq_context++)	\
-		current->hardirq_threaded = 0;	\
+# define lockdep_hardirq_enter()			\
+do {							\
+	if (this_cpu_inc_return(hardirq_context) == 1)	\
+		current->hardirq_threaded = 0;		\
 } while (0)
 # define lockdep_hardirq_threaded()		\
 do {						\
@@ -50,7 +55,7 @@ do {						\
 } while (0)
 # define lockdep_hardirq_exit()			\
 do {						\
-	current->hardirq_context--;		\
+	this_cpu_dec(hardirq_context);		\
 } while (0)
 # define lockdep_softirq_enter()		\
 do {						\
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 8fce5c98a4b0..754c31e30a83 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -20,6 +20,7 @@ extern int lock_stat;
 #define MAX_LOCKDEP_SUBCLASSES		8UL
 
 #include <linux/types.h>
+#include <asm/percpu.h>
 
 enum lockdep_wait_type {
 	LD_WAIT_INV = 0,	/* not checked, catch all */
@@ -703,28 +704,29 @@ do {									\
 	lock_release(&(lock)->dep_map, _THIS_IP_);			\
 } while (0)
 
-#define lockdep_assert_irqs_enabled()	do {				\
-		WARN_ONCE(debug_locks && !current->lockdep_recursion &&	\
-			  !current->hardirqs_enabled,			\
-			  "IRQs not enabled as expected\n");		\
-	} while (0)
+DECLARE_PER_CPU(int, hardirqs_enabled);
+DECLARE_PER_CPU(int, hardirq_context);
 
-#define lockdep_assert_irqs_disabled()	do {				\
-		WARN_ONCE(debug_locks && !current->lockdep_recursion &&	\
-			  current->hardirqs_enabled,			\
-			  "IRQs not disabled as expected\n");		\
-	} while (0)
+#define lockdep_assert_irqs_enabled()					\
+do {									\
+	WARN_ON_ONCE(debug_locks && !this_cpu_read(hardirqs_enabled));	\
+} while (0)
 
-#define lockdep_assert_in_irq() do {					\
-		WARN_ONCE(debug_locks && !current->lockdep_recursion &&	\
-			  !current->hardirq_context,			\
-			  "Not in hardirq as expected\n");		\
-	} while (0)
+#define lockdep_assert_irqs_disabled()					\
+do {									\
+	WARN_ON_ONCE(debug_locks && this_cpu_read(hardirqs_enabled));	\
+} while (0)
+
+#define lockdep_assert_in_irq()						\
+do {									\
+	WARN_ON_ONCE(debug_locks && !this_cpu_read(hardirq_context));	\
+} while (0)
 
 #else
 # define might_lock(lock) do { } while (0)
 # define might_lock_read(lock) do { } while (0)
 # define might_lock_nested(lock, subclass) do { } while (0)
+
 # define lockdep_assert_irqs_enabled() do { } while (0)
 # define lockdep_assert_irqs_disabled() do { } while (0)
 # define lockdep_assert_in_irq() do { } while (0)
@@ -734,7 +736,7 @@ do {									\
 
 # define lockdep_assert_RT_in_threaded_ctx() do {			\
 		WARN_ONCE(debug_locks && !current->lockdep_recursion &&	\
-			  current->hardirq_context &&			\
+			  lockdep_hardirq_context(current) &&		\
 			  !(current->hardirq_threaded || current->irq_config),	\
 			  "Not in threaded context on PREEMPT_RT as expected\n");	\
 } while (0)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 1d68ee36c583..3d48f80848db 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -990,8 +990,6 @@ struct task_struct {
 	unsigned long			hardirq_disable_ip;
 	unsigned int			hardirq_enable_event;
 	unsigned int			hardirq_disable_event;
-	int				hardirqs_enabled;
-	int				hardirq_context;
 	u64				hardirq_chain_key;
 	unsigned long			softirq_disable_ip;
 	unsigned long			softirq_enable_ip;
diff --git a/kernel/fork.c b/kernel/fork.c
index 96eb4b535ced..5e0e4918dc9f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1946,8 +1946,8 @@ static __latent_entropy struct task_struct *copy_process(
 
 	rt_mutex_init_task(p);
 
+	lockdep_assert_irqs_enabled();
 #ifdef CONFIG_PROVE_LOCKING
-	DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled);
 	DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled);
 #endif
 	retval = -EAGAIN;
@@ -2028,7 +2028,6 @@ static __latent_entropy struct task_struct *copy_process(
 #endif
 #ifdef CONFIG_TRACE_IRQFLAGS
 	p->irq_events = 0;
-	p->hardirqs_enabled = 0;
 	p->hardirq_enable_ip = 0;
 	p->hardirq_enable_event = 0;
 	p->hardirq_disable_ip = _THIS_IP_;
@@ -2038,7 +2037,6 @@ static __latent_entropy struct task_struct *copy_process(
 	p->softirq_enable_event = 0;
 	p->softirq_disable_ip = 0;
 	p->softirq_disable_event = 0;
-	p->hardirq_context = 0;
 	p->softirq_context = 0;
 #endif
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index bdea09b365b6..b113941f579e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2062,9 +2062,9 @@ print_bad_irq_dependency(struct task_struct *curr,
 	pr_warn("-----------------------------------------------------\n");
 	pr_warn("%s/%d [HC%u[%lu]:SC%u[%lu]:HE%u:SE%u] is trying to acquire:\n",
 		curr->comm, task_pid_nr(curr),
-		curr->hardirq_context, hardirq_count() >> HARDIRQ_SHIFT,
+		lockdep_hardirq_context(curr), hardirq_count() >> HARDIRQ_SHIFT,
 		curr->softirq_context, softirq_count() >> SOFTIRQ_SHIFT,
-		curr->hardirqs_enabled,
+		lockdep_hardirqs_enabled(curr),
 		curr->softirqs_enabled);
 	print_lock(next);
 
@@ -3649,7 +3649,7 @@ void lockdep_hardirqs_on_prepare(unsigned long ip)
 	if (unlikely(!debug_locks || current->lockdep_recursion))
 		return;
 
-	if (unlikely(current->hardirqs_enabled)) {
+	if (unlikely(lockdep_hardirqs_enabled(current))) {
 		/*
 		 * Neither irq nor preemption are disabled here
 		 * so this is racy by nature but losing one hit
@@ -3677,7 +3677,7 @@ void lockdep_hardirqs_on_prepare(unsigned long ip)
 	 * Can't allow enabling interrupts while in an interrupt handler,
 	 * that's general bad form and such. Recursion, limited stack etc..
 	 */
-	if (DEBUG_LOCKS_WARN_ON(current->hardirq_context))
+	if (DEBUG_LOCKS_WARN_ON(lockdep_hardirq_context(current)))
 		return;
 
 	current->hardirq_chain_key = current->curr_chain_key;
@@ -3695,7 +3695,7 @@ void noinstr lockdep_hardirqs_on(unsigned long ip)
 	if (unlikely(!debug_locks || curr->lockdep_recursion))
 		return;
 
-	if (curr->hardirqs_enabled) {
+	if (lockdep_hardirqs_enabled(curr)) {
 		/*
 		 * Neither irq nor preemption are disabled here
 		 * so this is racy by nature but losing one hit
@@ -3721,7 +3721,7 @@ void noinstr lockdep_hardirqs_on(unsigned long ip)
 			    current->curr_chain_key);
 
 	/* we'll do an OFF -> ON transition: */
-	curr->hardirqs_enabled = 1;
+	this_cpu_write(hardirqs_enabled, 1);
 	curr->hardirq_enable_ip = ip;
 	curr->hardirq_enable_event = ++curr->irq_events;
 	debug_atomic_inc(hardirqs_on_events);
@@ -3745,11 +3745,11 @@ void noinstr lockdep_hardirqs_off(unsigned long ip)
 	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
 		return;
 
-	if (curr->hardirqs_enabled) {
+	if (lockdep_hardirqs_enabled(curr)) {
 		/*
 		 * We have done an ON -> OFF transition:
 		 */
-		curr->hardirqs_enabled = 0;
+		this_cpu_write(hardirqs_enabled, 0);
 		curr->hardirq_disable_ip = ip;
 		curr->hardirq_disable_event = ++curr->irq_events;
 		debug_atomic_inc(hardirqs_off_events);
@@ -3794,7 +3794,7 @@ void lockdep_softirqs_on(unsigned long ip)
 	 * usage bit for all held locks, if hardirqs are
 	 * enabled too:
 	 */
-	if (curr->hardirqs_enabled)
+	if (lockdep_hardirqs_enabled(curr))
 		mark_held_locks(curr, LOCK_ENABLED_SOFTIRQ);
 	lockdep_recursion_finish();
 }
@@ -3843,7 +3843,7 @@ mark_usage(struct task_struct *curr, struct held_lock *hlock, int check)
 	 */
 	if (!hlock->trylock) {
 		if (hlock->read) {
-			if (curr->hardirq_context)
+			if (lockdep_hardirq_context(curr))
 				if (!mark_lock(curr, hlock,
 						LOCK_USED_IN_HARDIRQ_READ))
 					return 0;
@@ -3852,7 +3852,7 @@ mark_usage(struct task_struct *curr, struct held_lock *hlock, int check)
 						LOCK_USED_IN_SOFTIRQ_READ))
 					return 0;
 		} else {
-			if (curr->hardirq_context)
+			if (lockdep_hardirq_context(curr))
 				if (!mark_lock(curr, hlock, LOCK_USED_IN_HARDIRQ))
 					return 0;
 			if (curr->softirq_context)
@@ -3890,7 +3890,7 @@ mark_usage(struct task_struct *curr, struct held_lock *hlock, int check)
 
 static inline unsigned int task_irq_context(struct task_struct *task)
 {
-	return LOCK_CHAIN_HARDIRQ_CONTEXT * !!task->hardirq_context +
+	return LOCK_CHAIN_HARDIRQ_CONTEXT * !!lockdep_hardirq_context(task) +
 	       LOCK_CHAIN_SOFTIRQ_CONTEXT * !!task->softirq_context;
 }
 
@@ -3983,7 +3983,7 @@ static inline short task_wait_context(struct task_struct *curr)
 	 * Set appropriate wait type for the context; for IRQs we have to take
 	 * into account force_irqthread as that is implied by PREEMPT_RT.
 	 */
-	if (curr->hardirq_context) {
+	if (lockdep_hardirq_context(curr)) {
 		/*
 		 * Check if force_irqthreads will run us threaded.
 		 */
@@ -4826,11 +4826,11 @@ static void check_flags(unsigned long flags)
 		return;
 
 	if (irqs_disabled_flags(flags)) {
-		if (DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled)) {
+		if (DEBUG_LOCKS_WARN_ON(lockdep_hardirqs_enabled(current))) {
 			printk("possible reason: unannotated irqs-off.\n");
 		}
 	} else {
-		if (DEBUG_LOCKS_WARN_ON(!current->hardirqs_enabled)) {
+		if (DEBUG_LOCKS_WARN_ON(!lockdep_hardirqs_enabled(current))) {
 			printk("possible reason: unannotated irqs-on.\n");
 		}
 	}
diff --git a/kernel/softirq.c b/kernel/softirq.c
index beb8e3a66c7c..f45ebff906f7 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -107,6 +107,12 @@ static bool ksoftirqd_running(unsigned long pending)
  * where hardirqs are disabled legitimately:
  */
 #ifdef CONFIG_TRACE_IRQFLAGS
+
+DEFINE_PER_CPU(int, hardirqs_enabled);
+DEFINE_PER_CPU(int, hardirq_context);
+EXPORT_PER_CPU_SYMBOL_GPL(hardirqs_enabled);
+EXPORT_PER_CPU_SYMBOL_GPL(hardirq_context);
+
 void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
 {
 	unsigned long flags;

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 09/25] Documentation: locking: Describe seqlock design and usage
  2020-05-22 22:24     ` Steven Rostedt
@ 2020-05-25 10:50       ` Ahmed S. Darwish
  2020-05-25 11:02         ` Ahmed S. Darwish
  0 siblings, 1 reply; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-25 10:50 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Sebastian A. Siewior, LKML, Jonathan Corbet,
	linux-doc

Steven Rostedt <rostedt@goodmis.org> wrote:
> Peter Zijlstra <peterz@infradead.org> wrote:
> > On Tue, May 19, 2020 at 11:45:31PM +0200, Ahmed S. Darwish wrote:
> > > diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
> > > index d35be7709403..2a4af746b1da 100644
> > > --- a/include/linux/seqlock.h
> > > +++ b/include/linux/seqlock.h
> > > @@ -1,36 +1,15 @@
> > >  /* SPDX-License-Identifier: GPL-2.0 */
> > >  #ifndef __LINUX_SEQLOCK_H
> > >  #define __LINUX_SEQLOCK_H
> > > +
> > >  /*
> > > - * Reader/writer consistent mechanism without starving writers. This type of
> > > - * lock for data where the reader wants a consistent set of information
> > > - * and is willing to retry if the information changes. There are two types
> > > - * of readers:
> > > - * 1. Sequence readers which never block a writer but they may have to retry
> > > - *    if a writer is in progress by detecting change in sequence number.
> > > - *    Writers do not wait for a sequence reader.
> > > - * 2. Locking readers which will wait if a writer or another locking reader
> > > - *    is in progress. A locking reader in progress will also block a writer
> > > - *    from going forward. Unlike the regular rwlock, the read lock here is
> > > - *    exclusive so that only one locking reader can get it.
> > > + * seqcount_t / seqlock_t - a reader-writer consistency mechanism with
> > > + * lockless readers (read-only retry loops), and no writer starvation.
> > >   *
> > > - * This is not as cache friendly as brlock. Also, this may not work well
> > > - * for data that contains pointers, because any writer could
> > > - * invalidate a pointer that a reader was following.
> > > + * See Documentation/locking/seqlock.rst for full description.
> >
> > So I really really hate that... I _much_ prefer code comments to crappy
> > documents.
>
> Agreed. Comments are much less likely to bitrot than documents. The
> farther away the documentation is from the code, the quicker it becomes
> stale.
>
> It's fine to add "See Documentation/..." but please don't *ever* remove
> comments that's next to the actual code.
>

This patch was unfairly cut at the hunk above :)

If you follow the rest of it, you see that the documentation has just
moved 3 lines below:

     /*
    - * Version using sequence counter only.
    - * This can be used when code has its own mutex protecting the
    - * updating starting before the write_seqcountbeqin() and ending
    - * after the write_seqcount_end().
    + * Sequence counters (seqcount_t)
    + *
    + * The raw counting mechanism without any writer protection. Write side
    + * critical sections must be serialized and readers on the same CPU
    + * (e.g. through preemption or interrupts) must be excluded.
    + *
    + * If it's desired to automatically handle the sequence counter writer
    + * serialization and non-preemptibility requirements, use a sequential
    + * lock (seqlock_t) instead.
    + *
    + * See Documentation/locking/seqlock.rst
      */
    +
     typedef struct seqcount {

and:

    +/*
    + * Sequential locks (seqlock_t)
    + *
    + * Sequence counters with an embedded spinlock for writer serialization
    + * and non-preemptibility.
    + *
    + * See Documentation/locking/seqlock.rst
    + */
    +
     typedef struct {
     	struct seqcount seqcount;
     	spinlock_t lock;
     } seqlock_t;

This was done because, as said in the commit log, documentation of
seqcount_t and seqlock_t was originally intermingled. This is incorrect
and confusing since the usage constraints for each type are vastly
different.
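
To make those different constraints concrete, the canonical patterns
look roughly like this (a quick sketch, not lifted from the patch):

	/* seqcount_t: writer serialization is the caller's job */
	spin_lock(&my_lock);		/* also disables preemption */
	write_seqcount_begin(&my_seq);
	/* ... update the data ... */
	write_seqcount_end(&my_seq);
	spin_unlock(&my_lock);

	/* seqlock_t: writer serialization is built in */
	write_seqlock(&my_seqlock);
	/* ... update the data ... */
	write_sequnlock(&my_seqlock);

	do {
		seq = read_seqbegin(&my_seqlock);
		/* ... read the data ... */
	} while (read_seqretry(&my_seqlock, seq));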

Then, the brlock comment:

    This is not as cache friendly as brlock. Also, this may not work
    well for data that contains pointers, because any writer could
    invalidate a pointer that a reader was following.

was removed not because it's moved to Documentation/locking/seqlock.rst,
but because it's obsolete: 0f6ed63b1707 ("no need to keep brlock macros
anymore...").

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 09/25] Documentation: locking: Describe seqlock design and usage
  2020-05-25 10:50       ` Ahmed S. Darwish
@ 2020-05-25 11:02         ` Ahmed S. Darwish
  0 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-25 11:02 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Sebastian A. Siewior, LKML, Jonathan Corbet,
	linux-doc

Ahmed S. Darwish <a.darwish@linutronix.de> wrote:
> > Steven Rostedt <rostedt@goodmis.org> wrote:
> > > Peter Zijlstra <peterz@infradead.org> wrote:
...
> > >
> > > So I really really hate that... I _much_ prefer code comments to crappy
> > > documents.
> >
> > Agreed. Comments are much less likely to bitrot than documents. The
> > farther away the documentation is from the code, the quicker it becomes
> > stale.
> >
> > It's fine to add "See Documentation/..." but please don't *ever* remove
> > comments that's next to the actual code.
...
>
> Then, the brlock comment:
>
>     This is not as cache friendly as brlock. Also, this may not work
>     well for data that contains pointers, because any writer could
>     invalidate a pointer that a reader was following.
>
> was removed not because it's moved to Documentation/locking/seqlock.rst,
> but because it's obsolete: 0f6ed63b1707 ("no need to keep brlock macros
> anymore...").
>

Hmm, the part about not including pointers is only mentioned in the RST
file though, and not in seqlock.h.

Anyway, ACK, I'll beef up the comments at seqlock.h and make sure they
are self-contained.

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 10/25] seqlock: Add RST directives to kernel-doc code samples and notes
  2020-05-25  9:36           ` Ahmed S. Darwish
@ 2020-05-25 13:44             ` Peter Zijlstra
  2020-05-25 14:07               ` Peter Zijlstra
  0 siblings, 1 reply; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-25 13:44 UTC (permalink / raw)
  To: Ahmed S. Darwish
  Cc: Thomas Gleixner, Ingo Molnar, Will Deacon, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Jonathan Corbet,
	linux-doc

On Mon, May 25, 2020 at 11:36:49AM +0200, Ahmed S. Darwish wrote:
> Peter Zijlstra <peterz@infradead.org> wrote:

> > I will not let sensible code comments deteriorate to the benefit of some
> > external piece of crap.
> >
> > As a programmer the primary interface to all this is a text editor, not
> > a web broswer or a pdf file or whatever other bullshit.
> >
> > If comments are unreadable in your text editor, they're useless.
> 
> Wait.
> 
> Most of the patch in question is just converting the code snippets'
> leading whitespace to tabs. For illustration purposes, if we remove
> these whitespace hunks from the diff, it becomes:
> 
>   --- a/include/linux/seqlock.h
>   +++ b/include/linux/seqlock.h
>   @@ -232,6 +232,8 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
>   + * .. code-block:: c
>   ...
>   + * .. code-block:: c
>   ...
>   - * NOTE: The non-requirement for atomic modifications does _NOT_ include
>   - *       the publishing of new entries in the case where data is a dynamic
>   - *       data structure.
>   + * .. attention::
>   + *
>   + *     The non-requirement for atomic modifications does _NOT_ include
>   + *     the publishing of new entries in the case where data is a dynamic
>   + *     data structure.
>   ...
> 
> Are you trying to tell me that, good heavens, these directives are
> really hurting your eyes so much?

Yep, they're a distraction and serve absolutely no purpose. They're also
utterly moronic: of course it's code and of course it's bloody well C.

> Putting kernel-doc aside... That huge raw_write_seqcount_latch() comment
> is actually *way more readable in any text editor* after applying this
> patch. Go figure.

I don't mind the re-indent.

> >>> The correct fix is, as always, to remove the kernel-doc marker.
> 
> Sorry, that's not the correct fix.

Of course it is: if kerneldoc complains that a perfectly good comment
is no good, then the fault lies with kerneldoc.

It's like checkpatch; assume it is wrong :-)

> In the following patches, kernel-doc for the entire seqlock.h API is
> added. Singling out raw_write_seqcount_latch() doesn't make any sense.

% s/\/\*\*/\/\*/g -- tada!!

> If you look at the top of this patch series, a lot of seqlock.h
> seqcount_t call sites were badly broken. The 0day kernel test bot sent
> me even more erroneous call sites due to the added lockdep checks. This
> is an extra argument for the added documentation: the existing one is
> horrible.

I've nothing against improving comments; I'm just saying that RST is
absolutely atrocious shite and has nothing to do with good comments.

If sphinx doesn't like "NOTE:", then go teach it.

> So, please, don't claim that the current situation is fine. It is not.

I've never claimed that. My claim is that RST is shite and has no added
value.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 10/25] seqlock: Add RST directives to kernel-doc code samples and notes
  2020-05-25 13:44             ` Peter Zijlstra
@ 2020-05-25 14:07               ` Peter Zijlstra
  0 siblings, 0 replies; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-25 14:07 UTC (permalink / raw)
  To: Ahmed S. Darwish
  Cc: Thomas Gleixner, Ingo Molnar, Will Deacon, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Jonathan Corbet,
	linux-doc

On Mon, May 25, 2020 at 03:44:29PM +0200, Peter Zijlstra wrote:

> I've never claimed that. My claim is that RST is shite and has no added
> value.

Or rather, it has negative value, for it makes comments less readable.

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 02/25] mm/swap: Don't abuse the seqcount latching API
  2020-05-22 14:57   ` Peter Zijlstra
  2020-05-22 15:17     ` Sebastian A. Siewior
@ 2020-05-25 15:24     ` Ahmed S. Darwish
  2020-05-25 15:45       ` Peter Zijlstra
  2020-05-25 16:10     ` John Ogness
  2 siblings, 1 reply; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-25 15:24 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Will Deacon, Thomas Gleixner, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Andrew Morton,
	Konstantin Khlebnikov, linux-mm

Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, May 19, 2020 at 11:45:24PM +0200, Ahmed S. Darwish wrote:
> > @@ -713,10 +713,20 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
> >   */
> >  void lru_add_drain_all(void)
> >  {
>

Re-adding cut-out comment for context:

	/*
	 * lru_drain_gen - Current generation of pages that could be in vectors
	 *
	 * (A) Definition: lru_drain_gen = x implies that all generations
	 *     0 < n <= x are already scheduled for draining.
	 *
	 * This is an optimization for the highly-contended use case where a
	 * user space workload keeps constantly generating a flow of pages
	 * for each CPU.
	 */
> > +	static unsigned int lru_drain_gen;
> >  	static struct cpumask has_work;
> > +	static DEFINE_MUTEX(lock);
> > +	int cpu, this_gen;
> >
> >  	/*
> >  	 * Make sure nobody triggers this path before mm_percpu_wq is fully
> > @@ -725,21 +735,48 @@ void lru_add_drain_all(void)
> >  	if (WARN_ON(!mm_percpu_wq))
> >  		return;
> >
>

Re-adding cut-out comment for context:

	/*
	 * (B) Cache the LRU draining generation number
	 *
	 * smp_rmb() ensures that the counter is loaded before the mutex is
	 * taken. It pairs with the smp_wmb() inside the mutex critical section
	 * at (D).
	 */
> > +	this_gen = READ_ONCE(lru_drain_gen);
> > +	smp_rmb();
>
> 	this_gen = smp_load_acquire(&lru_drain_gen);

ACK. will do.

> >
> >  	mutex_lock(&lock);
> >
> >  	/*
> > +	 * (C) Exit the draining operation if a newer generation, from another
> > +	 * lru_add_drain_all(), was already scheduled for draining. Check (A).
> >  	 */
> > +	if (unlikely(this_gen != lru_drain_gen))
> >  		goto done;
> >
>

Re-adding cut-out comment for context:

	/*
	 * (D) Increment generation number
	 *
	 * Pairs with READ_ONCE() and smp_rmb() at (B), outside of the critical
	 * section.
	 *
	 * This pairing must be done here, before the for_each_online_cpu loop
	 * below which drains the page vectors.
	 *
	 * Let x, y, and z represent some system CPU numbers, where x < y < z.
	 * Assume CPU #z is in the middle of the for_each_online_cpu loop
	 * below and has already reached CPU #y's per-cpu data. CPU #x comes
	 * along, adds some pages to its per-cpu vectors, then calls
	 * lru_add_drain_all().
	 *
	 * If the paired smp_wmb() below is done at any later step, e.g. after
	 * the loop, CPU #x will just exit at (C) and miss flushing out all of
	 * its added pages.
	 */
> > +	WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
> > +	smp_wmb();
>
> You can leave this smp_wmb() out and rely on the smp_mb() implied by
> queue_work_on()'s test_and_set_bit().
>

Won't this be too implicit?

Isn't it possible that, over the years, the queue_work_on() implementation
changes and the test_and_set_bit()/smp_mb() gets removed?

If that happens, this commit will get *silently* broken and the local
CPU pages won't be drained.

> >  	cpumask_clear(&has_work);
> > -
> >  	for_each_online_cpu(cpu) {
> >  		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
> >
>
> While you're here, do:
>
> 	s/cpumask_set_cpu/__&/
>

ACK.

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 02/25] mm/swap: Don't abuse the seqcount latching API
  2020-05-25 15:24     ` Ahmed S. Darwish
@ 2020-05-25 15:45       ` Peter Zijlstra
  0 siblings, 0 replies; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-25 15:45 UTC (permalink / raw)
  To: Ahmed S. Darwish
  Cc: Ingo Molnar, Will Deacon, Thomas Gleixner, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Andrew Morton,
	Konstantin Khlebnikov, linux-mm

On Mon, May 25, 2020 at 05:24:01PM +0200, Ahmed S. Darwish wrote:
> Peter Zijlstra <peterz@infradead.org> wrote:
> > On Tue, May 19, 2020 at 11:45:24PM +0200, Ahmed S. Darwish wrote:

> > > +	WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
> > > +	smp_wmb();
> >
> > You can leave this smp_wmb() out and rely on the smp_mb() implied by
> > queue_work_on()'s test_and_set_bit().
> >
> 
> Won't this be too implicit?
> 
> Isn't it possible that, over the years, the queue_work_on() implementation
> changes and the test_and_set_bit()/smp_mb() gets removed?
> 
> If that happens, this commit will get *silently* broken and the local
> CPU pages won't be drained.

Add a comment to queue_work_on() that points here? That way people are
aware.
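
Something along these lines, next to the test_and_set_bit() in
queue_work_on(), perhaps (wording invented here):

	/*
	 * Callers such as lru_add_drain_all() rely on the full barrier
	 * implied by this test_and_set_bit() to order their prior stores
	 * (e.g. a draining-generation counter) against the loads done by
	 * the queued work. Don't weaken it without auditing them.
	 */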


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 02/25] mm/swap: Don't abuse the seqcount latching API
  2020-05-22 14:57   ` Peter Zijlstra
  2020-05-22 15:17     ` Sebastian A. Siewior
  2020-05-25 15:24     ` Ahmed S. Darwish
@ 2020-05-25 16:10     ` John Ogness
  2 siblings, 0 replies; 77+ messages in thread
From: John Ogness @ 2020-05-25 16:10 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ahmed S. Darwish, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Sebastian A. Siewior, Steven Rostedt, LKML,
	Andrew Morton, Konstantin Khlebnikov, linux-mm


[-- Attachment #1: Type: text/plain, Size: 2799 bytes --]

Hi,

This optimization is broken. The main concern here: Is it possible that
lru_add_drain_all() _would_ have drained pagevec X, but then aborted
because another lru_add_drain_all() is underway and that other task will
_not_ drain pagevec X? I claim the answer is yes!

My suggested changes are inline below.

I attached a litmus test to verify it.

On 2020-05-22, Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, May 19, 2020 at 11:45:24PM +0200, Ahmed S. Darwish wrote:
>> @@ -713,10 +713,20 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
>>   */
>>  void lru_add_drain_all(void)
>>  {
>
>> +	static unsigned int lru_drain_gen;
>>  	static struct cpumask has_work;
>> +	static DEFINE_MUTEX(lock);
>> +	int cpu, this_gen;
>>  
>>  	/*
>>  	 * Make sure nobody triggers this path before mm_percpu_wq is fully
>> @@ -725,21 +735,48 @@ void lru_add_drain_all(void)
>>  	if (WARN_ON(!mm_percpu_wq))
>>  		return;
>>  

An smp_mb() is needed here.

	/*
	 * Guarantee the pagevec counter stores visible by
	 * this CPU are visible to other CPUs before loading
	 * the current drain generation.
	 */
	smp_mb();

>> +	this_gen = READ_ONCE(lru_drain_gen);
>> +	smp_rmb();
>
> 	this_gen = smp_load_acquire(&lru_drain_gen);
>>  
>>  	mutex_lock(&lock);
>>  
>>  	/*
>> +	 * (C) Exit the draining operation if a newer generation, from another
>> +	 * lru_add_drain_all(), was already scheduled for draining. Check (A).
>>  	 */
>> +	if (unlikely(this_gen != lru_drain_gen))
>>  		goto done;
>>  
>
>> +	WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
>> +	smp_wmb();

Instead of smp_wmb(), this needs to be a full memory barrier.

	/*
	 * Guarantee the new drain generation is stored before
	 * loading the pagevec counters.
	 */
	smp_mb();

> You can leave this smp_wmb() out and rely on the smp_mb() implied by
> queue_work_on()'s test_and_set_bit().
>
>>  	cpumask_clear(&has_work);
>> -
>>  	for_each_online_cpu(cpu) {
>>  		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
>>  
>
> While you're here, do:
>
> 	s/cpumask_set_cpu/__&/
>
>> @@ -766,7 +803,7 @@ void lru_add_drain_all(void)
>>  {
>>  	lru_add_drain();
>>  }
>> -#endif
>> +#endif /* CONFIG_SMP */
>>  
>>  /**
>>   * release_pages - batched put_page()

For the litmus test:

1:rx=0             (P1 did not see the pagevec counter)
2:rx=1             (P2 _would_ have seen the pagevec counter)
2:ry1=0 /\ 2:ry2=1 (P2 aborted due to optimization)

Changing the smp_mb() back to smp_wmb() in P1 and removing the smp_mb()
in P2 represents this patch. And it shows that sometimes P2 will abort
even though it would have drained the pagevec and P1 did not drain the
pagevec.

This is ugly as hell. And there may be other memory barrier types to make
it pretty. But as is, memory barriers are missing.

John Ogness


[-- Attachment #2: lru_add_drain_all.litmus --]
[-- Type: text/plain, Size: 1069 bytes --]

C lru_add_drain_all

(*
 * x is a pagevec counter
 * y is @lru_drain_gen
 * z is @lock
 *)

{ }

P0(int *x)
{
	// mark pagevec for draining
	WRITE_ONCE(*x, 1);
}

P1(int *x, int *y, int *z)
{
	int rx;
	int rz;

	// mutex_lock(&lock);
	rz = cmpxchg_acquire(z, 0, 1);
	if (rz == 0) {

		// WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
		WRITE_ONCE(*y, 1);

		// guarantee lru_drain_gen store before loading pagevec
		smp_mb();

		// if (pagevec_count(...))
		rx = READ_ONCE(*x);

		// mutex_unlock(&lock);
		rz = cmpxchg_release(z, 1, 2);
	}
}

P2(int *x, int *y, int *z)
{
	int rx;
	int ry1;
	int ry2;
	int rz;

	// the pagevec counter as visible now to this CPU
	rx = READ_ONCE(*x);

	// guarantee pagevec store before loading lru_drain_gen
	smp_mb();

	// this_gen = READ_ONCE(lru_drain_gen); smp_rmb();
	ry1 = smp_load_acquire(y);

	// mutex_lock(&lock) - acquired after P1
	rz = cmpxchg_acquire(z, 2, 3);
	if (rz == 2) {

		// if (unlikely(this_gen != lru_drain_gen))
		ry2 = READ_ONCE(*y);
	}
}

locations [x; y; z]
exists (1:rx=0 /\ 2:rx=1 /\ 2:ry1=0 /\ 2:ry2=1)

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-20 14:37   ` Dan Carpenter
@ 2020-05-25 16:22     ` Ahmed S. Darwish
  0 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-25 16:22 UTC (permalink / raw)
  To: Dan Carpenter
  Cc: kbuild, Peter Zijlstra, Ingo Molnar, Will Deacon, lkp,
	kbuild-all, Thomas Gleixner, Paul E. McKenney,
	Sebastian A. Siewior, Steven Rostedt, LKML, Jakub Kicinski,
	netdev

On Wed, May 20, 2020 at 05:37:07PM +0300, Dan Carpenter wrote:
...
>
> smatch warnings:
> net/core/dev.c:953 netdev_get_name() warn: inconsistent returns 'devnet_rename_sem'.
>
...
>
> 5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  935  int netdev_get_name(struct net *net, char *name, int ifindex)
> 5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  936  {
> 5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  937  	struct net_device *dev;
> 5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  938
> 2354e271ada778b Ahmed S. Darwish 2020-05-19  939  	down_read(&devnet_rename_sem);
>                                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> 2354e271ada778b Ahmed S. Darwish 2020-05-19  940
> 5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  941  	rcu_read_lock();
> 5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  942  	dev = dev_get_by_index_rcu(net, ifindex);
> 5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  943  	if (!dev) {
> 5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  944  		rcu_read_unlock();
> 5dbe7c178d3f0a4 Nicolas Schichan 2013-06-26  945  		return -ENODEV;
>                                                               ^^^^^^^^^^^^^^

Oh, shouldn't have missed that. Will fix in v2.
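
For the record, a sketch of the intended fix, releasing the rwsem on the
error path as well (the actual v2 may structure this differently):

	int netdev_get_name(struct net *net, char *name, int ifindex)
	{
		struct net_device *dev;
		int ret;

		down_read(&devnet_rename_sem);
		rcu_read_lock();

		dev = dev_get_by_index_rcu(net, ifindex);
		if (!dev) {
			ret = -ENODEV;
			goto out;
		}

		strcpy(name, dev->name);
		ret = 0;
	out:
		rcu_read_unlock();
		up_read(&devnet_rename_sem);
		return ret;
	}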

Thanks,


* Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API
  2020-05-25 10:22         ` Peter Zijlstra
@ 2020-05-26  0:52           ` Ahmed S. Darwish
  2020-05-26  8:13             ` Peter Zijlstra
  0 siblings, 1 reply; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-26  0:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Sebastian A. Siewior, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Steven Rostedt, LKML

Peter Zijlstra <peterz@infradead.org> wrote:
> On Sun, May 24, 2020 at 12:41:32AM +0200, Peter Zijlstra wrote:
> > On Sat, May 23, 2020 at 04:59:42PM +0200, Sebastian A. Siewior wrote:
> > >
> > > Any "static inline" in the header file using
> > > lockdep_assert_preemption_disabled() will try to complain about the
> > > missing current-> definition. But yes, it will work otherwise.
> >
> > Because...? /me rummages around.. Ah you're proposing sticking this in
> > seqcount itself and then header hell.
> >
> > Moo.. ok I'll go have another look on Monday.
>
> How's this?
>

This will work for my case as current-> is no longer referenced by the
lockdep macros. Please continue below though.

...

> -#define lockdep_assert_irqs_enabled()	do {				\
> -		WARN_ONCE(debug_locks && !current->lockdep_recursion &&	\
> -			  !current->hardirqs_enabled,			\
> -			  "IRQs not enabled as expected\n");		\
> -	} while (0)
> +DECLARE_PER_CPU(int, hardirqs_enabled);
> +DECLARE_PER_CPU(int, hardirq_context);
>
> -#define lockdep_assert_irqs_disabled()	do {				\
> -		WARN_ONCE(debug_locks && !current->lockdep_recursion &&	\
> -			  current->hardirqs_enabled,			\
> -			  "IRQs not disabled as expected\n");		\
> -	} while (0)
> +#define lockdep_assert_irqs_enabled()					\
> +do {									\
> +	WARN_ON_ONCE(debug_locks && !this_cpu_read(hardirqs_enabled));	\
> +} while (0)
>

Given that lockdep_off() is defined in lockdep.c as:

  void lockdep_off(void)
  {
        current->lockdep_recursion += LOCKDEP_OFF;
  }

This would imply that all of the macros:

  - lockdep_assert_irqs_enabled()
  - lockdep_assert_irqs_disabled()
  - lockdep_assert_in_irq()
  - lockdep_assert_preemption_disabled()
  - lockdep_assert_preemption_enabled()

will do the lockdep checks *even if* lockdep_off() was called.

This doesn't sound right. Even if all of the above macros' call sites
didn't care about lockdep_off()/on(), it is semantically incoherent.
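
To illustrate, keeping the assertions coherent with lockdep_off() would
mean re-adding the recursion check that the old task-based macros had.
An illustrative sketch only -- note it would also reintroduce the
current-> reference that caused the header problems discussed earlier:

	#define lockdep_assert_irqs_enabled()				\
	do {								\
		WARN_ON_ONCE(debug_locks &&				\
			     !current->lockdep_recursion &&		\
			     !this_cpu_read(hardirqs_enabled));		\
	} while (0)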

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH


* Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API
  2020-05-26  0:52           ` Ahmed S. Darwish
@ 2020-05-26  8:13             ` Peter Zijlstra
  2020-05-26  9:45               ` Ahmed S. Darwish
  2020-06-03 15:30               ` Ahmed S. Darwish
  0 siblings, 2 replies; 77+ messages in thread
From: Peter Zijlstra @ 2020-05-26  8:13 UTC (permalink / raw)
  To: Ahmed S. Darwish
  Cc: Sebastian A. Siewior, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Steven Rostedt, LKML

On Tue, May 26, 2020 at 02:52:31AM +0200, Ahmed S. Darwish wrote:
> Peter Zijlstra <peterz@infradead.org> wrote:

> > +#define lockdep_assert_irqs_enabled()					\
> > +do {									\
> > +	WARN_ON_ONCE(debug_locks && !this_cpu_read(hardirqs_enabled));	\
> > +} while (0)
> >
> 
> Given that lockdep_off() is defined in lockdep.c as:
> 
>   void lockdep_off(void)
>   {
>         current->lockdep_recursion += LOCKDEP_OFF;
>   }
> 
> This would imply that all of the macros:
> 
>   - lockdep_assert_irqs_enabled()
>   - lockdep_assert_irqs_disabled()
>   - lockdep_assert_in_irq()
>   - lockdep_assert_preemption_disabled()
>   - lockdep_assert_preemption_enabled()
> 
> will do the lockdep checks *even if* lockdep_off() was called.
> 
> This doesn't sound right. Even if all of the above macros' call sites
> didn't care about lockdep_off()/on(), it is semantically incoherent.

lockdep_off() is an abomination and really should not exist.

That dm-cache-target.c thing, for example, is atrocious shite that will
explode on -rt. Whoever wrote that needs a 'medal'.

People using it deserve all the pain they get.

Also; IRQ state _should_ be tracked irrespective of tracking lock
dependencies -- I see that that currently isn't entirely the case, lemme
go fix that.



* Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API
  2020-05-26  8:13             ` Peter Zijlstra
@ 2020-05-26  9:45               ` Ahmed S. Darwish
  2020-06-03 15:30               ` Ahmed S. Darwish
  1 sibling, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-05-26  9:45 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Sebastian A. Siewior, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Steven Rostedt, LKML

On Tue, May 26, 2020 at 10:13:50AM +0200, Peter Zijlstra wrote:
> On Tue, May 26, 2020 at 02:52:31AM +0200, Ahmed S. Darwish wrote:
> > Peter Zijlstra <peterz@infradead.org> wrote:
>
> > > +#define lockdep_assert_irqs_enabled()					\
> > > +do {									\
> > > +	WARN_ON_ONCE(debug_locks && !this_cpu_read(hardirqs_enabled));	\
> > > +} while (0)
> > >
> >
> > Given that lockdep_off() is defined in lockdep.c as:
> >
> >   void lockdep_off(void)
> >   {
> >         current->lockdep_recursion += LOCKDEP_OFF;
> >   }
> >
> > This would imply that all of the macros:
> >
> >   - lockdep_assert_irqs_enabled()
> >   - lockdep_assert_irqs_disabled()
> >   - lockdep_assert_in_irq()
> >   - lockdep_assert_preemption_disabled()
> >   - lockdep_assert_preemption_enabled()
> >
> > will do the lockdep checks *even if* lockdep_off() was called.
> >
> > This doesn't sound right. Even if all of the above macros' call sites
> > didn't care about lockdep_off()/on(), it is semantically incoherent.
>
> lockdep_off() is an abomination and really should not exist.
>
> That dm-cache-target.c thing, for example, is atrocious shite that will
> explode on -rt. Whoever wrote that needs a 'medal'.
>
> People using it deserve all the pain they get.
>
> Also; IRQ state _should_ be tracked irrespective of tracking lock
> dependencies -- I see that that currently isn't entirely the case, lemme
> go fix that.
>

Exactly. Currently, all the lockdep IRQ checks get nullified if
lockdep_off() is called. That was the source of my confusion.

If you have any extra patches on this, I can also queue them in the
next iteration of this series, before this patch.

Thanks a lot,

--
Ahmed S. Darwish
Linutronix GmbH


* Re: [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount
  2020-05-20 12:51       ` Eric Dumazet
@ 2020-06-03 14:33         ` Ahmed S. Darwish
  0 siblings, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-06-03 14:33 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Waiman Long, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Thomas Gleixner, Paul E. McKenney, Sebastian A. Siewior,
	Steven Rostedt, LKML, David S. Miller, Jakub Kicinski, netdev

On Wed, May 20, 2020 at 05:51:27AM -0700, Eric Dumazet wrote:
>
> On 5/19/20 11:42 PM, Ahmed S. Darwish wrote:
> > Hello Eric,
> >
> > On Tue, May 19, 2020 at 07:01:38PM -0700, Eric Dumazet wrote:
> >>
> >> On 5/19/20 2:45 PM, Ahmed S. Darwish wrote:
> >>> Sequence counters write paths are critical sections that must never be
> >>> preempted, and blocking, even for CONFIG_PREEMPTION=n, is not allowed.
> >>>
> >>> Commit 5dbe7c178d3f ("net: fix kernel deadlock with interface rename and
> >>> netdev name retrieval.") handled a deadlock, observed with
> >>> CONFIG_PREEMPTION=n, where the devnet_rename seqcount read side was
> >>> infinitely spinning: it got scheduled after the seqcount write side
> >>> blocked inside its own critical section.
> >>>
> >>> To fix that deadlock, among other issues, the commit added a
> >>> cond_resched() inside the read side section. While this will get the
> >>> non-preemptible kernel eventually unstuck, the seqcount reader is fully
> >>> exhausting its slice just spinning -- until TIF_NEED_RESCHED is set.
> >>>
> >>> The fix is also still broken: if the seqcount reader belongs to a
> >>> real-time scheduling policy, it can spin forever and the kernel will
> >>> livelock.
> >>>
> >>> Disabling preemption over the seqcount write side critical section will
> >>> not work: inside it are a number of GFP_KERNEL allocations and mutex
> >>> locking through the drivers/base/ :: device_rename() call chain.
> >>>
> >>> From all the above, replace the seqcount with a rwsem.
> >>>
> >>> Fixes: 5dbe7c178d3f (net: fix kernel deadlock with interface rename and netdev name retrieval.)
> >>> Fixes: 30e6c9fa93cf (net: devnet_rename_seq should be a seqcount)
> >>> Fixes: c91f6df2db49 (sockopt: Change getsockopt() of SO_BINDTODEVICE to return an interface name)
> >>> Cc: <stable@vger.kernel.org>
> >>> Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
> >>> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> >>> ---
> >>>  net/core/dev.c | 30 ++++++++++++------------------
> >>>  1 file changed, 12 insertions(+), 18 deletions(-)
> >>>
> >>
> >> Seems fine to me, assuming rwsem prevent starvation of the writer.
> >>
> >
> > Thanks for the review.
> >
> > AFAIK, due to 5cfd92e12e13 ("locking/rwsem: Adaptive disabling of reader
> > optimistic spinning"), using a rwsem shouldn't lead to writer starvation
> > in the contended case.
>
> Hmm, this was in linux-5.3, so very recent stuff.
>
> Has this patch been backported to stable releases?
>
> With all the Fixes: tags you added, stable teams will backport this
> networking patch to all stable versions.
>
> Do we have a way to tune a dedicated rwsem to "give preference to the
> (unique in this case) writer" over a myriad of potential readers?
>

I was wrong to reference commit 5cfd92e12e13 above.

Before and after that commit, once an rwsem writer is blocked waiting,
all subsequent readers will block until that writer makes progress.

Given that behavior, and that the read side section is already quite
short, I don't think there's any real risk of writer starvation here.
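
For reference, the resulting locking pattern, as a sketch (names as in
the patch, bodies elided):

	static DECLARE_RWSEM(devnet_rename_sem);

	/* Read side, netdev_get_name(): a short section that sleeps
	 * instead of spinning when a writer is in progress. */
	down_read(&devnet_rename_sem);
	/* ... copy dev->name out ... */
	up_read(&devnet_rename_sem);

	/* Write side, dev_change_name(): may block, e.g. through
	 * device_rename() and GFP_KERNEL allocations, which is legal
	 * under an rwsem but not inside a seqcount write section. */
	down_write(&devnet_rename_sem);
	/* ... perform the rename ... */
	up_write(&devnet_rename_sem);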

(A v2 will be sent shortly, fixing the error found by Dan and the kbuild bot.)

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH


* Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API
  2020-05-26  8:13             ` Peter Zijlstra
  2020-05-26  9:45               ` Ahmed S. Darwish
@ 2020-06-03 15:30               ` Ahmed S. Darwish
  1 sibling, 0 replies; 77+ messages in thread
From: Ahmed S. Darwish @ 2020-06-03 15:30 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Sebastian A. Siewior, Ingo Molnar, Will Deacon, Thomas Gleixner,
	Paul E. McKenney, Steven Rostedt, LKML

On Tue, May 26, 2020 at 10:13:50AM +0200, Peter Zijlstra wrote:
> On Tue, May 26, 2020 at 02:52:31AM +0200, Ahmed S. Darwish wrote:
> > Peter Zijlstra <peterz@infradead.org> wrote:
>
> > > +#define lockdep_assert_irqs_enabled()					\
> > > +do {									\
> > > +	WARN_ON_ONCE(debug_locks && !this_cpu_read(hardirqs_enabled));	\
> > > +} while (0)
> > >
> >
> > Given that lockdep_off() is defined in lockdep.c as:
> >
> >   void lockdep_off(void)
> >   {
> >         current->lockdep_recursion += LOCKDEP_OFF;
> >   }
> >
> > This would imply that all of the macros:
> >
> >   - lockdep_assert_irqs_enabled()
> >   - lockdep_assert_irqs_disabled()
> >   - lockdep_assert_in_irq()
> >   - lockdep_assert_preemption_disabled()
> >   - lockdep_assert_preemption_enabled()
> >
> > will do the lockdep checks *even if* lockdep_off() was called.
> >
> > This doesn't sound right. Even if all of the above macros' call sites
> > didn't care about lockdep_off()/on(), it is semantically incoherent.
>
> lockdep_off() is an abomination and really should not exist.
>
> That dm-cache-target.c thing, for example, is atrocious shite that will
> explode on -rt. Whoever wrote that needs a 'medal'.
>
> People using it deserve all the pain they get.
>
> Also; IRQ state _should_ be tracked irrespective of tracking lock
> dependencies -- I see that that currently isn't entirely the case, lemme
> go fix that.
>

Since the lockdep/x86 series:

  https://lkml.kernel.org/r/20200529212728.795169701@infradead.org
  https://lkml.kernel.org/r/20200529213550.683440625@infradead.org

are pending and quite big, I'll drop patch #7 and patch #8 from this
series, and post a seqlock v2.

This way, this seqlock series can move forward.

Patches #7 and #8 are an "add-on" debugging feature anyway. They're
quite important, of course, as evidenced by the number of buggy call
sites they've found, but they don't affect the rest of the seqlock
series in any way.

Once the lockdep/x86 series above gets merged, I can easily rebase and
post patches #7 and #8 again.

Thanks a lot,

--
Ahmed S. Darwish
Linutronix GmbH


Thread overview: 77+ messages
2020-05-19 21:45 [PATCH v1 00/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 01/25] net: core: device_rename: Use rwsem instead of a seqcount Ahmed S. Darwish
2020-05-19 22:01   ` Stephen Hemminger
2020-05-19 22:23     ` Thomas Gleixner
2020-05-19 23:11       ` Stephen Hemminger
2020-05-19 23:42         ` Thomas Gleixner
2020-05-20  0:06           ` Stephen Hemminger
2020-05-20  1:55             ` Thomas Gleixner
2020-05-20  2:57           ` David Miller
2020-05-20  3:18             ` Eric Dumazet
2020-05-20  4:36               ` Stephen Hemminger
2020-05-20 19:37             ` Thomas Gleixner
2020-05-20 21:36               ` Stephen Hemminger
2020-05-20  2:01   ` Eric Dumazet
2020-05-20  6:42     ` Ahmed S. Darwish
2020-05-20 12:51       ` Eric Dumazet
2020-06-03 14:33         ` Ahmed S. Darwish
2020-05-20 14:37   ` Dan Carpenter
2020-05-25 16:22     ` Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 02/25] mm/swap: Don't abuse the seqcount latching API Ahmed S. Darwish
2020-05-20 12:22   ` Konstantin Khlebnikov
2020-05-20 13:05     ` Peter Zijlstra
2020-05-22 14:57   ` Peter Zijlstra
2020-05-22 15:17     ` Sebastian A. Siewior
2020-05-22 16:23       ` Peter Zijlstra
2020-05-25 15:24     ` Ahmed S. Darwish
2020-05-25 15:45       ` Peter Zijlstra
2020-05-25 16:10     ` John Ogness
2020-05-19 21:45 ` [PATCH v1 03/25] net: phy: fixed_phy: Remove unused seqcount Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 04/25] block: nr_sects_write(): Disable preemption on seqcount write Ahmed S. Darwish
2020-05-22 16:39   ` Peter Zijlstra
2020-05-25  9:56     ` Ahmed S. Darwish
     [not found]   ` <20200522001237.A00E8206BE@mail.kernel.org>
2020-05-25 10:12     ` Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 05/25] u64_stats: Document writer non-preemptibility requirement Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 06/25] dma-buf: Remove custom seqcount lockdep class key Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 07/25] lockdep: Add preemption disabled assertion API Ahmed S. Darwish
2020-05-22 17:55   ` Peter Zijlstra
2020-05-23 14:59     ` Sebastian A. Siewior
2020-05-23 22:41       ` Peter Zijlstra
2020-05-24 10:50         ` Sebastian A. Siewior
2020-05-25 10:22         ` Peter Zijlstra
2020-05-26  0:52           ` Ahmed S. Darwish
2020-05-26  8:13             ` Peter Zijlstra
2020-05-26  9:45               ` Ahmed S. Darwish
2020-06-03 15:30               ` Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 08/25] seqlock: lockdep assert non-preemptibility on seqcount_t write Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 09/25] Documentation: locking: Describe seqlock design and usage Ahmed S. Darwish
2020-05-22 18:01   ` Peter Zijlstra
2020-05-22 22:24     ` Steven Rostedt
2020-05-25 10:50       ` Ahmed S. Darwish
2020-05-25 11:02         ` Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 10/25] seqlock: Add RST directives to kernel-doc code samples and notes Ahmed S. Darwish
2020-05-22 18:02   ` Peter Zijlstra
2020-05-22 18:03     ` Peter Zijlstra
2020-05-22 18:26       ` Thomas Gleixner
2020-05-22 18:32         ` Peter Zijlstra
2020-05-25  9:36           ` Ahmed S. Darwish
2020-05-25 13:44             ` Peter Zijlstra
2020-05-25 14:07               ` Peter Zijlstra
2020-05-19 21:45 ` [PATCH v1 11/25] seqlock: Add missing kernel-doc annotations Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 12/25] seqlock: Extend seqcount API with associated locks Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 13/25] dma-buf: Use sequence counter with associated wound/wait mutex Ahmed S. Darwish
2020-05-20 10:48   ` Christian König
2020-05-21  0:09     ` Ahmed S. Darwish
2020-05-21 13:20       ` Christian König
2020-05-19 21:45 ` [PATCH v1 14/25] sched: tasks: Use sequence counter with associated spinlock Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 15/25] netfilter: conntrack: " Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 16/25] netfilter: nft_set_rbtree: Use sequence counter with associated rwlock Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 17/25] xfrm: policy: Use sequence counters with associated lock Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 18/25] timekeeping: Use sequence counter with associated raw spinlock Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 19/25] vfs: Use sequence counter with associated spinlock Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 20/25] raid5: " Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 21/25] iocost: " Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 22/25] NFSv4: " Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 23/25] userfaultfd: " Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 24/25] kvm/eventfd: " Ahmed S. Darwish
2020-05-19 21:45 ` [PATCH v1 25/25] hrtimer: Use sequence counter with associated raw spinlock Ahmed S. Darwish
