bpf.vger.kernel.org archive mirror
* [PATCH 0/2] locking: Add new lock contention tracepoints (v4)
@ 2022-03-22 18:57 Namhyung Kim
  2022-03-22 18:57 ` [PATCH 1/2] locking: Add lock contention tracepoints Namhyung Kim
  2022-03-22 18:57 ` [PATCH 2/2] locking: Apply contention tracepoints in the slow path Namhyung Kim
  0 siblings, 2 replies; 29+ messages in thread
From: Namhyung Kim @ 2022-03-22 18:57 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng
  Cc: LKML, Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf

Hello,

There have been some requests for low-overhead kernel lock contention
monitoring.  The kernel has CONFIG_LOCK_STAT to provide such an
infrastructure, either via /proc/lock_stat or via tracepoints directly.

However, it is not lightweight and is hard to use in production.  So
I'm trying to add new tracepoints for lock contention and to use them
as a base for building a new monitoring system.

* Changes in v4
 - use __print_flags in the TP_printk()
 - reworked __down_common for semaphore
 - add Tested-by from Hyeonggon Yoo
 
* Changes in v3
 - move the tracepoints deeper in the slow path
 - remove the caller ip
 - don't use task state in the flags
 - add 'ret' field to the contention end tracepoint

* Changes in v2
 - do not use lockdep infrastructure
 - add flags argument to lock:contention_begin tracepoint

I added a flags argument to the contention_begin tracepoint to classify
the lock in question.  It can tell whether it is a spinlock, a
reader-writer lock or a mutex.  With a stacktrace, users can identify
which lock is contended.
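
For example, both tracepoints can be recorded system-wide with perf,
together with callchains to locate the contended locks (an illustrative
command, not part of this series):

  # perf record -e lock:contention_begin -e lock:contention_end -a -g -- sleep 10

With the TP_printk() formats in patch 01, the rendered records then
look roughly like below (the lock address and flag combination are made
up purely for illustration):

  contention_begin: 0xffff8c0a1b2c3d40 (flags=SPIN|READ)
  contention_end: 0xffff8c0a1b2c3d40 (ret=0)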

Patch 01 adds the tracepoints and moves their definition to mutex.c so
that they are available even without lockdep.

Patch 02 actually installs the tracepoints in the locking code.  To
minimize the overhead, they are added separately in the slow paths.
As spinlocks are defined in the arch headers, I couldn't handle them
all; I've only added them to the generic queued spinlock and rwlocks
for now.  Each arch can add the tracepoints later.
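
As an example of how the annotations end up looking, below is the
queued rwlock write slow path after patch 02, with the body of the
wait loop elided (see the full hunk in the patch):

void queued_write_lock_slowpath(struct qrwlock *lock)
{
	trace_contention_begin(lock, LCB_F_WRITE | LCB_F_SPIN);

	/* Put the writer into the wait queue */
	arch_spin_lock(&lock->wait_lock);

	/* ... spin until the write lock is acquired ... */

	arch_spin_unlock(&lock->wait_lock);

	trace_contention_end(lock, 0);
}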

This series is based on the current tip/locking/core and you can get
it from the 'locking/tracepoint-v4' branch in my tree at:

  git://git.kernel.org/pub/scm/linux/kernel/git/namhyung/linux-perf.git


Thanks,
Namhyung


Namhyung Kim (2):
  locking: Add lock contention tracepoints
  locking: Apply contention tracepoints in the slow path

 include/trace/events/lock.h   | 61 +++++++++++++++++++++++++++++++++--
 kernel/locking/lockdep.c      |  1 -
 kernel/locking/mutex.c        |  6 ++++
 kernel/locking/percpu-rwsem.c |  3 ++
 kernel/locking/qrwlock.c      |  9 ++++++
 kernel/locking/qspinlock.c    |  5 +++
 kernel/locking/rtmutex.c      | 11 +++++++
 kernel/locking/rwbase_rt.c    |  3 ++
 kernel/locking/rwsem.c        |  9 ++++++
 kernel/locking/semaphore.c    | 15 ++++++++-
 10 files changed, 118 insertions(+), 5 deletions(-)


base-commit: cd27ccfc727e99352321c0c75012ab9c5a90321e
-- 
2.35.1.894.gb6a874cedc-goog



* [PATCH 1/2] locking: Add lock contention tracepoints
  2022-03-22 18:57 [PATCH 0/2] locking: Add new lock contention tracepoints (v4) Namhyung Kim
@ 2022-03-22 18:57 ` Namhyung Kim
  2022-03-22 18:57 ` [PATCH 2/2] locking: Apply contention tracepoints in the slow path Namhyung Kim
  1 sibling, 0 replies; 29+ messages in thread
From: Namhyung Kim @ 2022-03-22 18:57 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng
  Cc: LKML, Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf, Hyeonggon Yoo

This adds the following two lock contention tracepoints:

 * lock:contention_begin
 * lock:contention_end

The lock:contention_begin tracepoint takes a flags argument to classify
locks.  I found it useful to identify what kind of lock it is tracing,
such as whether it is spinning or sleeping, a reader-writer lock,
real-time, or per-cpu.
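
For illustration, these are some of the flag combinations passed by the
call sites in patch 02 of this series (the mutex and semaphore paths
pass 0 in this version):

	/* qspinlock */
	trace_contention_begin(lock, LCB_F_SPIN);
	/* qrwlock, reader side */
	trace_contention_begin(lock, LCB_F_READ | LCB_F_SPIN);
	/* rwsem, writer side */
	trace_contention_begin(sem, LCB_F_WRITE);
	/* rtmutex */
	trace_contention_begin(lock, LCB_F_RT);
	/* percpu-rwsem, reader side */
	trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_READ);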

Move the tracepoint definitions (CREATE_TRACE_POINTS) from lockdep.c
into mutex.c, which is always built, so that we can use them without
lockdep.

Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 include/trace/events/lock.h | 61 +++++++++++++++++++++++++++++++++++--
 kernel/locking/lockdep.c    |  1 -
 kernel/locking/mutex.c      |  3 ++
 3 files changed, 61 insertions(+), 4 deletions(-)

diff --git a/include/trace/events/lock.h b/include/trace/events/lock.h
index d7512129a324..b9b6e3edd518 100644
--- a/include/trace/events/lock.h
+++ b/include/trace/events/lock.h
@@ -5,11 +5,21 @@
 #if !defined(_TRACE_LOCK_H) || defined(TRACE_HEADER_MULTI_READ)
 #define _TRACE_LOCK_H
 
-#include <linux/lockdep.h>
+#include <linux/sched.h>
 #include <linux/tracepoint.h>
 
+/* flags for lock:contention_begin */
+#define LCB_F_SPIN	(1U << 0)
+#define LCB_F_READ	(1U << 1)
+#define LCB_F_WRITE	(1U << 2)
+#define LCB_F_RT	(1U << 3)
+#define LCB_F_PERCPU	(1U << 4)
+
+
 #ifdef CONFIG_LOCKDEP
 
+#include <linux/lockdep.h>
+
 TRACE_EVENT(lock_acquire,
 
 	TP_PROTO(struct lockdep_map *lock, unsigned int subclass,
@@ -78,8 +88,53 @@ DEFINE_EVENT(lock, lock_acquired,
 	TP_ARGS(lock, ip)
 );
 
-#endif
-#endif
+#endif /* CONFIG_LOCK_STAT */
+#endif /* CONFIG_LOCKDEP */
+
+TRACE_EVENT(contention_begin,
+
+	TP_PROTO(void *lock, unsigned int flags),
+
+	TP_ARGS(lock, flags),
+
+	TP_STRUCT__entry(
+		__field(void *, lock_addr)
+		__field(unsigned int, flags)
+	),
+
+	TP_fast_assign(
+		__entry->lock_addr = lock;
+		__entry->flags = flags;
+	),
+
+	TP_printk("%p (flags=%s)", __entry->lock_addr,
+		  __print_flags(__entry->flags, "|",
+				{ LCB_F_SPIN,		"SPIN" },
+				{ LCB_F_READ,		"READ" },
+				{ LCB_F_WRITE,		"WRITE" },
+				{ LCB_F_RT,		"RT" },
+				{ LCB_F_PERCPU,		"PERCPU" }
+			  ))
+);
+
+TRACE_EVENT(contention_end,
+
+	TP_PROTO(void *lock, int ret),
+
+	TP_ARGS(lock, ret),
+
+	TP_STRUCT__entry(
+		__field(void *, lock_addr)
+		__field(int, ret)
+	),
+
+	TP_fast_assign(
+		__entry->lock_addr = lock;
+		__entry->ret = ret;
+	),
+
+	TP_printk("%p (ret=%d)", __entry->lock_addr, __entry->ret)
+);
 
 #endif /* _TRACE_LOCK_H */
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 50036c10b518..08f8fb6a2d1e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -60,7 +60,6 @@
 
 #include "lockdep_internals.h"
 
-#define CREATE_TRACE_POINTS
 #include <trace/events/lock.h>
 
 #ifdef CONFIG_PROVE_LOCKING
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 5e3585950ec8..ee2fd7614a93 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -30,6 +30,9 @@
 #include <linux/debug_locks.h>
 #include <linux/osq_lock.h>
 
+#define CREATE_TRACE_POINTS
+#include <trace/events/lock.h>
+
 #ifndef CONFIG_PREEMPT_RT
 #include "mutex.h"
 
-- 
2.35.1.894.gb6a874cedc-goog



* [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-22 18:57 [PATCH 0/2] locking: Add new lock contention tracepoints (v4) Namhyung Kim
  2022-03-22 18:57 ` [PATCH 1/2] locking: Add lock contention tracepoints Namhyung Kim
@ 2022-03-22 18:57 ` Namhyung Kim
  2022-03-28 11:29   ` Peter Zijlstra
  2022-03-28 11:39   ` Peter Zijlstra
  1 sibling, 2 replies; 29+ messages in thread
From: Namhyung Kim @ 2022-03-22 18:57 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng
  Cc: LKML, Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf, Hyeonggon Yoo

Adding the lock contention tracepoints in various lock function slow
paths.  Note that each arch can define spinlocks differently, so I
only added them to the generic qspinlock for now.

Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 kernel/locking/mutex.c        |  3 +++
 kernel/locking/percpu-rwsem.c |  3 +++
 kernel/locking/qrwlock.c      |  9 +++++++++
 kernel/locking/qspinlock.c    |  5 +++++
 kernel/locking/rtmutex.c      | 11 +++++++++++
 kernel/locking/rwbase_rt.c    |  3 +++
 kernel/locking/rwsem.c        |  9 +++++++++
 kernel/locking/semaphore.c    | 15 ++++++++++++++-
 8 files changed, 57 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index ee2fd7614a93..c88deda77cf2 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -644,6 +644,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	}
 
 	set_current_state(state);
+	trace_contention_begin(lock, 0);
 	for (;;) {
 		bool first;
 
@@ -710,6 +711,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 skip_wait:
 	/* got the lock - cleanup and rejoice! */
 	lock_acquired(&lock->dep_map, ip);
+	trace_contention_end(lock, 0);
 
 	if (ww_ctx)
 		ww_mutex_lock_acquired(ww, ww_ctx);
@@ -721,6 +723,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 err:
 	__set_current_state(TASK_RUNNING);
 	__mutex_remove_waiter(lock, &waiter);
+	trace_contention_end(lock, ret);
 err_early_kill:
 	raw_spin_unlock(&lock->wait_lock);
 	debug_mutex_free_waiter(&waiter);
diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
index c9fdae94e098..833043613af6 100644
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -9,6 +9,7 @@
 #include <linux/sched/task.h>
 #include <linux/sched/debug.h>
 #include <linux/errno.h>
+#include <trace/events/lock.h>
 
 int __percpu_init_rwsem(struct percpu_rw_semaphore *sem,
 			const char *name, struct lock_class_key *key)
@@ -154,6 +155,7 @@ static void percpu_rwsem_wait(struct percpu_rw_semaphore *sem, bool reader)
 	}
 	spin_unlock_irq(&sem->waiters.lock);
 
+	trace_contention_begin(sem, LCB_F_PERCPU | (reader ? LCB_F_READ : LCB_F_WRITE));
 	while (wait) {
 		set_current_state(TASK_UNINTERRUPTIBLE);
 		if (!smp_load_acquire(&wq_entry.private))
@@ -161,6 +163,7 @@ static void percpu_rwsem_wait(struct percpu_rw_semaphore *sem, bool reader)
 		schedule();
 	}
 	__set_current_state(TASK_RUNNING);
+	trace_contention_end(sem, 0);
 }
 
 bool __sched __percpu_down_read(struct percpu_rw_semaphore *sem, bool try)
diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index ec36b73f4733..b9f6f963d77f 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -12,6 +12,7 @@
 #include <linux/percpu.h>
 #include <linux/hardirq.h>
 #include <linux/spinlock.h>
+#include <trace/events/lock.h>
 
 /**
  * queued_read_lock_slowpath - acquire read lock of a queue rwlock
@@ -34,6 +35,8 @@ void queued_read_lock_slowpath(struct qrwlock *lock)
 	}
 	atomic_sub(_QR_BIAS, &lock->cnts);
 
+	trace_contention_begin(lock, LCB_F_READ | LCB_F_SPIN);
+
 	/*
 	 * Put the reader into the wait queue
 	 */
@@ -51,6 +54,8 @@ void queued_read_lock_slowpath(struct qrwlock *lock)
 	 * Signal the next one in queue to become queue head
 	 */
 	arch_spin_unlock(&lock->wait_lock);
+
+	trace_contention_end(lock, 0);
 }
 EXPORT_SYMBOL(queued_read_lock_slowpath);
 
@@ -62,6 +67,8 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
 {
 	int cnts;
 
+	trace_contention_begin(lock, LCB_F_WRITE | LCB_F_SPIN);
+
 	/* Put the writer into the wait queue */
 	arch_spin_lock(&lock->wait_lock);
 
@@ -79,5 +86,7 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
 	} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));
 unlock:
 	arch_spin_unlock(&lock->wait_lock);
+
+	trace_contention_end(lock, 0);
 }
 EXPORT_SYMBOL(queued_write_lock_slowpath);
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index cbff6ba53d56..65a9a10caa6f 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -22,6 +22,7 @@
 #include <linux/prefetch.h>
 #include <asm/byteorder.h>
 #include <asm/qspinlock.h>
+#include <trace/events/lock.h>
 
 /*
  * Include queued spinlock statistics code
@@ -401,6 +402,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	idx = node->count++;
 	tail = encode_tail(smp_processor_id(), idx);
 
+	trace_contention_begin(lock, LCB_F_SPIN);
+
 	/*
 	 * 4 nodes are allocated based on the assumption that there will
 	 * not be nested NMIs taking spinlocks. That may not be true in
@@ -554,6 +557,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	pv_kick_node(lock, next);
 
 release:
+	trace_contention_end(lock, 0);
+
 	/*
 	 * release the node
 	 */
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 8555c4efe97c..7779ee8abc2a 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -24,6 +24,8 @@
 #include <linux/sched/wake_q.h>
 #include <linux/ww_mutex.h>
 
+#include <trace/events/lock.h>
+
 #include "rtmutex_common.h"
 
 #ifndef WW_RT
@@ -1579,6 +1581,8 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 
 	set_current_state(state);
 
+	trace_contention_begin(lock, LCB_F_RT);
+
 	ret = task_blocks_on_rt_mutex(lock, waiter, current, ww_ctx, chwalk);
 	if (likely(!ret))
 		ret = rt_mutex_slowlock_block(lock, ww_ctx, state, NULL, waiter);
@@ -1601,6 +1605,9 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 	 * unconditionally. We might have to fix that up.
 	 */
 	fixup_rt_mutex_waiters(lock);
+
+	trace_contention_end(lock, ret);
+
 	return ret;
 }
 
@@ -1683,6 +1690,8 @@ static void __sched rtlock_slowlock_locked(struct rt_mutex_base *lock)
 	/* Save current state and set state to TASK_RTLOCK_WAIT */
 	current_save_and_set_rtlock_wait_state();
 
+	trace_contention_begin(lock, LCB_F_RT);
+
 	task_blocks_on_rt_mutex(lock, &waiter, current, NULL, RT_MUTEX_MIN_CHAINWALK);
 
 	for (;;) {
@@ -1712,6 +1721,8 @@ static void __sched rtlock_slowlock_locked(struct rt_mutex_base *lock)
 	 */
 	fixup_rt_mutex_waiters(lock);
 	debug_rt_mutex_free_waiter(&waiter);
+
+	trace_contention_end(lock, 0);
 }
 
 static __always_inline void __sched rtlock_slowlock(struct rt_mutex_base *lock)
diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
index 6fd3162e4098..ec7b1fda7982 100644
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -247,11 +247,13 @@ static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
 		goto out_unlock;
 
 	rwbase_set_and_save_current_state(state);
+	trace_contention_begin(rwb, LCB_F_WRITE | LCB_F_RT);
 	for (;;) {
 		/* Optimized out for rwlocks */
 		if (rwbase_signal_pending_state(state, current)) {
 			rwbase_restore_current_state();
 			__rwbase_write_unlock(rwb, 0, flags);
+			trace_contention_end(rwb, -EINTR);
 			return -EINTR;
 		}
 
@@ -265,6 +267,7 @@ static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
 		set_current_state(state);
 	}
 	rwbase_restore_current_state();
+	trace_contention_end(rwb, 0);
 
 out_unlock:
 	raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index acde5d6f1254..465db7bd84f8 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -27,6 +27,7 @@
 #include <linux/export.h>
 #include <linux/rwsem.h>
 #include <linux/atomic.h>
+#include <trace/events/lock.h>
 
 #ifndef CONFIG_PREEMPT_RT
 #include "lock_events.h"
@@ -1014,6 +1015,8 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
 	raw_spin_unlock_irq(&sem->wait_lock);
 	wake_up_q(&wake_q);
 
+	trace_contention_begin(sem, LCB_F_READ);
+
 	/* wait to be given the lock */
 	for (;;) {
 		set_current_state(state);
@@ -1035,6 +1038,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
 
 	__set_current_state(TASK_RUNNING);
 	lockevent_inc(rwsem_rlock);
+	trace_contention_end(sem, 0);
 	return sem;
 
 out_nolock:
@@ -1042,6 +1046,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
 	raw_spin_unlock_irq(&sem->wait_lock);
 	__set_current_state(TASK_RUNNING);
 	lockevent_inc(rwsem_rlock_fail);
+	trace_contention_end(sem, -EINTR);
 	return ERR_PTR(-EINTR);
 }
 
@@ -1109,6 +1114,8 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 wait:
 	/* wait until we successfully acquire the lock */
 	set_current_state(state);
+	trace_contention_begin(sem, LCB_F_WRITE);
+
 	for (;;) {
 		if (rwsem_try_write_lock(sem, &waiter)) {
 			/* rwsem_try_write_lock() implies ACQUIRE on success */
@@ -1148,6 +1155,7 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 	__set_current_state(TASK_RUNNING);
 	raw_spin_unlock_irq(&sem->wait_lock);
 	lockevent_inc(rwsem_wlock);
+	trace_contention_end(sem, 0);
 	return sem;
 
 out_nolock:
@@ -1159,6 +1167,7 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 	raw_spin_unlock_irq(&sem->wait_lock);
 	wake_up_q(&wake_q);
 	lockevent_inc(rwsem_wlock_fail);
+	trace_contention_end(sem, -EINTR);
 	return ERR_PTR(-EINTR);
 }
 
diff --git a/kernel/locking/semaphore.c b/kernel/locking/semaphore.c
index 9ee381e4d2a4..f2654d2fe43a 100644
--- a/kernel/locking/semaphore.c
+++ b/kernel/locking/semaphore.c
@@ -32,6 +32,7 @@
 #include <linux/semaphore.h>
 #include <linux/spinlock.h>
 #include <linux/ftrace.h>
+#include <trace/events/lock.h>
 
 static noinline void __down(struct semaphore *sem);
 static noinline int __down_interruptible(struct semaphore *sem);
@@ -205,7 +206,7 @@ struct semaphore_waiter {
  * constant, and thus optimised away by the compiler.  Likewise the
  * 'timeout' parameter for the cases without timeouts.
  */
-static inline int __sched __down_common(struct semaphore *sem, long state,
+static inline int __sched ___down_common(struct semaphore *sem, long state,
 								long timeout)
 {
 	struct semaphore_waiter waiter;
@@ -236,6 +237,18 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
 	return -EINTR;
 }
 
+static inline int __sched __down_common(struct semaphore *sem, long state,
+					long timeout)
+{
+	int ret;
+
+	trace_contention_begin(sem, 0);
+	ret = ___down_common(sem, state, timeout);
+	trace_contention_end(sem, ret);
+
+	return ret;
+}
+
 static noinline void __sched __down(struct semaphore *sem)
 {
 	__down_common(sem, TASK_UNINTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
-- 
2.35.1.894.gb6a874cedc-goog



* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-22 18:57 ` [PATCH 2/2] locking: Apply contention tracepoints in the slow path Namhyung Kim
@ 2022-03-28 11:29   ` Peter Zijlstra
  2022-03-28 17:41     ` Namhyung Kim
  2022-03-28 11:39   ` Peter Zijlstra
  1 sibling, 1 reply; 29+ messages in thread
From: Peter Zijlstra @ 2022-03-28 11:29 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng, LKML,
	Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf, Hyeonggon Yoo

On Tue, Mar 22, 2022 at 11:57:09AM -0700, Namhyung Kim wrote:
> Adding the lock contention tracepoints in various lock function slow
> paths.  Note that each arch can define spinlocks differently, so I
> only added them to the generic qspinlock for now.
> 
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
>  kernel/locking/mutex.c        |  3 +++
>  kernel/locking/percpu-rwsem.c |  3 +++
>  kernel/locking/qrwlock.c      |  9 +++++++++
>  kernel/locking/qspinlock.c    |  5 +++++
>  kernel/locking/rtmutex.c      | 11 +++++++++++
>  kernel/locking/rwbase_rt.c    |  3 +++
>  kernel/locking/rwsem.c        |  9 +++++++++
>  kernel/locking/semaphore.c    | 15 ++++++++++++++-
>  8 files changed, 57 insertions(+), 1 deletion(-)

I had conflicts in rwsem.c due to Waiman's patches, but that was simple
enough to resolve. However, I had a good look at the other sites and
ended up with the below...

Yes, I know I'm the one that suggested the percpu thing, but upon
looking again it missed the largest part of percpu_down_write(), which
very much includes that RCU grace period and waiting for the readers to
bugger off

Also, rwbase_rt was missing the entire READ side -- yes, I see that's
also covered by the rtmutex.c part, but that's on a different address and
with different flags, and it's very confusing to not have it annotated.

Anyway, I'll queue this patch with the below folded in for post -rc1.

---

--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -155,7 +155,6 @@ static void percpu_rwsem_wait(struct per
 	}
 	spin_unlock_irq(&sem->waiters.lock);
 
-	trace_contention_begin(sem, LCB_F_PERCPU | (reader ? LCB_F_READ : LCB_F_WRITE));
 	while (wait) {
 		set_current_state(TASK_UNINTERRUPTIBLE);
 		if (!smp_load_acquire(&wq_entry.private))
@@ -163,7 +162,6 @@ static void percpu_rwsem_wait(struct per
 		schedule();
 	}
 	__set_current_state(TASK_RUNNING);
-	trace_contention_end(sem, 0);
 }
 
 bool __sched __percpu_down_read(struct percpu_rw_semaphore *sem, bool try)
@@ -174,9 +172,11 @@ bool __sched __percpu_down_read(struct p
 	if (try)
 		return false;
 
+	trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_READ);
 	preempt_enable();
 	percpu_rwsem_wait(sem, /* .reader = */ true);
 	preempt_disable();
+	trace_contention_end(sem, 0);
 
 	return true;
 }
@@ -219,6 +219,7 @@ void __sched percpu_down_write(struct pe
 {
 	might_sleep();
 	rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
+	trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
 
 	/* Notify readers to take the slow path. */
 	rcu_sync_enter(&sem->rss);
@@ -240,6 +241,7 @@ void __sched percpu_down_write(struct pe
 
 	/* Wait for all active readers to complete. */
 	rcuwait_wait_event(&sem->writer, readers_active_check(sem), TASK_UNINTERRUPTIBLE);
+	trace_contention_end(sem, 0);
 }
 EXPORT_SYMBOL_GPL(percpu_down_write);
 
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -35,7 +35,7 @@ void queued_read_lock_slowpath(struct qr
 	}
 	atomic_sub(_QR_BIAS, &lock->cnts);
 
-	trace_contention_begin(lock, LCB_F_READ | LCB_F_SPIN);
+	trace_contention_begin(lock, LCB_F_SPIN | LCB_F_READ);
 
 	/*
 	 * Put the reader into the wait queue
@@ -67,7 +67,7 @@ void queued_write_lock_slowpath(struct q
 {
 	int cnts;
 
-	trace_contention_begin(lock, LCB_F_WRITE | LCB_F_SPIN);
+	trace_contention_begin(lock, LCB_F_SPIN | LCB_F_WRITE);
 
 	/* Put the writer into the wait queue */
 	arch_spin_lock(&lock->wait_lock);
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -112,6 +112,8 @@ static int __sched __rwbase_read_lock(st
 	 * Reader2 to call up_read(), which might be unbound.
 	 */
 
+	trace_contention_begin(rwb, LCB_F_RT | LCB_F_READ);
+
 	/*
 	 * For rwlocks this returns 0 unconditionally, so the below
 	 * !ret conditionals are optimized out.
@@ -130,6 +132,8 @@ static int __sched __rwbase_read_lock(st
 	raw_spin_unlock_irq(&rtm->wait_lock);
 	if (!ret)
 		rwbase_rtmutex_unlock(rtm);
+
+	trace_contention_end(rwb, ret);
 	return ret;
 }
 
@@ -247,7 +251,7 @@ static int __sched rwbase_write_lock(str
 		goto out_unlock;
 
 	rwbase_set_and_save_current_state(state);
-	trace_contention_begin(rwb, LCB_F_WRITE | LCB_F_RT);
+	trace_contention_begin(rwb, LCB_F_RT | LCB_F_WRITE);
 	for (;;) {
 		/* Optimized out for rwlocks */
 		if (rwbase_signal_pending_state(state, current)) {


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-22 18:57 ` [PATCH 2/2] locking: Apply contention tracepoints in the slow path Namhyung Kim
  2022-03-28 11:29   ` Peter Zijlstra
@ 2022-03-28 11:39   ` Peter Zijlstra
  2022-03-28 17:48     ` Namhyung Kim
  1 sibling, 1 reply; 29+ messages in thread
From: Peter Zijlstra @ 2022-03-28 11:39 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng, LKML,
	Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf, Hyeonggon Yoo

On Tue, Mar 22, 2022 at 11:57:09AM -0700, Namhyung Kim wrote:
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index ee2fd7614a93..c88deda77cf2 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -644,6 +644,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
>  	}
>  
>  	set_current_state(state);
> +	trace_contention_begin(lock, 0);
>  	for (;;) {
>  		bool first;
>  
> @@ -710,6 +711,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
>  skip_wait:
>  	/* got the lock - cleanup and rejoice! */
>  	lock_acquired(&lock->dep_map, ip);
> +	trace_contention_end(lock, 0);
>  
>  	if (ww_ctx)
>  		ww_mutex_lock_acquired(ww, ww_ctx);

(note: it's possible to get to this trace_contention_end() without ever
having passed a _begin -- fixed in the below)

> @@ -721,6 +723,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
>  err:
>  	__set_current_state(TASK_RUNNING);
>  	__mutex_remove_waiter(lock, &waiter);
> +	trace_contention_end(lock, ret);
>  err_early_kill:
>  	raw_spin_unlock(&lock->wait_lock);
>  	debug_mutex_free_waiter(&waiter);


So there was one thing here, that might or might not be important, but
is somewhat inconsistent with the whole thing. That is, do you want to
include optimistic spinning in the contention time or not?

Because currently you do it sometimes.

Also, if you were to add LCB_F_MUTEX then you could have something like:


--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -602,12 +602,14 @@ __mutex_lock_common(struct mutex *lock,
 	preempt_disable();
 	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
 
+	trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
 	if (__mutex_trylock(lock) ||
 	    mutex_optimistic_spin(lock, ww_ctx, NULL)) {
 		/* got the lock, yay! */
 		lock_acquired(&lock->dep_map, ip);
 		if (ww_ctx)
 			ww_mutex_set_context_fastpath(ww, ww_ctx);
+		trace_contention_end(lock, 0);
 		preempt_enable();
 		return 0;
 	}
@@ -644,7 +646,7 @@ __mutex_lock_common(struct mutex *lock,
 	}
 
 	set_current_state(state);
-	trace_contention_begin(lock, 0);
+	trace_contention_begin(lock, LCB_F_MUTEX);
 	for (;;) {
 		bool first;
 
@@ -684,10 +686,16 @@ __mutex_lock_common(struct mutex *lock,
 		 * state back to RUNNING and fall through the next schedule(),
 		 * or we must see its unlock and acquire.
 		 */
-		if (__mutex_trylock_or_handoff(lock, first) ||
-		    (first && mutex_optimistic_spin(lock, ww_ctx, &waiter)))
+		if (__mutex_trylock_or_handoff(lock, first))
 			break;
 
+		if (first) {
+			trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
+			if (mutex_optimistic_spin(lock, ww_ctx, &waiter))
+				break;
+			trace_contention_begin(lock, LCB_F_MUTEX);
+		}
+
 		raw_spin_lock(&lock->wait_lock);
 	}
 	raw_spin_lock(&lock->wait_lock);
@@ -723,8 +731,8 @@ __mutex_lock_common(struct mutex *lock,
 err:
 	__set_current_state(TASK_RUNNING);
 	__mutex_remove_waiter(lock, &waiter);
-	trace_contention_end(lock, ret);
 err_early_kill:
+	trace_contention_end(lock, ret);
 	raw_spin_unlock(&lock->wait_lock);
 	debug_mutex_free_waiter(&waiter);
 	mutex_release(&lock->dep_map, ip);


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-28 11:29   ` Peter Zijlstra
@ 2022-03-28 17:41     ` Namhyung Kim
  0 siblings, 0 replies; 29+ messages in thread
From: Namhyung Kim @ 2022-03-28 17:41 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng, LKML,
	Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf, Hyeonggon Yoo

Hi Peter,

On Mon, Mar 28, 2022 at 4:29 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Mar 22, 2022 at 11:57:09AM -0700, Namhyung Kim wrote:
> > Adding the lock contention tracepoints in various lock function slow
> > paths.  Note that each arch can define spinlocks differently, so I
> > only added them to the generic qspinlock for now.
> >
> > Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> > ---
> >  kernel/locking/mutex.c        |  3 +++
> >  kernel/locking/percpu-rwsem.c |  3 +++
> >  kernel/locking/qrwlock.c      |  9 +++++++++
> >  kernel/locking/qspinlock.c    |  5 +++++
> >  kernel/locking/rtmutex.c      | 11 +++++++++++
> >  kernel/locking/rwbase_rt.c    |  3 +++
> >  kernel/locking/rwsem.c        |  9 +++++++++
> >  kernel/locking/semaphore.c    | 15 ++++++++++++++-
> >  8 files changed, 57 insertions(+), 1 deletion(-)
>
> I had conflicts in rwsem.c due to Waiman's patches, but that was simple
> enough to resolve. However, I had a good look at the other sites and
> ended up with the below...
>
> Yes, I know I'm the one that suggested the percpu thing, but upon
> looking again it missed the largest part of percpu_down_write(), which
> very much includes that RCU grace period and waiting for the readers to
> bugger off
>
> Also, rwbase_rt was missing the entire READ side -- yes, I see that's
> also covered by the rtmutex.c part, but that's on a different address and
> with different flags, and it's very confusing to not have it annotated.
>
> Anyway, I'll queue this patch with the below folded in for post -rc1.

Thanks for doing this, the changes look good.

Namhyung

>
> ---
>
> --- a/kernel/locking/percpu-rwsem.c
> +++ b/kernel/locking/percpu-rwsem.c
> @@ -155,7 +155,6 @@ static void percpu_rwsem_wait(struct per
>         }
>         spin_unlock_irq(&sem->waiters.lock);
>
> -       trace_contention_begin(sem, LCB_F_PERCPU | (reader ? LCB_F_READ : LCB_F_WRITE));
>         while (wait) {
>                 set_current_state(TASK_UNINTERRUPTIBLE);
>                 if (!smp_load_acquire(&wq_entry.private))
> @@ -163,7 +162,6 @@ static void percpu_rwsem_wait(struct per
>                 schedule();
>         }
>         __set_current_state(TASK_RUNNING);
> -       trace_contention_end(sem, 0);
>  }
>
>  bool __sched __percpu_down_read(struct percpu_rw_semaphore *sem, bool try)
> @@ -174,9 +172,11 @@ bool __sched __percpu_down_read(struct p
>         if (try)
>                 return false;
>
> +       trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_READ);
>         preempt_enable();
>         percpu_rwsem_wait(sem, /* .reader = */ true);
>         preempt_disable();
> +       trace_contention_end(sem, 0);
>
>         return true;
>  }
> @@ -219,6 +219,7 @@ void __sched percpu_down_write(struct pe
>  {
>         might_sleep();
>         rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
> +       trace_contention_begin(sem, LCB_F_PERCPU | LCB_F_WRITE);
>
>         /* Notify readers to take the slow path. */
>         rcu_sync_enter(&sem->rss);
> @@ -240,6 +241,7 @@ void __sched percpu_down_write(struct pe
>
>         /* Wait for all active readers to complete. */
>         rcuwait_wait_event(&sem->writer, readers_active_check(sem), TASK_UNINTERRUPTIBLE);
> +       trace_contention_end(sem, 0);
>  }
>  EXPORT_SYMBOL_GPL(percpu_down_write);
>
> --- a/kernel/locking/qrwlock.c
> +++ b/kernel/locking/qrwlock.c
> @@ -35,7 +35,7 @@ void queued_read_lock_slowpath(struct qr
>         }
>         atomic_sub(_QR_BIAS, &lock->cnts);
>
> -       trace_contention_begin(lock, LCB_F_READ | LCB_F_SPIN);
> +       trace_contention_begin(lock, LCB_F_SPIN | LCB_F_READ);
>
>         /*
>          * Put the reader into the wait queue
> @@ -67,7 +67,7 @@ void queued_write_lock_slowpath(struct q
>  {
>         int cnts;
>
> -       trace_contention_begin(lock, LCB_F_WRITE | LCB_F_SPIN);
> +       trace_contention_begin(lock, LCB_F_SPIN | LCB_F_WRITE);
>
>         /* Put the writer into the wait queue */
>         arch_spin_lock(&lock->wait_lock);
> --- a/kernel/locking/rwbase_rt.c
> +++ b/kernel/locking/rwbase_rt.c
> @@ -112,6 +112,8 @@ static int __sched __rwbase_read_lock(st
>          * Reader2 to call up_read(), which might be unbound.
>          */
>
> +       trace_contention_begin(rwb, LCB_F_RT | LCB_F_READ);
> +
>         /*
>          * For rwlocks this returns 0 unconditionally, so the below
>          * !ret conditionals are optimized out.
> @@ -130,6 +132,8 @@ static int __sched __rwbase_read_lock(st
>         raw_spin_unlock_irq(&rtm->wait_lock);
>         if (!ret)
>                 rwbase_rtmutex_unlock(rtm);
> +
> +       trace_contention_end(rwb, ret);
>         return ret;
>  }
>
> @@ -247,7 +251,7 @@ static int __sched rwbase_write_lock(str
>                 goto out_unlock;
>
>         rwbase_set_and_save_current_state(state);
> -       trace_contention_begin(rwb, LCB_F_WRITE | LCB_F_RT);
> +       trace_contention_begin(rwb, LCB_F_RT | LCB_F_WRITE);
>         for (;;) {
>                 /* Optimized out for rwlocks */
>                 if (rwbase_signal_pending_state(state, current)) {


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-28 11:39   ` Peter Zijlstra
@ 2022-03-28 17:48     ` Namhyung Kim
  2022-03-30 11:08       ` Peter Zijlstra
  0 siblings, 1 reply; 29+ messages in thread
From: Namhyung Kim @ 2022-03-28 17:48 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng, LKML,
	Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf, Hyeonggon Yoo

On Mon, Mar 28, 2022 at 4:39 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Mar 22, 2022 at 11:57:09AM -0700, Namhyung Kim wrote:
> > diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> > index ee2fd7614a93..c88deda77cf2 100644
> > --- a/kernel/locking/mutex.c
> > +++ b/kernel/locking/mutex.c
> > @@ -644,6 +644,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
> >       }
> >
> >       set_current_state(state);
> > +     trace_contention_begin(lock, 0);
> >       for (;;) {
> >               bool first;
> >
> > @@ -710,6 +711,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
> >  skip_wait:
> >       /* got the lock - cleanup and rejoice! */
> >       lock_acquired(&lock->dep_map, ip);
> > +     trace_contention_end(lock, 0);
> >
> >       if (ww_ctx)
> >               ww_mutex_lock_acquired(ww, ww_ctx);
>
> (note: it's possible to get to this trace_contention_end() without ever
> having passed a _begin -- fixed in the below)
>
> > @@ -721,6 +723,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
> >  err:
> >       __set_current_state(TASK_RUNNING);
> >       __mutex_remove_waiter(lock, &waiter);
> > +     trace_contention_end(lock, ret);
> >  err_early_kill:
> >       raw_spin_unlock(&lock->wait_lock);
> >       debug_mutex_free_waiter(&waiter);
>
>
> So there was one thing here, that might or might not be important, but
> is somewhat inconsistent with the whole thing. That is, do you want to
> include optimistic spinning in the contention time or not?

Yes, this was in a grey area and would create a begin -> begin -> end
path for mutexes.  But I think tools can handle it with the flags.
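
For illustration, with the change quoted below, a first waiter that
falls back from optimistic spinning to the wait list would emit roughly
the following sequence for a single acquisition:

	trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
	trace_contention_begin(lock, LCB_F_MUTEX);
	trace_contention_end(lock, 0);

so a tool can treat a later begin on the same lock and task as
superseding the earlier one.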

>
> Because currently you do it sometimes.
>
> Also, if you were to add LCB_F_MUTEX then you could have something like:

Yep, I'm ok with having the mutex flag.  Do you want me to send
v5 with this change or would you like to do it by yourself?

Thanks,
Namhyung


>
>
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -602,12 +602,14 @@ __mutex_lock_common(struct mutex *lock,
>         preempt_disable();
>         mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
>
> +       trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
>         if (__mutex_trylock(lock) ||
>             mutex_optimistic_spin(lock, ww_ctx, NULL)) {
>                 /* got the lock, yay! */
>                 lock_acquired(&lock->dep_map, ip);
>                 if (ww_ctx)
>                         ww_mutex_set_context_fastpath(ww, ww_ctx);
> +               trace_contention_end(lock, 0);
>                 preempt_enable();
>                 return 0;
>         }
> @@ -644,7 +646,7 @@ __mutex_lock_common(struct mutex *lock,
>         }
>
>         set_current_state(state);
> -       trace_contention_begin(lock, 0);
> +       trace_contention_begin(lock, LCB_F_MUTEX);
>         for (;;) {
>                 bool first;
>
> @@ -684,10 +686,16 @@ __mutex_lock_common(struct mutex *lock,
>                  * state back to RUNNING and fall through the next schedule(),
>                  * or we must see its unlock and acquire.
>                  */
> -               if (__mutex_trylock_or_handoff(lock, first) ||
> -                   (first && mutex_optimistic_spin(lock, ww_ctx, &waiter)))
> +               if (__mutex_trylock_or_handoff(lock, first))
>                         break;
>
> +               if (first) {
> +                       trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
> +                       if (mutex_optimistic_spin(lock, ww_ctx, &waiter))
> +                               break;
> +                       trace_contention_begin(lock, LCB_F_MUTEX);
> +               }
> +
>                 raw_spin_lock(&lock->wait_lock);
>         }
>         raw_spin_lock(&lock->wait_lock);
> @@ -723,8 +731,8 @@ __mutex_lock_common(struct mutex *lock,
>  err:
>         __set_current_state(TASK_RUNNING);
>         __mutex_remove_waiter(lock, &waiter);
> -       trace_contention_end(lock, ret);
>  err_early_kill:
> +       trace_contention_end(lock, ret);
>         raw_spin_unlock(&lock->wait_lock);
>         debug_mutex_free_waiter(&waiter);
>         mutex_release(&lock->dep_map, ip);


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-28 17:48     ` Namhyung Kim
@ 2022-03-30 11:08       ` Peter Zijlstra
  2022-03-30 19:03         ` Namhyung Kim
  0 siblings, 1 reply; 29+ messages in thread
From: Peter Zijlstra @ 2022-03-30 11:08 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng, LKML,
	Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf, Hyeonggon Yoo

On Mon, Mar 28, 2022 at 10:48:59AM -0700, Namhyung Kim wrote:
> > Also, if you were to add LCB_F_MUTEX then you could have something like:
> 
> Yep, I'm ok with having the mutex flag.  Do you want me to send
> v5 with this change or would you like to do it by yourself?

I'll frob my thing on top. No need to repost.


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-30 11:08       ` Peter Zijlstra
@ 2022-03-30 19:03         ` Namhyung Kim
  2022-03-31 11:59           ` Peter Zijlstra
  0 siblings, 1 reply; 29+ messages in thread
From: Namhyung Kim @ 2022-03-30 19:03 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng, LKML,
	Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf, Hyeonggon Yoo

On Wed, Mar 30, 2022 at 4:09 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Mon, Mar 28, 2022 at 10:48:59AM -0700, Namhyung Kim wrote:
> > > Also, if you were to add LCB_F_MUTEX then you could have something like:
> >
> > Yep, I'm ok with having the mutex flag.  Do you want me to send
> > v5 with this change or would you like to do it by yourself?
>
> I'll frob my thing on top. No need to repost.

Cool, thanks for doing this!


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-30 19:03         ` Namhyung Kim
@ 2022-03-31 11:59           ` Peter Zijlstra
  2022-04-01  6:26             ` Namhyung Kim
  0 siblings, 1 reply; 29+ messages in thread
From: Peter Zijlstra @ 2022-03-31 11:59 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng, LKML,
	Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf, Hyeonggon Yoo

On Wed, Mar 30, 2022 at 12:03:06PM -0700, Namhyung Kim wrote:
> On Wed, Mar 30, 2022 at 4:09 AM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Mon, Mar 28, 2022 at 10:48:59AM -0700, Namhyung Kim wrote:
> > > > Also, if you were to add LCB_F_MUTEX then you could have something like:
> > >
> > > Yep, I'm ok with having the mutex flag.  Do you want me to send
> > > v5 with this change or would you like to do it by yourself?
> >
> > I'll frob my thing on top. No need to repost.
> 
> Cool, thanks for doing this!

I've since pushed out the lot to:

  git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core

It builds, but I've not actually used it. Much appreciated if you could
test.


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-31 11:59           ` Peter Zijlstra
@ 2022-04-01  6:26             ` Namhyung Kim
  2022-04-01  9:25               ` Peter Zijlstra
  0 siblings, 1 reply; 29+ messages in thread
From: Namhyung Kim @ 2022-04-01  6:26 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng, LKML,
	Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf, Hyeonggon Yoo

On Thu, Mar 31, 2022 at 01:59:16PM +0200, Peter Zijlstra wrote:
> I've since pushed out the lot to:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core
> 
> It builds, but I've not actually used it. Much appreciated if you could
> test.
> 

I've tested it and it worked well.  Thanks for your work!

And we need to add the below too..

Thanks,
Namhyung

----8<----

diff --git a/include/trace/events/lock.h b/include/trace/events/lock.h
index db5bdbb9b9c0..9463a93132c3 100644
--- a/include/trace/events/lock.h
+++ b/include/trace/events/lock.h
@@ -114,7 +114,8 @@ TRACE_EVENT(contention_begin,
 				{ LCB_F_READ,		"READ" },
 				{ LCB_F_WRITE,		"WRITE" },
 				{ LCB_F_RT,		"RT" },
-				{ LCB_F_PERCPU,		"PERCPU" }
+				{ LCB_F_PERCPU,		"PERCPU" },
+				{ LCB_F_MUTEX,		"MUTEX" }
 			  ))
 );
 


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-04-01  6:26             ` Namhyung Kim
@ 2022-04-01  9:25               ` Peter Zijlstra
  0 siblings, 0 replies; 29+ messages in thread
From: Peter Zijlstra @ 2022-04-01  9:25 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng, LKML,
	Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf, Hyeonggon Yoo

On Thu, Mar 31, 2022 at 11:26:17PM -0700, Namhyung Kim wrote:
> On Thu, Mar 31, 2022 at 01:59:16PM +0200, Peter Zijlstra wrote:
> > I've since pushed out the lot to:
> > 
> >   git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core
> > 
> > It builds, but I've not actually used it. Much appreciated if you could
> > test.
> > 
> 
> I've tested it and it worked well.  Thanks for your work!
> 
> And we need to add the below too..

Thanks

> ----8<----
> 
> diff --git a/include/trace/events/lock.h b/include/trace/events/lock.h
> index db5bdbb9b9c0..9463a93132c3 100644
> --- a/include/trace/events/lock.h
> +++ b/include/trace/events/lock.h
> @@ -114,7 +114,8 @@ TRACE_EVENT(contention_begin,
>  				{ LCB_F_READ,		"READ" },
>  				{ LCB_F_WRITE,		"WRITE" },
>  				{ LCB_F_RT,		"RT" },
> -				{ LCB_F_PERCPU,		"PERCPU" }
> +				{ LCB_F_PERCPU,		"PERCPU" },
> +				{ LCB_F_MUTEX,		"MUTEX" }
>  			  ))
>  );

Duh, indeed, folded!


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-22 12:59               ` Steven Rostedt
@ 2022-03-22 16:39                 ` Namhyung Kim
  0 siblings, 0 replies; 29+ messages in thread
From: Namhyung Kim @ 2022-03-22 16:39 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon,
	Waiman Long, LKML, Thomas Gleixner, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf

On Tue, Mar 22, 2022 at 5:59 AM Steven Rostedt <rostedt@goodmis.org> wrote:
>
> On Mon, 21 Mar 2022 22:31:30 -0700
> Namhyung Kim <namhyung@kernel.org> wrote:
>
> > > Thanks for the info.  But it's unclear to me if it provides the custom
> > > event with the same or different name.  Can I use both of the original
> > > and the custom events at the same time?
>
> Sorry, missed your previous question.

No problem!

>
> >
> > I've read the code and understood that it's a separate event that can
> > be used together.  Then I think we can leave the tracepoint with the
> > return value and let users customize it for their needs later.
>
> Right, thanks for looking deeper at it.

And thanks for your review.  I'll post a v4 soon.

Thanks,
Namhyung


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-22  5:31             ` Namhyung Kim
@ 2022-03-22 12:59               ` Steven Rostedt
  2022-03-22 16:39                 ` Namhyung Kim
  0 siblings, 1 reply; 29+ messages in thread
From: Steven Rostedt @ 2022-03-22 12:59 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon,
	Waiman Long, LKML, Thomas Gleixner, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf

On Mon, 21 Mar 2022 22:31:30 -0700
Namhyung Kim <namhyung@kernel.org> wrote:

> > Thanks for the info.  But it's unclear to me if it provides the custom
> > event with the same or different name.  Can I use both of the original
> > and the custom events at the same time?  

Sorry, missed your previous question.

> 
> I've read the code and understood that it's a separate event that can
> be used together.  Then I think we can leave the tracepoint with the
> return value and let users customize it for their needs later.

Right, thanks for looking deeper at it.

-- Steve


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-19  0:11           ` Namhyung Kim
@ 2022-03-22  5:31             ` Namhyung Kim
  2022-03-22 12:59               ` Steven Rostedt
  0 siblings, 1 reply; 29+ messages in thread
From: Namhyung Kim @ 2022-03-22  5:31 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon,
	Waiman Long, LKML, Thomas Gleixner, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf

On Fri, Mar 18, 2022 at 5:11 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> On Fri, Mar 18, 2022 at 3:07 PM Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> > On Fri, 18 Mar 2022 14:55:27 -0700
> > Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > > > > This looks a little ugly ;-/ Maybe we can rename __down_common() to
> > > > > ___down_common() and implement __down_common() as:
> > > > >
> > > > >       static inline int __sched __down_common(...)
> > > > >       {
> > > > >               int ret;
> > > > >               trace_contention_begin(sem, 0);
> > > > >               ret = ___down_common(...);
> > > > >               trace_contention_end(sem, ret);
> > > > >               return ret;
> > > > >       }
> > > > >
> > > > > Thoughts?
> > > >
> > > > Yeah, that works, except I think he wants a few extra
> > > > __set_current_state()'s like so:
> > >
> > > Not anymore, I decided not to because of noise in the task state.
> > >
> > > Also I'm considering two tracepoints for the return path to reduce
> > > the buffer size as Mathieu suggested.  Normally it'd return with 0
> > > so we can ignore it in the contention_end.  For non-zero cases,
> > > we can add a new tracepoint to save the return value.
> >
> > I don't think you need two tracepoints, but one that you can override.
> >
> > We have eprobes that let you create a trace event on top of a current trace
> > event that can limit or extend what is traced in the buffer.
> >
> > And I also have custom events that can be placed on top of any existing
> tracepoint that has full access to what is sent into the tracepoint, not
> just what is available to the trace event:
> >
> >   https://lore.kernel.org/all/20220312232551.181178712@goodmis.org/
>
> Thanks for the info.  But it's unclear to me if it provides the custom
> event with the same or different name.  Can I use both of the original
> and the custom events at the same time?

I've read the code and understood that it's a separate event that can
be used together.  Then I think we can leave the tracepoint with the
return value and let users customize it for their needs later.

Thanks,
Namhyung


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-18 22:07         ` Steven Rostedt
@ 2022-03-19  0:11           ` Namhyung Kim
  2022-03-22  5:31             ` Namhyung Kim
  0 siblings, 1 reply; 29+ messages in thread
From: Namhyung Kim @ 2022-03-19  0:11 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon,
	Waiman Long, LKML, Thomas Gleixner, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf

On Fri, Mar 18, 2022 at 3:07 PM Steven Rostedt <rostedt@goodmis.org> wrote:
>
> On Fri, 18 Mar 2022 14:55:27 -0700
> Namhyung Kim <namhyung@kernel.org> wrote:
>
> > > > This looks a little ugly ;-/ Maybe we can rename __down_common() to
> > > > ___down_common() and implement __down_common() as:
> > > >
> > > >       static inline int __sched __down_common(...)
> > > >       {
> > > >               int ret;
> > > >               trace_contention_begin(sem, 0);
> > > >               ret = ___down_common(...);
> > > >               trace_contention_end(sem, ret);
> > > >               return ret;
> > > >       }
> > > >
> > > > Thoughts?
> > >
> > > Yeah, that works, except I think he wants a few extra
> > > __set_current_state()'s like so:
> >
> > Not anymore, I decided not to because of noise in the task state.
> >
> > Also I'm considering two tracepoints for the return path to reduce
> > the buffer size as Mathieu suggested.  Normally it'd return with 0
> > so we can ignore it in the contention_end.  For non-zero cases,
> > we can add a new tracepoint to save the return value.
>
> I don't think you need two tracepoints, but one that you can override.
>
> We have eprobes that let you create a trace event on top of a current trace
> event that can limit or extend what is traced in the buffer.
>
> And I also have custom events that can be placed on top of any existing
> tracepoint that has full access to what is sent into the tracepoint, not
> just what is available to the trace event:
>
>   https://lore.kernel.org/all/20220312232551.181178712@goodmis.org/

Thanks for the info.  But it's unclear to me if it provides the custom
event with the same or different name.  Can I use both of the original
and the custom events at the same time?

Thanks,
Namhyung


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-18 21:55       ` Namhyung Kim
@ 2022-03-18 22:07         ` Steven Rostedt
  2022-03-19  0:11           ` Namhyung Kim
  0 siblings, 1 reply; 29+ messages in thread
From: Steven Rostedt @ 2022-03-18 22:07 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Peter Zijlstra, Boqun Feng, Ingo Molnar, Will Deacon,
	Waiman Long, LKML, Thomas Gleixner, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf

On Fri, 18 Mar 2022 14:55:27 -0700
Namhyung Kim <namhyung@kernel.org> wrote:

> > > This looks a little ugly ;-/ Maybe we can rename __down_common() to
> > > ___down_common() and implement __down_common() as:
> > >
> > >       static inline int __sched __down_common(...)
> > >       {
> > >               int ret;
> > >               trace_contention_begin(sem, 0);
> > >               ret = ___down_common(...);
> > >               trace_contention_end(sem, ret);
> > >               return ret;
> > >       }
> > >
> > > Thoughts?  
> >
> > Yeah, that works, except I think he wants a few extra
> > __set_current_state()'s like so:  
> 
> Not anymore, I decided not to because of noise in the task state.
> 
> Also I'm considering two tracepoints for the return path to reduce
> the buffer size as Mathieu suggested.  Normally it'd return with 0
> so we can ignore it in the contention_end.  For non-zero cases,
> we can add a new tracepoint to save the return value.

I don't think you need two tracepoints, but one that you can override.

We have eprobes that let you create a trace event on top of a current trace
event that can limit or extend what is traced in the buffer.

And I also have custom events that can be placed on top of any existing
tracepoint that has full access to what is sent into the tracepoint, not
just what is available to the trace event:

  https://lore.kernel.org/all/20220312232551.181178712@goodmis.org/

-- Steve


* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-18 16:43     ` Peter Zijlstra
@ 2022-03-18 21:55       ` Namhyung Kim
  2022-03-18 22:07         ` Steven Rostedt
  0 siblings, 1 reply; 29+ messages in thread
From: Namhyung Kim @ 2022-03-18 21:55 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Boqun Feng, Ingo Molnar, Will Deacon, Waiman Long, LKML,
	Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf

Hello,

On Fri, Mar 18, 2022 at 9:43 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Mar 18, 2022 at 08:55:32PM +0800, Boqun Feng wrote:
> > On Wed, Mar 16, 2022 at 03:45:48PM -0700, Namhyung Kim wrote:
> > [...]
> > > @@ -209,6 +210,7 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
> > >                                                             long timeout)
> > >  {
> > >     struct semaphore_waiter waiter;
> > > +   bool tracing = false;
> > >
> > >     list_add_tail(&waiter.list, &sem->wait_list);
> > >     waiter.task = current;
> > > @@ -220,18 +222,28 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
> > >             if (unlikely(timeout <= 0))
> > >                     goto timed_out;
> > >             __set_current_state(state);
> > > +           if (!tracing) {
> > > +                   trace_contention_begin(sem, 0);
> >
> > This looks a little ugly ;-/ Maybe we can rename __down_common() to
> > ___down_common() and implement __down_common() as:
> >
> >       static inline int __sched __down_common(...)
> >       {
> >               int ret;
> >               trace_contention_begin(sem, 0);
> >               ret = ___down_common(...);
> >               trace_contention_end(sem, ret);
> >               return ret;
> >       }
> >
> > Thoughts?
>
> Yeah, that works, except I think he wants a few extra
> __set_current_state()'s like so:

Not anymore, I decided not to because of noise in the task state.

Also I'm considering two tracepoints for the return path to reduce
the buffer size as Mathieu suggested.  Normally it'd return with 0
so we can ignore it in the contention_end.  For non-zero cases,
we can add a new tracepoint to save the return value.

Thanks,
Namhyung

>
> diff --git a/kernel/locking/semaphore.c b/kernel/locking/semaphore.c
> index 9ee381e4d2a4..e2049a7e0ea4 100644
> --- a/kernel/locking/semaphore.c
> +++ b/kernel/locking/semaphore.c
> @@ -205,8 +205,7 @@ struct semaphore_waiter {
>   * constant, and thus optimised away by the compiler.  Likewise the
>   * 'timeout' parameter for the cases without timeouts.
>   */
> -static inline int __sched __down_common(struct semaphore *sem, long state,
> -                                                               long timeout)
> +static __always_inline int ___down_common(struct semaphore *sem, long state, long timeout)
>  {
>         struct semaphore_waiter waiter;
>
> @@ -227,15 +226,28 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
>                         return 0;
>         }
>
> - timed_out:
> +timed_out:
>         list_del(&waiter.list);
>         return -ETIME;
>
> - interrupted:
> +interrupted:
>         list_del(&waiter.list);
>         return -EINTR;
>  }
>
> +static __always_inline int __down_common(struct semaphore *sem, long state, long timeout)
> +{
> +       int ret;
> +
> +       __set_current_state(state);
> +       trace_contention_begin(sem, 0);
> +       ret = ___down_common(sem, state, timeout);
> +       __set_current_state(TASK_RUNNING);
> +       trace_contention_end(sem, ret);
> +
> +       return ret;
> +}
> +
>  static noinline void __sched __down(struct semaphore *sem)
>  {
>         __down_common(sem, TASK_UNINTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-17 18:19   ` Hyeonggon Yoo
@ 2022-03-18 21:43     ` Namhyung Kim
  0 siblings, 0 replies; 29+ messages in thread
From: Namhyung Kim @ 2022-03-18 21:43 UTC (permalink / raw)
  To: Hyeonggon Yoo
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
	Boqun Feng, LKML, Thomas Gleixner, Steven Rostedt,
	Byungchul Park, Paul E. McKenney, Mathieu Desnoyers,
	Arnd Bergmann, Radoslaw Burny, linux-arch, bpf

On Thu, Mar 17, 2022 at 11:19 AM Hyeonggon Yoo <42.hyeyoo@gmail.com> wrote:
>
> On Wed, Mar 16, 2022 at 03:45:48PM -0700, Namhyung Kim wrote:
> > Adding the lock contention tracepoints in various lock function slow
> > paths.  Note that each arch can define spinlocks differently, so I only
> > added it to the generic qspinlock for now.
> >
> > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> > ---
> >  kernel/locking/mutex.c        |  3 +++
> >  kernel/locking/percpu-rwsem.c |  3 +++
> >  kernel/locking/qrwlock.c      |  9 +++++++++
> >  kernel/locking/qspinlock.c    |  5 +++++
> >  kernel/locking/rtmutex.c      | 11 +++++++++++
> >  kernel/locking/rwbase_rt.c    |  3 +++
> >  kernel/locking/rwsem.c        |  9 +++++++++
> >  kernel/locking/semaphore.c    | 14 +++++++++++++-
> >  8 files changed, 56 insertions(+), 1 deletion(-)
> >
>
> [...]
>
> > diff --git a/kernel/locking/semaphore.c b/kernel/locking/semaphore.c
> > index 9ee381e4d2a4..e3c19668dfee 100644
> > --- a/kernel/locking/semaphore.c
> > +++ b/kernel/locking/semaphore.c
> > @@ -32,6 +32,7 @@
> >  #include <linux/semaphore.h>
> >  #include <linux/spinlock.h>
> >  #include <linux/ftrace.h>
> > +#include <trace/events/lock.h>
> >
> >  static noinline void __down(struct semaphore *sem);
> >  static noinline int __down_interruptible(struct semaphore *sem);
> > @@ -209,6 +210,7 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
> >                                                               long timeout)
> >  {
> >       struct semaphore_waiter waiter;
> > +     bool tracing = false;
> >
> >       list_add_tail(&waiter.list, &sem->wait_list);
> >       waiter.task = current;
> > @@ -220,18 +222,28 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
> >               if (unlikely(timeout <= 0))
> >                       goto timed_out;
> >               __set_current_state(state);
> > +             if (!tracing) {
> > +                     trace_contention_begin(sem, 0);
> > +                     tracing = true;
> > +             }
> >               raw_spin_unlock_irq(&sem->lock);
> >               timeout = schedule_timeout(timeout);
> >               raw_spin_lock_irq(&sem->lock);
> > -             if (waiter.up)
> > +             if (waiter.up) {
> > +                     trace_contention_end(sem, 0);
> >                       return 0;
> > +             }
> >       }
> >
> >   timed_out:
> > +     if (tracing)
> > +             trace_contention_end(sem, -ETIME);
> >       list_del(&waiter.list);
> >       return -ETIME;
> >
> >   interrupted:
> > +     if (tracing)
> > +             trace_contention_end(sem, -EINTR);
> >       list_del(&waiter.list);
> >       return -EINTR;
> >  }
>
> why not simply remove the tracing variable and call trace_contention_begin()
> earlier, like in rwsem? We can ignore it if ret != 0.

Right, will change.  But we should not ignore the return value.

Thanks,
Namhyung

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-17 13:45   ` Mathieu Desnoyers
  2022-03-17 16:10     ` Steven Rostedt
@ 2022-03-18 21:34     ` Namhyung Kim
  1 sibling, 0 replies; 29+ messages in thread
From: Namhyung Kim @ 2022-03-18 21:34 UTC (permalink / raw)
  To: Mathieu Desnoyers
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
	Boqun Feng, linux-kernel, Thomas Gleixner, rostedt,
	Byungchul Park, paulmck, Arnd Bergmann, Radoslaw Burny,
	linux-arch, bpf

On Thu, Mar 17, 2022 at 6:45 AM Mathieu Desnoyers
<mathieu.desnoyers@efficios.com> wrote:
>
> ----- On Mar 16, 2022, at 6:45 PM, Namhyung Kim namhyung@kernel.org wrote:
>
> > Adding the lock contention tracepoints in various lock function slow
> > paths.  Note that each arch can define spinlocks differently, so I only
> > added it to the generic qspinlock for now.
> >
> > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> > ---
> > kernel/locking/mutex.c        |  3 +++
> > kernel/locking/percpu-rwsem.c |  3 +++
> > kernel/locking/qrwlock.c      |  9 +++++++++
> > kernel/locking/qspinlock.c    |  5 +++++
> > kernel/locking/rtmutex.c      | 11 +++++++++++
> > kernel/locking/rwbase_rt.c    |  3 +++
> > kernel/locking/rwsem.c        |  9 +++++++++
> > kernel/locking/semaphore.c    | 14 +++++++++++++-
> > 8 files changed, 56 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> > index ee2fd7614a93..c88deda77cf2 100644
> > --- a/kernel/locking/mutex.c
> > +++ b/kernel/locking/mutex.c
> > @@ -644,6 +644,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state,
> > unsigned int subclas
> >       }
> >
> >       set_current_state(state);
> > +     trace_contention_begin(lock, 0);
>
> This should be LCB_F_SPIN rather than the hardcoded 0.

I don't think so.  LCB_F_SPIN is for spinning locks, indicating that
the waiter spins on a CPU, and its value is not 0.
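
For reference, the flag bits from patch 1/2 classify the lock type
roughly as below; the bit values are my illustration here, the
authoritative definitions live in include/trace/events/lock.h:

/* illustrative only; see patch 1/2 for the real definitions */
#define LCB_F_SPIN	(1U << 0)	/* busy-spinning on a CPU (qspinlock, qrwlock) */
#define LCB_F_READ	(1U << 1)	/* read side of a reader-writer lock */
#define LCB_F_WRITE	(1U << 2)	/* write side of a reader-writer lock */
#define LCB_F_RT	(1U << 3)	/* RT variants (rtmutex, rwbase_rt) */
#define LCB_F_PERCPU	(1U << 4)	/* percpu rwsem */

A mutex passes 0 here, i.e. no flag bits set, which is why the 0 above
is not the same thing as LCB_F_SPIN.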

>
> >       for (;;) {
> >               bool first;
> >
> > @@ -710,6 +711,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state,
> > unsigned int subclas
> > skip_wait:
> >       /* got the lock - cleanup and rejoice! */
> >       lock_acquired(&lock->dep_map, ip);
> > +     trace_contention_end(lock, 0);
> >
> >       if (ww_ctx)
> >               ww_mutex_lock_acquired(ww, ww_ctx);
> > @@ -721,6 +723,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state,
> > unsigned int subclas
> > err:
> >       __set_current_state(TASK_RUNNING);
> >       __mutex_remove_waiter(lock, &waiter);
> > +     trace_contention_end(lock, ret);
> > err_early_kill:
> >       raw_spin_unlock(&lock->wait_lock);
> >       debug_mutex_free_waiter(&waiter);
> > diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
> > index c9fdae94e098..833043613af6 100644
> > --- a/kernel/locking/percpu-rwsem.c
> > +++ b/kernel/locking/percpu-rwsem.c
> > @@ -9,6 +9,7 @@
> > #include <linux/sched/task.h>
> > #include <linux/sched/debug.h>
> > #include <linux/errno.h>
> > +#include <trace/events/lock.h>
> >
> > int __percpu_init_rwsem(struct percpu_rw_semaphore *sem,
> >                       const char *name, struct lock_class_key *key)
> > @@ -154,6 +155,7 @@ static void percpu_rwsem_wait(struct percpu_rw_semaphore
> > *sem, bool reader)
> >       }
> >       spin_unlock_irq(&sem->waiters.lock);
> >
> > +     trace_contention_begin(sem, LCB_F_PERCPU | (reader ? LCB_F_READ :
> > LCB_F_WRITE));
> >       while (wait) {
> >               set_current_state(TASK_UNINTERRUPTIBLE);
> >               if (!smp_load_acquire(&wq_entry.private))
> > @@ -161,6 +163,7 @@ static void percpu_rwsem_wait(struct percpu_rw_semaphore
> > *sem, bool reader)
> >               schedule();
> >       }
> >       __set_current_state(TASK_RUNNING);
> > +     trace_contention_end(sem, 0);
>
> So for the reader-write locks, and percpu rwlocks, the "trace contention end" will always
> have ret=0. Likewise for qspinlock, qrwlock, and rtlock. It seems to be a waste of trace
> buffer space to always have space for a return value that is always 0.

Right, I think it'd be better to have a new tracepoint for the error cases
and get rid of the return value in the contention_end.

Like contention_error or contention_return ?

>
> Sorry if I missed prior discussions of that topic, but why introduce this single
> "trace contention begin/end" muxer tracepoint with flags rather than per-locking-type
> tracepoint ? The per-locking-type tracepoint could be tuned to only have the fields
> that are needed for each locking type.

No prior discussions on that topic and thanks for bringing it out.

Having per-locking-type tracepoints would help if you want to filter
out specific types of locks efficiently.  Otherwise it's simpler for
users to have a single set of tracepoints handling all locking types,
like the existing lockdep tracepoints do.

As it's in a contended path, I think it's acceptable to be a little
less efficient, and the flags can tell which type of lock is being
traced, so you can still filter by lock type anyway.

Thanks,
Namhyung

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-18 12:55   ` Boqun Feng
  2022-03-18 13:24     ` Hyeonggon Yoo
@ 2022-03-18 16:43     ` Peter Zijlstra
  2022-03-18 21:55       ` Namhyung Kim
  1 sibling, 1 reply; 29+ messages in thread
From: Peter Zijlstra @ 2022-03-18 16:43 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Namhyung Kim, Ingo Molnar, Will Deacon, Waiman Long, LKML,
	Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf

On Fri, Mar 18, 2022 at 08:55:32PM +0800, Boqun Feng wrote:
> On Wed, Mar 16, 2022 at 03:45:48PM -0700, Namhyung Kim wrote:
> [...]
> > @@ -209,6 +210,7 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
> >  								long timeout)
> >  {
> >  	struct semaphore_waiter waiter;
> > +	bool tracing = false;
> >  
> >  	list_add_tail(&waiter.list, &sem->wait_list);
> >  	waiter.task = current;
> > @@ -220,18 +222,28 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
> >  		if (unlikely(timeout <= 0))
> >  			goto timed_out;
> >  		__set_current_state(state);
> > +		if (!tracing) {
> > +			trace_contention_begin(sem, 0);
> 
> This looks a little ugly ;-/ Maybe we can rename __down_common() to
> ___down_common() and implement __down_common() as:
> 
> 	static inline int __sched __down_common(...)
> 	{
> 		int ret;
> 		trace_contention_begin(sem, 0);
> 		ret = ___down_common(...);
> 		trace_contention_end(sem, ret);
> 		return ret;
> 	}
> 
> Thoughts?

Yeah, that works, except I think he wants a few extra
__set_current_state()'s like so:

diff --git a/kernel/locking/semaphore.c b/kernel/locking/semaphore.c
index 9ee381e4d2a4..e2049a7e0ea4 100644
--- a/kernel/locking/semaphore.c
+++ b/kernel/locking/semaphore.c
@@ -205,8 +205,7 @@ struct semaphore_waiter {
  * constant, and thus optimised away by the compiler.  Likewise the
  * 'timeout' parameter for the cases without timeouts.
  */
-static inline int __sched __down_common(struct semaphore *sem, long state,
-								long timeout)
+static __always_inline int ___down_common(struct semaphore *sem, long state, long timeout)
 {
 	struct semaphore_waiter waiter;
 
@@ -227,15 +226,28 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
 			return 0;
 	}
 
- timed_out:
+timed_out:
 	list_del(&waiter.list);
 	return -ETIME;
 
- interrupted:
+interrupted:
 	list_del(&waiter.list);
 	return -EINTR;
 }
 
+static __always_inline int __down_common(struct semaphore *sem, long state, long timeout)
+{
+	int ret;
+
+	__set_current_state(state);
+	trace_contention_begin(sem, 0);
+	ret = ___down_common(sem, state, timeout);
+	__set_current_state(TASK_RUNNING);
+	trace_contention_end(sem, ret);
+
+	return ret;
+}
+
 static noinline void __sched __down(struct semaphore *sem)
 {
 	__down_common(sem, TASK_UNINTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-18 13:24     ` Hyeonggon Yoo
@ 2022-03-18 13:28       ` Hyeonggon Yoo
  0 siblings, 0 replies; 29+ messages in thread
From: Hyeonggon Yoo @ 2022-03-18 13:28 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Namhyung Kim, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Waiman Long, LKML, Thomas Gleixner, Steven Rostedt,
	Byungchul Park, Paul E. McKenney, Mathieu Desnoyers,
	Arnd Bergmann, Radoslaw Burny, linux-arch, bpf

On Fri, Mar 18, 2022 at 01:24:24PM +0000, Hyeonggon Yoo wrote:
> On Fri, Mar 18, 2022 at 08:55:32PM +0800, Boqun Feng wrote:
> > On Wed, Mar 16, 2022 at 03:45:48PM -0700, Namhyung Kim wrote:
> > [...]
> > > @@ -209,6 +210,7 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
> > >  								long timeout)
> > >  {
> > >  	struct semaphore_waiter waiter;
> > > +	bool tracing = false;
> > >  
> > >  	list_add_tail(&waiter.list, &sem->wait_list);
> > >  	waiter.task = current;
> > > @@ -220,18 +222,28 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
> > >  		if (unlikely(timeout <= 0))
> > >  			goto timed_out;
> > >  		__set_current_state(state);
> > > +		if (!tracing) {
> > > +			trace_contention_begin(sem, 0);
> > 
> > This looks a little ugly ;-/
> 
> I agree this can be simplified a bit.
> 
> > Maybe we can rename __down_common() to
> > ___down_common() and implement __down_common() as:
> > 
> > 	static inline int __sched __down_common(...)
> > 	{
> > 		int ret;
> > 		trace_contention_begin(sem, 0);
> > 		ret = ___down_common(...);
> > 		trace_contention_end(sem, ret);
> > 		return ret;
> > 	}
> > 
> > Thoughts?
> >
> 
> But IMO inlining tracepoints is generally not a good idea.
> It will increase kernel size a lot.
>

Ah, it's already inlined. Sorry.

> > Regards,
> > Boqun
> > 
> > > +			tracing = true;
> > > +		}
> > >  		raw_spin_unlock_irq(&sem->lock);
> > >  		timeout = schedule_timeout(timeout);
> > >  		raw_spin_lock_irq(&sem->lock);
> > > -		if (waiter.up)
> > > +		if (waiter.up) {
> > > +			trace_contention_end(sem, 0);
> > >  			return 0;
> > > +		}
> > >  	}
> > >  
> > >   timed_out:
> > > +	if (tracing)
> > > +		trace_contention_end(sem, -ETIME);
> > >  	list_del(&waiter.list);
> > >  	return -ETIME;
> > >  
> > >   interrupted:
> > > +	if (tracing)
> > > +		trace_contention_end(sem, -EINTR);
> > >  	list_del(&waiter.list);
> > >  	return -EINTR;
> > >  }
> > > -- 
> > > 2.35.1.894.gb6a874cedc-goog
> > > 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-18 12:55   ` Boqun Feng
@ 2022-03-18 13:24     ` Hyeonggon Yoo
  2022-03-18 13:28       ` Hyeonggon Yoo
  2022-03-18 16:43     ` Peter Zijlstra
  1 sibling, 1 reply; 29+ messages in thread
From: Hyeonggon Yoo @ 2022-03-18 13:24 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Namhyung Kim, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Waiman Long, LKML, Thomas Gleixner, Steven Rostedt,
	Byungchul Park, Paul E. McKenney, Mathieu Desnoyers,
	Arnd Bergmann, Radoslaw Burny, linux-arch, bpf

On Fri, Mar 18, 2022 at 08:55:32PM +0800, Boqun Feng wrote:
> On Wed, Mar 16, 2022 at 03:45:48PM -0700, Namhyung Kim wrote:
> [...]
> > @@ -209,6 +210,7 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
> >  								long timeout)
> >  {
> >  	struct semaphore_waiter waiter;
> > +	bool tracing = false;
> >  
> >  	list_add_tail(&waiter.list, &sem->wait_list);
> >  	waiter.task = current;
> > @@ -220,18 +222,28 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
> >  		if (unlikely(timeout <= 0))
> >  			goto timed_out;
> >  		__set_current_state(state);
> > +		if (!tracing) {
> > +			trace_contention_begin(sem, 0);
> 
> This looks a little ugly ;-/

I agree this can be simplified a bit.

> Maybe we can rename __down_common() to
> ___down_common() and implement __down_common() as:
> 
> 	static inline int __sched __down_common(...)
> 	{
> 		int ret;
> 		trace_contention_begin(sem, 0);
> 		ret = ___down_common(...);
> 		trace_contention_end(sem, ret);
> 		return ret;
> 	}
> 
> Thoughts?
>

But IMO inlining tracepoints is generally not a good idea.
It will increase kernel size a lot.

> Regards,
> Boqun
> 
> > +			tracing = true;
> > +		}
> >  		raw_spin_unlock_irq(&sem->lock);
> >  		timeout = schedule_timeout(timeout);
> >  		raw_spin_lock_irq(&sem->lock);
> > -		if (waiter.up)
> > +		if (waiter.up) {
> > +			trace_contention_end(sem, 0);
> >  			return 0;
> > +		}
> >  	}
> >  
> >   timed_out:
> > +	if (tracing)
> > +		trace_contention_end(sem, -ETIME);
> >  	list_del(&waiter.list);
> >  	return -ETIME;
> >  
> >   interrupted:
> > +	if (tracing)
> > +		trace_contention_end(sem, -EINTR);
> >  	list_del(&waiter.list);
> >  	return -EINTR;
> >  }
> > -- 
> > 2.35.1.894.gb6a874cedc-goog
> > 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-16 22:45 ` [PATCH 2/2] locking: Apply contention tracepoints in the slow path Namhyung Kim
  2022-03-17 13:45   ` Mathieu Desnoyers
  2022-03-17 18:19   ` Hyeonggon Yoo
@ 2022-03-18 12:55   ` Boqun Feng
  2022-03-18 13:24     ` Hyeonggon Yoo
  2022-03-18 16:43     ` Peter Zijlstra
  2 siblings, 2 replies; 29+ messages in thread
From: Boqun Feng @ 2022-03-18 12:55 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, LKML,
	Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf

On Wed, Mar 16, 2022 at 03:45:48PM -0700, Namhyung Kim wrote:
[...]
> @@ -209,6 +210,7 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
>  								long timeout)
>  {
>  	struct semaphore_waiter waiter;
> +	bool tracing = false;
>  
>  	list_add_tail(&waiter.list, &sem->wait_list);
>  	waiter.task = current;
> @@ -220,18 +222,28 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
>  		if (unlikely(timeout <= 0))
>  			goto timed_out;
>  		__set_current_state(state);
> +		if (!tracing) {
> +			trace_contention_begin(sem, 0);

This looks a little ugly ;-/ Maybe we can rename __down_common() to
___down_common() and implement __down_common() as:

	static inline int __sched __down_common(...)
	{
		int ret;
		trace_contention_begin(sem, 0);
		ret = ___down_common(...);
		trace_contention_end(sem, ret);
		return ret;
	}

Thoughts?

Regards,
Boqun

> +			tracing = true;
> +		}
>  		raw_spin_unlock_irq(&sem->lock);
>  		timeout = schedule_timeout(timeout);
>  		raw_spin_lock_irq(&sem->lock);
> -		if (waiter.up)
> +		if (waiter.up) {
> +			trace_contention_end(sem, 0);
>  			return 0;
> +		}
>  	}
>  
>   timed_out:
> +	if (tracing)
> +		trace_contention_end(sem, -ETIME);
>  	list_del(&waiter.list);
>  	return -ETIME;
>  
>   interrupted:
> +	if (tracing)
> +		trace_contention_end(sem, -EINTR);
>  	list_del(&waiter.list);
>  	return -EINTR;
>  }
> -- 
> 2.35.1.894.gb6a874cedc-goog
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-16 22:45 ` [PATCH 2/2] locking: Apply contention tracepoints in the slow path Namhyung Kim
  2022-03-17 13:45   ` Mathieu Desnoyers
@ 2022-03-17 18:19   ` Hyeonggon Yoo
  2022-03-18 21:43     ` Namhyung Kim
  2022-03-18 12:55   ` Boqun Feng
  2 siblings, 1 reply; 29+ messages in thread
From: Hyeonggon Yoo @ 2022-03-17 18:19 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
	Boqun Feng, LKML, Thomas Gleixner, Steven Rostedt,
	Byungchul Park, Paul E. McKenney, Mathieu Desnoyers,
	Arnd Bergmann, Radoslaw Burny, linux-arch, bpf

On Wed, Mar 16, 2022 at 03:45:48PM -0700, Namhyung Kim wrote:
> Adding the lock contention tracepoints in various lock function slow
> paths.  Note that each arch can define spinlocks differently, so I only
> added it to the generic qspinlock for now.
> 
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
>  kernel/locking/mutex.c        |  3 +++
>  kernel/locking/percpu-rwsem.c |  3 +++
>  kernel/locking/qrwlock.c      |  9 +++++++++
>  kernel/locking/qspinlock.c    |  5 +++++
>  kernel/locking/rtmutex.c      | 11 +++++++++++
>  kernel/locking/rwbase_rt.c    |  3 +++
>  kernel/locking/rwsem.c        |  9 +++++++++
>  kernel/locking/semaphore.c    | 14 +++++++++++++-
>  8 files changed, 56 insertions(+), 1 deletion(-)
>

[...]

> diff --git a/kernel/locking/semaphore.c b/kernel/locking/semaphore.c
> index 9ee381e4d2a4..e3c19668dfee 100644
> --- a/kernel/locking/semaphore.c
> +++ b/kernel/locking/semaphore.c
> @@ -32,6 +32,7 @@
>  #include <linux/semaphore.h>
>  #include <linux/spinlock.h>
>  #include <linux/ftrace.h>
> +#include <trace/events/lock.h>
>  
>  static noinline void __down(struct semaphore *sem);
>  static noinline int __down_interruptible(struct semaphore *sem);
> @@ -209,6 +210,7 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
>  								long timeout)
>  {
>  	struct semaphore_waiter waiter;
> +	bool tracing = false;
>  
>  	list_add_tail(&waiter.list, &sem->wait_list);
>  	waiter.task = current;
> @@ -220,18 +222,28 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
>  		if (unlikely(timeout <= 0))
>  			goto timed_out;
>  		__set_current_state(state);
> +		if (!tracing) {
> +			trace_contention_begin(sem, 0);
> +			tracing = true;
> +		}
>  		raw_spin_unlock_irq(&sem->lock);
>  		timeout = schedule_timeout(timeout);
>  		raw_spin_lock_irq(&sem->lock);
> -		if (waiter.up)
> +		if (waiter.up) {
> +			trace_contention_end(sem, 0);
>  			return 0;
> +		}
>  	}
>  
>   timed_out:
> +	if (tracing)
> +		trace_contention_end(sem, -ETIME);
>  	list_del(&waiter.list);
>  	return -ETIME;
>  
>   interrupted:
> +	if (tracing)
> +		trace_contention_end(sem, -EINTR);
>  	list_del(&waiter.list);
>  	return -EINTR;
>  }

why not simply remove the tracing variable and call trace_contention_begin()
earlier, like in rwsem? We can ignore it if ret != 0.
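
Roughly, an untested sketch of that approach, with the tracepoints
hoisted out of the loop and the flags argument kept at 0 as in the patch:

static inline int __sched __down_common(struct semaphore *sem, long state,
					long timeout)
{
	struct semaphore_waiter waiter;
	int ret;

	list_add_tail(&waiter.list, &sem->wait_list);
	waiter.task = current;
	waiter.up = false;

	/* begin once before sleeping, no need for a 'tracing' flag */
	trace_contention_begin(sem, 0);

	for (;;) {
		if (signal_pending_state(state, current)) {
			ret = -EINTR;
			goto dequeue;
		}
		if (unlikely(timeout <= 0)) {
			ret = -ETIME;
			goto dequeue;
		}
		__set_current_state(state);
		raw_spin_unlock_irq(&sem->lock);
		timeout = schedule_timeout(timeout);
		raw_spin_lock_irq(&sem->lock);
		if (waiter.up) {
			ret = 0;
			goto out;	/* __up() already removed us from the list */
		}
	}

 dequeue:
	list_del(&waiter.list);
 out:
	/* end once with the result instead of one call per exit path */
	trace_contention_end(sem, ret);
	return ret;
}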

-- 
Thank you, You are awesome!
Hyeonggon :-)

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-17 16:10     ` Steven Rostedt
@ 2022-03-17 16:43       ` Mathieu Desnoyers
  0 siblings, 0 replies; 29+ messages in thread
From: Mathieu Desnoyers @ 2022-03-17 16:43 UTC (permalink / raw)
  To: rostedt
  Cc: Namhyung Kim, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Waiman Long, Boqun Feng, linux-kernel, Thomas Gleixner,
	Byungchul Park, paulmck, Arnd Bergmann, Radoslaw Burny,
	linux-arch, bpf

----- On Mar 17, 2022, at 12:10 PM, rostedt rostedt@goodmis.org wrote:

> On Thu, 17 Mar 2022 09:45:28 -0400 (EDT)
> Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:
> 
>> > *sem, bool reader)
>> > 		schedule();
>> > 	}
>> > 	__set_current_state(TASK_RUNNING);
>> > +	trace_contention_end(sem, 0);
>> 
>> So for the reader-write locks, and percpu rwlocks, the "trace contention end"
>> will always
>> have ret=0. Likewise for qspinlock, qrwlock, and rtlock. It seems to be a waste
>> of trace
>> buffer space to always have space for a return value that is always 0.
>> 
>> Sorry if I missed prior discussions of that topic, but why introduce this single
>> "trace contention begin/end" muxer tracepoint with flags rather than
>> per-locking-type
>> tracepoint ? The per-locking-type tracepoint could be tuned to only have the
>> fields
>> that are needed for each locking type.
> 
> per-locking-type tracepoint will also add a bigger footprint.

If you are talking about code and data size footprint in the kernel, yes, we agree
there.

> 
> One extra byte is not an issue.

The implementation uses a 32-bit integer.

But given that this only traces contention, it's probably not as important to
shrink the event size as it would be if we were tracing every uncontended lock/unlock.

> This is just the tracepoints. You can still
> attach your own specific LTTng trace events that ignore the zero
> parameter, and can multiplex into specific types of trace events on your
> end.

Indeed, I could, as I do for system call entry/exit tracing. But I suspect it would
not be worth it for contended locks, because I don't expect those events to be frequent
enough in the trace to justify the added code/data footprint, as you pointed out.

> 
> I prefer the current approach as it keeps the tracing footprint down.

Likewise. I just wanted to make sure this was done knowing the trace buffer vs kernel
code/data overhead trade-off.

Thanks,

Mathieu

> 
> -- Steve

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-17 13:45   ` Mathieu Desnoyers
@ 2022-03-17 16:10     ` Steven Rostedt
  2022-03-17 16:43       ` Mathieu Desnoyers
  2022-03-18 21:34     ` Namhyung Kim
  1 sibling, 1 reply; 29+ messages in thread
From: Steven Rostedt @ 2022-03-17 16:10 UTC (permalink / raw)
  To: Mathieu Desnoyers
  Cc: Namhyung Kim, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Waiman Long, Boqun Feng, linux-kernel, Thomas Gleixner,
	Byungchul Park, paulmck, Arnd Bergmann, Radoslaw Burny,
	linux-arch, bpf

On Thu, 17 Mar 2022 09:45:28 -0400 (EDT)
Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:

> > *sem, bool reader)
> > 		schedule();
> > 	}
> > 	__set_current_state(TASK_RUNNING);
> > +	trace_contention_end(sem, 0);  
> 
> So for the reader-write locks, and percpu rwlocks, the "trace contention end" will always
> have ret=0. Likewise for qspinlock, qrwlock, and rtlock. It seems to be a waste of trace
> buffer space to always have space for a return value that is always 0.
> 
> Sorry if I missed prior discussions of that topic, but why introduce this single
> "trace contention begin/end" muxer tracepoint with flags rather than per-locking-type
> tracepoint ? The per-locking-type tracepoint could be tuned to only have the fields
> that are needed for each locking type.

per-locking-type tracepoint will also add a bigger footprint.

One extra byte is not an issue. This is just the tracepoints. You can still
attach your own specific LTTng trace events that ignore the zero
parameter, and can multiplex into specific types of trace events on your
end.

I prefer the current approach as it keeps the tracing footprint down.

-- Steve

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-16 22:45 ` [PATCH 2/2] locking: Apply contention tracepoints in the slow path Namhyung Kim
@ 2022-03-17 13:45   ` Mathieu Desnoyers
  2022-03-17 16:10     ` Steven Rostedt
  2022-03-18 21:34     ` Namhyung Kim
  2022-03-17 18:19   ` Hyeonggon Yoo
  2022-03-18 12:55   ` Boqun Feng
  2 siblings, 2 replies; 29+ messages in thread
From: Mathieu Desnoyers @ 2022-03-17 13:45 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
	Boqun Feng, linux-kernel, Thomas Gleixner, rostedt,
	Byungchul Park, paulmck, Arnd Bergmann, Radoslaw Burny,
	linux-arch, bpf

----- On Mar 16, 2022, at 6:45 PM, Namhyung Kim namhyung@kernel.org wrote:

> Adding the lock contention tracepoints in various lock function slow
> paths.  Note that each arch can define spinlocks differently, so I only
> added it to the generic qspinlock for now.
> 
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
> kernel/locking/mutex.c        |  3 +++
> kernel/locking/percpu-rwsem.c |  3 +++
> kernel/locking/qrwlock.c      |  9 +++++++++
> kernel/locking/qspinlock.c    |  5 +++++
> kernel/locking/rtmutex.c      | 11 +++++++++++
> kernel/locking/rwbase_rt.c    |  3 +++
> kernel/locking/rwsem.c        |  9 +++++++++
> kernel/locking/semaphore.c    | 14 +++++++++++++-
> 8 files changed, 56 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index ee2fd7614a93..c88deda77cf2 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -644,6 +644,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state,
> unsigned int subclas
> 	}
> 
> 	set_current_state(state);
> +	trace_contention_begin(lock, 0);

This should be LCB_F_SPIN rather than the hardcoded 0.

> 	for (;;) {
> 		bool first;
> 
> @@ -710,6 +711,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state,
> unsigned int subclas
> skip_wait:
> 	/* got the lock - cleanup and rejoice! */
> 	lock_acquired(&lock->dep_map, ip);
> +	trace_contention_end(lock, 0);
> 
> 	if (ww_ctx)
> 		ww_mutex_lock_acquired(ww, ww_ctx);
> @@ -721,6 +723,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state,
> unsigned int subclas
> err:
> 	__set_current_state(TASK_RUNNING);
> 	__mutex_remove_waiter(lock, &waiter);
> +	trace_contention_end(lock, ret);
> err_early_kill:
> 	raw_spin_unlock(&lock->wait_lock);
> 	debug_mutex_free_waiter(&waiter);
> diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
> index c9fdae94e098..833043613af6 100644
> --- a/kernel/locking/percpu-rwsem.c
> +++ b/kernel/locking/percpu-rwsem.c
> @@ -9,6 +9,7 @@
> #include <linux/sched/task.h>
> #include <linux/sched/debug.h>
> #include <linux/errno.h>
> +#include <trace/events/lock.h>
> 
> int __percpu_init_rwsem(struct percpu_rw_semaphore *sem,
> 			const char *name, struct lock_class_key *key)
> @@ -154,6 +155,7 @@ static void percpu_rwsem_wait(struct percpu_rw_semaphore
> *sem, bool reader)
> 	}
> 	spin_unlock_irq(&sem->waiters.lock);
> 
> +	trace_contention_begin(sem, LCB_F_PERCPU | (reader ? LCB_F_READ :
> LCB_F_WRITE));
> 	while (wait) {
> 		set_current_state(TASK_UNINTERRUPTIBLE);
> 		if (!smp_load_acquire(&wq_entry.private))
> @@ -161,6 +163,7 @@ static void percpu_rwsem_wait(struct percpu_rw_semaphore
> *sem, bool reader)
> 		schedule();
> 	}
> 	__set_current_state(TASK_RUNNING);
> +	trace_contention_end(sem, 0);

So for the reader-write locks, and percpu rwlocks, the "trace contention end" will always
have ret=0. Likewise for qspinlock, qrwlock, and rtlock. It seems to be a waste of trace
buffer space to always have space for a return value that is always 0.

Sorry if I missed prior discussions of that topic, but why introduce this single
"trace contention begin/end" muxer tracepoint with flags rather than per-locking-type
tracepoint ? The per-locking-type tracepoint could be tuned to only have the fields
that are needed for each locking type.

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 2/2] locking: Apply contention tracepoints in the slow path
  2022-03-16 22:45 [PATCH 0/2] locking: Add new lock contention tracepoints (v3) Namhyung Kim
@ 2022-03-16 22:45 ` Namhyung Kim
  2022-03-17 13:45   ` Mathieu Desnoyers
                     ` (2 more replies)
  0 siblings, 3 replies; 29+ messages in thread
From: Namhyung Kim @ 2022-03-16 22:45 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng
  Cc: LKML, Thomas Gleixner, Steven Rostedt, Byungchul Park,
	Paul E. McKenney, Mathieu Desnoyers, Arnd Bergmann,
	Radoslaw Burny, linux-arch, bpf

Adding the lock contention tracepoints in various lock function slow
paths.  Note that each arch can define spinlocks differently, so I only
added it to the generic qspinlock for now.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 kernel/locking/mutex.c        |  3 +++
 kernel/locking/percpu-rwsem.c |  3 +++
 kernel/locking/qrwlock.c      |  9 +++++++++
 kernel/locking/qspinlock.c    |  5 +++++
 kernel/locking/rtmutex.c      | 11 +++++++++++
 kernel/locking/rwbase_rt.c    |  3 +++
 kernel/locking/rwsem.c        |  9 +++++++++
 kernel/locking/semaphore.c    | 14 +++++++++++++-
 8 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index ee2fd7614a93..c88deda77cf2 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -644,6 +644,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	}
 
 	set_current_state(state);
+	trace_contention_begin(lock, 0);
 	for (;;) {
 		bool first;
 
@@ -710,6 +711,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 skip_wait:
 	/* got the lock - cleanup and rejoice! */
 	lock_acquired(&lock->dep_map, ip);
+	trace_contention_end(lock, 0);
 
 	if (ww_ctx)
 		ww_mutex_lock_acquired(ww, ww_ctx);
@@ -721,6 +723,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 err:
 	__set_current_state(TASK_RUNNING);
 	__mutex_remove_waiter(lock, &waiter);
+	trace_contention_end(lock, ret);
 err_early_kill:
 	raw_spin_unlock(&lock->wait_lock);
 	debug_mutex_free_waiter(&waiter);
diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
index c9fdae94e098..833043613af6 100644
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -9,6 +9,7 @@
 #include <linux/sched/task.h>
 #include <linux/sched/debug.h>
 #include <linux/errno.h>
+#include <trace/events/lock.h>
 
 int __percpu_init_rwsem(struct percpu_rw_semaphore *sem,
 			const char *name, struct lock_class_key *key)
@@ -154,6 +155,7 @@ static void percpu_rwsem_wait(struct percpu_rw_semaphore *sem, bool reader)
 	}
 	spin_unlock_irq(&sem->waiters.lock);
 
+	trace_contention_begin(sem, LCB_F_PERCPU | (reader ? LCB_F_READ : LCB_F_WRITE));
 	while (wait) {
 		set_current_state(TASK_UNINTERRUPTIBLE);
 		if (!smp_load_acquire(&wq_entry.private))
@@ -161,6 +163,7 @@ static void percpu_rwsem_wait(struct percpu_rw_semaphore *sem, bool reader)
 		schedule();
 	}
 	__set_current_state(TASK_RUNNING);
+	trace_contention_end(sem, 0);
 }
 
 bool __sched __percpu_down_read(struct percpu_rw_semaphore *sem, bool try)
diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index ec36b73f4733..b9f6f963d77f 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -12,6 +12,7 @@
 #include <linux/percpu.h>
 #include <linux/hardirq.h>
 #include <linux/spinlock.h>
+#include <trace/events/lock.h>
 
 /**
  * queued_read_lock_slowpath - acquire read lock of a queue rwlock
@@ -34,6 +35,8 @@ void queued_read_lock_slowpath(struct qrwlock *lock)
 	}
 	atomic_sub(_QR_BIAS, &lock->cnts);
 
+	trace_contention_begin(lock, LCB_F_READ | LCB_F_SPIN);
+
 	/*
 	 * Put the reader into the wait queue
 	 */
@@ -51,6 +54,8 @@ void queued_read_lock_slowpath(struct qrwlock *lock)
 	 * Signal the next one in queue to become queue head
 	 */
 	arch_spin_unlock(&lock->wait_lock);
+
+	trace_contention_end(lock, 0);
 }
 EXPORT_SYMBOL(queued_read_lock_slowpath);
 
@@ -62,6 +67,8 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
 {
 	int cnts;
 
+	trace_contention_begin(lock, LCB_F_WRITE | LCB_F_SPIN);
+
 	/* Put the writer into the wait queue */
 	arch_spin_lock(&lock->wait_lock);
 
@@ -79,5 +86,7 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
 	} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));
 unlock:
 	arch_spin_unlock(&lock->wait_lock);
+
+	trace_contention_end(lock, 0);
 }
 EXPORT_SYMBOL(queued_write_lock_slowpath);
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index cbff6ba53d56..65a9a10caa6f 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -22,6 +22,7 @@
 #include <linux/prefetch.h>
 #include <asm/byteorder.h>
 #include <asm/qspinlock.h>
+#include <trace/events/lock.h>
 
 /*
  * Include queued spinlock statistics code
@@ -401,6 +402,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	idx = node->count++;
 	tail = encode_tail(smp_processor_id(), idx);
 
+	trace_contention_begin(lock, LCB_F_SPIN);
+
 	/*
 	 * 4 nodes are allocated based on the assumption that there will
 	 * not be nested NMIs taking spinlocks. That may not be true in
@@ -554,6 +557,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	pv_kick_node(lock, next);
 
 release:
+	trace_contention_end(lock, 0);
+
 	/*
 	 * release the node
 	 */
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 8555c4efe97c..7779ee8abc2a 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -24,6 +24,8 @@
 #include <linux/sched/wake_q.h>
 #include <linux/ww_mutex.h>
 
+#include <trace/events/lock.h>
+
 #include "rtmutex_common.h"
 
 #ifndef WW_RT
@@ -1579,6 +1581,8 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 
 	set_current_state(state);
 
+	trace_contention_begin(lock, LCB_F_RT);
+
 	ret = task_blocks_on_rt_mutex(lock, waiter, current, ww_ctx, chwalk);
 	if (likely(!ret))
 		ret = rt_mutex_slowlock_block(lock, ww_ctx, state, NULL, waiter);
@@ -1601,6 +1605,9 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 	 * unconditionally. We might have to fix that up.
 	 */
 	fixup_rt_mutex_waiters(lock);
+
+	trace_contention_end(lock, ret);
+
 	return ret;
 }
 
@@ -1683,6 +1690,8 @@ static void __sched rtlock_slowlock_locked(struct rt_mutex_base *lock)
 	/* Save current state and set state to TASK_RTLOCK_WAIT */
 	current_save_and_set_rtlock_wait_state();
 
+	trace_contention_begin(lock, LCB_F_RT);
+
 	task_blocks_on_rt_mutex(lock, &waiter, current, NULL, RT_MUTEX_MIN_CHAINWALK);
 
 	for (;;) {
@@ -1712,6 +1721,8 @@ static void __sched rtlock_slowlock_locked(struct rt_mutex_base *lock)
 	 */
 	fixup_rt_mutex_waiters(lock);
 	debug_rt_mutex_free_waiter(&waiter);
+
+	trace_contention_end(lock, 0);
 }
 
 static __always_inline void __sched rtlock_slowlock(struct rt_mutex_base *lock)
diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
index 6fd3162e4098..ec7b1fda7982 100644
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -247,11 +247,13 @@ static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
 		goto out_unlock;
 
 	rwbase_set_and_save_current_state(state);
+	trace_contention_begin(rwb, LCB_F_WRITE | LCB_F_RT);
 	for (;;) {
 		/* Optimized out for rwlocks */
 		if (rwbase_signal_pending_state(state, current)) {
 			rwbase_restore_current_state();
 			__rwbase_write_unlock(rwb, 0, flags);
+			trace_contention_end(rwb, -EINTR);
 			return -EINTR;
 		}
 
@@ -265,6 +267,7 @@ static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
 		set_current_state(state);
 	}
 	rwbase_restore_current_state();
+	trace_contention_end(rwb, 0);
 
 out_unlock:
 	raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index acde5d6f1254..465db7bd84f8 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -27,6 +27,7 @@
 #include <linux/export.h>
 #include <linux/rwsem.h>
 #include <linux/atomic.h>
+#include <trace/events/lock.h>
 
 #ifndef CONFIG_PREEMPT_RT
 #include "lock_events.h"
@@ -1014,6 +1015,8 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
 	raw_spin_unlock_irq(&sem->wait_lock);
 	wake_up_q(&wake_q);
 
+	trace_contention_begin(sem, LCB_F_READ);
+
 	/* wait to be given the lock */
 	for (;;) {
 		set_current_state(state);
@@ -1035,6 +1038,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
 
 	__set_current_state(TASK_RUNNING);
 	lockevent_inc(rwsem_rlock);
+	trace_contention_end(sem, 0);
 	return sem;
 
 out_nolock:
@@ -1042,6 +1046,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
 	raw_spin_unlock_irq(&sem->wait_lock);
 	__set_current_state(TASK_RUNNING);
 	lockevent_inc(rwsem_rlock_fail);
+	trace_contention_end(sem, -EINTR);
 	return ERR_PTR(-EINTR);
 }
 
@@ -1109,6 +1114,8 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 wait:
 	/* wait until we successfully acquire the lock */
 	set_current_state(state);
+	trace_contention_begin(sem, LCB_F_WRITE);
+
 	for (;;) {
 		if (rwsem_try_write_lock(sem, &waiter)) {
 			/* rwsem_try_write_lock() implies ACQUIRE on success */
@@ -1148,6 +1155,7 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 	__set_current_state(TASK_RUNNING);
 	raw_spin_unlock_irq(&sem->wait_lock);
 	lockevent_inc(rwsem_wlock);
+	trace_contention_end(sem, 0);
 	return sem;
 
 out_nolock:
@@ -1159,6 +1167,7 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 	raw_spin_unlock_irq(&sem->wait_lock);
 	wake_up_q(&wake_q);
 	lockevent_inc(rwsem_wlock_fail);
+	trace_contention_end(sem, -EINTR);
 	return ERR_PTR(-EINTR);
 }
 
diff --git a/kernel/locking/semaphore.c b/kernel/locking/semaphore.c
index 9ee381e4d2a4..e3c19668dfee 100644
--- a/kernel/locking/semaphore.c
+++ b/kernel/locking/semaphore.c
@@ -32,6 +32,7 @@
 #include <linux/semaphore.h>
 #include <linux/spinlock.h>
 #include <linux/ftrace.h>
+#include <trace/events/lock.h>
 
 static noinline void __down(struct semaphore *sem);
 static noinline int __down_interruptible(struct semaphore *sem);
@@ -209,6 +210,7 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
 								long timeout)
 {
 	struct semaphore_waiter waiter;
+	bool tracing = false;
 
 	list_add_tail(&waiter.list, &sem->wait_list);
 	waiter.task = current;
@@ -220,18 +222,28 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
 		if (unlikely(timeout <= 0))
 			goto timed_out;
 		__set_current_state(state);
+		if (!tracing) {
+			trace_contention_begin(sem, 0);
+			tracing = true;
+		}
 		raw_spin_unlock_irq(&sem->lock);
 		timeout = schedule_timeout(timeout);
 		raw_spin_lock_irq(&sem->lock);
-		if (waiter.up)
+		if (waiter.up) {
+			trace_contention_end(sem, 0);
 			return 0;
+		}
 	}
 
  timed_out:
+	if (tracing)
+		trace_contention_end(sem, -ETIME);
 	list_del(&waiter.list);
 	return -ETIME;
 
  interrupted:
+	if (tracing)
+		trace_contention_end(sem, -EINTR);
 	list_del(&waiter.list);
 	return -EINTR;
 }
-- 
2.35.1.894.gb6a874cedc-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2022-04-01  9:26 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-22 18:57 [PATCH 0/2] locking: Add new lock contention tracepoints (v4) Namhyung Kim
2022-03-22 18:57 ` [PATCH 1/2] locking: Add lock contention tracepoints Namhyung Kim
2022-03-22 18:57 ` [PATCH 2/2] locking: Apply contention tracepoints in the slow path Namhyung Kim
2022-03-28 11:29   ` Peter Zijlstra
2022-03-28 17:41     ` Namhyung Kim
2022-03-28 11:39   ` Peter Zijlstra
2022-03-28 17:48     ` Namhyung Kim
2022-03-30 11:08       ` Peter Zijlstra
2022-03-30 19:03         ` Namhyung Kim
2022-03-31 11:59           ` Peter Zijlstra
2022-04-01  6:26             ` Namhyung Kim
2022-04-01  9:25               ` Peter Zijlstra
  -- strict thread matches above, loose matches on Subject: below --
2022-03-16 22:45 [PATCH 0/2] locking: Add new lock contention tracepoints (v3) Namhyung Kim
2022-03-16 22:45 ` [PATCH 2/2] locking: Apply contention tracepoints in the slow path Namhyung Kim
2022-03-17 13:45   ` Mathieu Desnoyers
2022-03-17 16:10     ` Steven Rostedt
2022-03-17 16:43       ` Mathieu Desnoyers
2022-03-18 21:34     ` Namhyung Kim
2022-03-17 18:19   ` Hyeonggon Yoo
2022-03-18 21:43     ` Namhyung Kim
2022-03-18 12:55   ` Boqun Feng
2022-03-18 13:24     ` Hyeonggon Yoo
2022-03-18 13:28       ` Hyeonggon Yoo
2022-03-18 16:43     ` Peter Zijlstra
2022-03-18 21:55       ` Namhyung Kim
2022-03-18 22:07         ` Steven Rostedt
2022-03-19  0:11           ` Namhyung Kim
2022-03-22  5:31             ` Namhyung Kim
2022-03-22 12:59               ` Steven Rostedt
2022-03-22 16:39                 ` Namhyung Kim
