From: Manfred Spraul <manfred@colorfullife.com>
To: Alan Stern <stern@rowland.harvard.edu>
Cc: paulmck@linux.vnet.ibm.com, linux-kernel@vger.kernel.org,
	netfilter-devel@vger.kernel.org, netdev@vger.kernel.org,
	oleg@redhat.com, akpm@linux-foundation.org, mingo@redhat.com,
	dave@stgolabs.net, tj@kernel.org, arnd@arndb.de,
	linux-arch@vger.kernel.org, will.deacon@arm.com,
	peterz@infradead.org, parri.andrea@gmail.com,
	torvalds@linux-foundation.org,
	Pablo Neira Ayuso <pablo@netfilter.org>,
	Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>,
	Florian Westphal <fw@strlen.de>,
	"David S. Miller" <davem@davemloft.net>,
	coreteam@netfilter.org, 1vier1@web.de
Subject: Re: [PATCH RFC 01/26] netfilter: Replace spin_unlock_wait() with lock/unlock pair
Date: Thu, 6 Jul 2017 20:43:28 +0200	[thread overview]
Message-ID: <ee509620-8ba5-be45-743e-f077a457c01d@colorfullife.com> (raw)
In-Reply-To: <Pine.LNX.4.44L0.1707031543170.2027-100000@iolanthe.rowland.org>

[-- Attachment #1: Type: text/plain, Size: 519 bytes --]

Hi Alan,

On 07/03/2017 09:57 PM, Alan Stern wrote:
>
> (Alternatively, you could make nf_conntrack_all_unlock() do a
> lock+unlock on all the locks in the array, just like
> nf_conntrack_all_lock().  But of course, that would be a lot less
> efficient.)
Hmmmm.

Is there someone with a weakly ordered system who could test this?
semop() has a very short hot path.

Either with aim9.shared_memory.ops_per_sec or with:

#sem-scalebench -t 10 -m 0
https://github.com/manfred-colorfu/ipcscale/blob/master/sem-scalebench.cpp
--
     Manfred

[-- Attachment #2: 0002-ipc-sem.c-avoid-smp_load_acuqire-in-the-hot-path.patch --]
[-- Type: text/x-patch, Size: 3330 bytes --]

From b549e0281b66124b62aa94543f91b0e616abaf52 Mon Sep 17 00:00:00 2001
From: Manfred Spraul <manfred@colorfullife.com>
Date: Thu, 6 Jul 2017 20:05:44 +0200
Subject: [PATCH 2/2] ipc/sem.c: avoid smp_load_acquire() in the hot path

Alan Stern came up with an interesting idea:
If we perform a spin_lock()/spin_unlock() pair in the slow path, then
we can skip the smp_load_acquire() in the hot path.

What do you think?

* When the smp_mb() was removed from the hot path, it gave a user-space
  visible speed-up of 11%:

  https://lists.01.org/pipermail/lkp/2017-February/005520.html

* On x86, there is no improvement: smp_load_acquire() is just READ_ONCE()
  plus a compiler barrier.

* Slowing down the slow path should not hurt:
  Due to the hysteresis code, the slow path is at least a factor of 10
  rarer than it was before.

In particular: who is able to test it?

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 ipc/sem.c | 33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index 947dc23..75a4358 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -186,16 +186,15 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it);
  *	* either local or global sem_lock() for read.
  *
  * Memory ordering:
- * Most ordering is enforced by using spin_lock() and spin_unlock().
+ * All ordering is enforced by using spin_lock() and spin_unlock().
  * The special case is use_global_lock:
  * Setting it from non-zero to 0 is a RELEASE, this is ensured by
- * using smp_store_release().
- * Testing if it is non-zero is an ACQUIRE, this is ensured by using
- * smp_load_acquire().
- * Setting it from 0 to non-zero must be ordered with regards to
- * this smp_load_acquire(), this is guaranteed because the smp_load_acquire()
- * is inside a spin_lock() and after a write from 0 to non-zero a
- * spin_lock()+spin_unlock() is done.
+ * performing spin_lock()/spin_unlock() on every semaphore before setting
+ * it to 0.
+ * Setting it from 0 to non-zero is an ACQUIRE, this is ensured by
+ * performing spin_lock()/spin_unlock() on every semaphore after setting
+ * it to non-zero.
+ * Testing if it is non-zero is done within spin_lock(), no barrier needed.
  */
 
 #define sc_semmsl	sem_ctls[0]
@@ -325,13 +324,20 @@ static void complexmode_tryleave(struct sem_array *sma)
 		return;
 	}
 	if (sma->use_global_lock == 1) {
+		int i;
+		struct sem *sem;
 		/*
 		 * Immediately after setting use_global_lock to 0,
-		 * a simple op can start. Thus: all memory writes
-		 * performed by the current operation must be visible
-		 * before we set use_global_lock to 0.
+		 * a simple op can start.
+		 * Perform a full lock/unlock, to guarantee memory
+		 * ordering.
 		 */
-		smp_store_release(&sma->use_global_lock, 0);
+		for (i = 0; i < sma->sem_nsems; i++) {
+			sem = sma->sem_base + i;
+			spin_lock(&sem->lock);
+			spin_unlock(&sem->lock);
+		}
+		sma->use_global_lock = 0;
 	} else {
 		sma->use_global_lock--;
 	}
@@ -379,8 +385,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 		 */
 		spin_lock(&sem->lock);
 
-		/* pairs with smp_store_release() */
-		if (!smp_load_acquire(&sma->use_global_lock)) {
+		if (!sma->use_global_lock) {
 			/* fast path successful! */
 			return sops->sem_num;
 		}
-- 
2.9.4
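
For illustration, the shape of the resulting protocol can be sketched as a
userspace pthread program. Everything below is a hand-written analogue with
made-up names, not the kernel code, and it drops the global-lock and
hysteresis details of the real ipc/sem.c:

```c
#include <pthread.h>
#include <stdatomic.h>

/* Userspace pthread analogue of the pattern in the patch above.
 * All names and the structure are illustrative only. */
#define NSEMS 4

struct sem {
	pthread_mutex_t lock;
};

struct sem_array {
	_Atomic int use_global_lock;	/* non-zero: simple ops must not run */
	struct sem sems[NSEMS];
};

static void sem_array_init(struct sem_array *sma)
{
	atomic_init(&sma->use_global_lock, 1);	/* start in "complex" mode */
	for (int i = 0; i < NSEMS; i++)
		pthread_mutex_init(&sma->sems[i].lock, NULL);
}

/* Slow path: instead of smp_store_release(), pass through every per-sem
 * lock before clearing the flag.  A later fast path that takes its lock
 * and observes the flag as 0 is thereby ordered behind everything the
 * complex operation did. */
static void complexmode_tryleave(struct sem_array *sma)
{
	for (int i = 0; i < NSEMS; i++) {
		pthread_mutex_lock(&sma->sems[i].lock);
		pthread_mutex_unlock(&sma->sems[i].lock);
	}
	atomic_store_explicit(&sma->use_global_lock, 0,
			      memory_order_relaxed);
}

/* Fast path: test the flag with a plain (relaxed) load inside the
 * per-sem lock; no acquire barrier.  Returns 1 with sems[semnum].lock
 * still held on success, 0 after dropping it on failure. */
static int sem_lock_fastpath(struct sem_array *sma, int semnum)
{
	pthread_mutex_lock(&sma->sems[semnum].lock);
	if (!atomic_load_explicit(&sma->use_global_lock,
				  memory_order_relaxed))
		return 1;	/* caller proceeds under the per-sem lock */
	pthread_mutex_unlock(&sma->sems[semnum].lock);
	return 0;		/* caller falls back to the global lock */
}
```

The lock/unlock sweep plays the role of the removed
smp_store_release()/smp_load_acquire() pair: the mutex acquire/release in
both paths provides the ordering, so the flag itself can be accessed with
plain (relaxed) operations.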

