netfilter-devel.vger.kernel.org archive mirror
* [PATCH 0/2] Fixes for KCSAN findings
@ 2021-06-27 16:19 Manfred Spraul
  2021-06-27 16:19 ` [PATCH 1/2] net/netfilter/nf_conntrack_core: Mark access for KCSAN Manfred Spraul
  2021-06-27 16:19 ` [PATCH 2/2] ipc/sem.c: use READ_ONCE()/WRITE_ONCE() for use_global_lock Manfred Spraul
  0 siblings, 2 replies; 4+ messages in thread
From: Manfred Spraul @ 2021-06-27 16:19 UTC (permalink / raw)
  To: LKML, Andrew Morton, netfilter-devel
  Cc: Davidlohr Bueso, Paul E . McKenney, 1vier1, Manfred Spraul

Extended patch, derived from a KCSAN finding:

- Fix for nf_conntrack_core.
  Unfortunately untested; I don't have a conntrack setup.
- Fix for ipc/sem.c

@Netfilter-team: Could you integrate patch #1?

@Andrew: Could you add the ipc patch to your mm tree, as candidate for
linux-next?

--
	Manfred


^ permalink raw reply	[flat|nested] 4+ messages in thread

* [PATCH 1/2] net/netfilter/nf_conntrack_core: Mark access for KCSAN
  2021-06-27 16:19 [PATCH 0/2] Fixes for KCSAN findings Manfred Spraul
@ 2021-06-27 16:19 ` Manfred Spraul
  2021-07-06 23:06   ` Pablo Neira Ayuso
  2021-06-27 16:19 ` [PATCH 2/2] ipc/sem.c: use READ_ONCE()/WRITE_ONCE() for use_global_lock Manfred Spraul
  1 sibling, 1 reply; 4+ messages in thread
From: Manfred Spraul @ 2021-06-27 16:19 UTC (permalink / raw)
  To: LKML, Andrew Morton, netfilter-devel
  Cc: Davidlohr Bueso, Paul E . McKenney, 1vier1, Manfred Spraul

KCSAN detected an intentional data race in ipc/sem.c.

As nf_conntrack_lock() uses the same algorithm, update
nf_conntrack_core as well:

nf_conntrack_lock() contains
  a1) spin_lock()
  a2) smp_load_acquire(nf_conntrack_locks_all).

a1) actually accesses one lock from an array of locks.

nf_conntrack_all_lock() contains
  b1) nf_conntrack_locks_all = true (normal write)
  b2) spin_lock()
  b3) spin_unlock()

b2) and b3) are done for every lock.

This guarantees that nf_conntrack_all_lock() prevents any
concurrent nf_conntrack_lock() owners:
If a thread is past a1), then b2) will block until that thread releases
the lock.
If the thread is before a1), then b3)+a1) ensure the write b1) is
visible, thus a2) is guaranteed to see the updated value.

But: this is only the latest time at which b1) becomes visible.
b1) may also become visible an undefined amount of time
before b3), and thus KCSAN will notice a data race.

In addition, the compiler might be too clever.

Solution: Use WRITE_ONCE().

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 net/netfilter/nf_conntrack_core.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index e0befcf8113a..d3f3c91b770e 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -153,7 +153,15 @@ static void nf_conntrack_all_lock(void)
 
 	spin_lock(&nf_conntrack_locks_all_lock);
 
-	nf_conntrack_locks_all = true;
+	/* For nf_conntrack_locks_all, only the latest time when another
+	 * CPU will see an update is controlled by the "release" of the
+	 * spin_lock below.
+	 * The earliest time is not controlled, and thus KCSAN could detect
+	 * a race when nf_conntrack_lock() reads the variable.
+	 * WRITE_ONCE() is used to ensure the compiler will not
+	 * optimize the write.
+	 */
+	WRITE_ONCE(nf_conntrack_locks_all, true);
 
 	for (i = 0; i < CONNTRACK_LOCKS; i++) {
 		spin_lock(&nf_conntrack_locks[i]);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 4+ messages in thread

* [PATCH 2/2] ipc/sem.c: use READ_ONCE()/WRITE_ONCE() for use_global_lock
  2021-06-27 16:19 [PATCH 0/2] Fixes for KCSAN findings Manfred Spraul
  2021-06-27 16:19 ` [PATCH 1/2] net/netfilter/nf_conntrack_core: Mark access for KCSAN Manfred Spraul
@ 2021-06-27 16:19 ` Manfred Spraul
  1 sibling, 0 replies; 4+ messages in thread
From: Manfred Spraul @ 2021-06-27 16:19 UTC (permalink / raw)
  To: LKML, Andrew Morton, netfilter-devel
  Cc: Davidlohr Bueso, Paul E . McKenney, 1vier1, Manfred Spraul

The patch solves three weaknesses in ipc/sem.c:

1) The initial read of use_global_lock in sem_lock() is an
intentional race. KCSAN detects these accesses and prints
a warning.

2) The code assumes that plain C read/writes are not
mangled by the CPU or the compiler.

3) The comment in sysvipc_sem_proc_show() was hard to
understand: the rest of the comments in ipc/sem.c speak
about sem_perm.lock, and suddenly this function speaks
about ipc_lock_object().

To solve 1) and 2), use READ_ONCE()/WRITE_ONCE().
Plain C reads are used in code that owns sma->sem_perm.lock.

The comment is updated to solve 3).

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
---
 ipc/sem.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index bf534c74293e..b7608502f9d8 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -217,6 +217,8 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it);
  * this smp_load_acquire(), this is guaranteed because the smp_load_acquire()
  * is inside a spin_lock() and after a write from 0 to non-zero a
  * spin_lock()+spin_unlock() is done.
+ * To prevent the compiler/CPU from temporarily writing 0 to
+ * use_global_lock, READ_ONCE()/WRITE_ONCE() is used.
  *
  * 2) queue.status: (SEM_BARRIER_2)
  * Initialization is done while holding sem_lock(), so no further barrier is
@@ -342,10 +344,10 @@ static void complexmode_enter(struct sem_array *sma)
 		 * Nothing to do, just reset the
 		 * counter until we return to simple mode.
 		 */
-		sma->use_global_lock = USE_GLOBAL_LOCK_HYSTERESIS;
+		WRITE_ONCE(sma->use_global_lock, USE_GLOBAL_LOCK_HYSTERESIS);
 		return;
 	}
-	sma->use_global_lock = USE_GLOBAL_LOCK_HYSTERESIS;
+	WRITE_ONCE(sma->use_global_lock, USE_GLOBAL_LOCK_HYSTERESIS);
 
 	for (i = 0; i < sma->sem_nsems; i++) {
 		sem = &sma->sems[i];
@@ -371,7 +373,8 @@ static void complexmode_tryleave(struct sem_array *sma)
 		/* See SEM_BARRIER_1 for purpose/pairing */
 		smp_store_release(&sma->use_global_lock, 0);
 	} else {
-		sma->use_global_lock--;
+		WRITE_ONCE(sma->use_global_lock,
+				sma->use_global_lock-1);
 	}
 }
 
@@ -412,7 +415,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 	 * Initial check for use_global_lock. Just an optimization,
 	 * no locking, no memory barrier.
 	 */
-	if (!sma->use_global_lock) {
+	if (!READ_ONCE(sma->use_global_lock)) {
 		/*
 		 * It appears that no complex operation is around.
 		 * Acquire the per-semaphore lock.
@@ -2435,7 +2438,8 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it)
 
 	/*
 	 * The proc interface isn't aware of sem_lock(), it calls
-	 * ipc_lock_object() directly (in sysvipc_find_ipc).
+	 * ipc_lock_object(), i.e. spin_lock(&sma->sem_perm.lock).
+	 * (in sysvipc_find_ipc)
 	 * In order to stay compatible with sem_lock(), we must
 	 * enter / leave complex_mode.
 	 */
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [PATCH 1/2] net/netfilter/nf_conntrack_core: Mark access for KCSAN
  2021-06-27 16:19 ` [PATCH 1/2] net/netfilter/nf_conntrack_core: Mark access for KCSAN Manfred Spraul
@ 2021-07-06 23:06   ` Pablo Neira Ayuso
  0 siblings, 0 replies; 4+ messages in thread
From: Pablo Neira Ayuso @ 2021-07-06 23:06 UTC (permalink / raw)
  To: Manfred Spraul
  Cc: LKML, Andrew Morton, netfilter-devel, Davidlohr Bueso,
	Paul E . McKenney, 1vier1

On Sun, Jun 27, 2021 at 06:19:18PM +0200, Manfred Spraul wrote:
> KCSAN detected an intentional data race in ipc/sem.c.
> 
> As nf_conntrack_lock() uses the same algorithm, update
> nf_conntrack_core as well:
> 
> nf_conntrack_lock() contains
>   a1) spin_lock()
>   a2) smp_load_acquire(nf_conntrack_locks_all).
> 
> a1) actually accesses one lock from an array of locks.
> 
> nf_conntrack_all_lock() contains
>   b1) nf_conntrack_locks_all = true (normal write)
>   b2) spin_lock()
>   b3) spin_unlock()
> 
> b2) and b3) are done for every lock.
> 
> This guarantees that nf_conntrack_all_lock() prevents any
> concurrent nf_conntrack_lock() owners:
> If a thread is past a1), then b2) will block until that thread releases
> the lock.
> If the thread is before a1), then b3)+a1) ensure the write b1) is
> visible, thus a2) is guaranteed to see the updated value.
> 
> But: this is only the latest time at which b1) becomes visible.
> b1) may also become visible an undefined amount of time
> before b3), and thus KCSAN will notice a data race.
> 
> In addition, the compiler might be too clever.
> 
> Solution: Use WRITE_ONCE().

Applied, thanks.

^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2021-07-06 23:06 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-27 16:19 [PATCH 0/2] Fixes for KCSAN findings Manfred Spraul
2021-06-27 16:19 ` [PATCH 1/2] net/netfilter/nf_conntrack_core: Mark access for KCSAN Manfred Spraul
2021-07-06 23:06   ` Pablo Neira Ayuso
2021-06-27 16:19 ` [PATCH 2/2] ipc/sem.c: use READ_ONCE()/WRITE_ONCE() for use_global_lock Manfred Spraul
