Date: Mon, 31 Jul 2017 11:36:12 +0200
From: Peter Zijlstra
To: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
	tglx@linutronix.de, mingo@kernel.org, pbonzini@redhat.com,
	dvyukov@google.com, hpa@zytor.com
Cc: linux-tip-commits@vger.kernel.org, Steven Rostedt
Subject: Re: [tip:locking/urgent] locking/static_key: Fix concurrent static_key_slow_inc()
Message-ID: <20170731093612.7v23kxkk47i56io6@hirez.programming.kicks-ass.net>
References: <1466527937-69798-1-git-send-email-pbonzini@redhat.com>

On Fri, Jun 24, 2016 at 01:59:06AM -0700, tip-bot for Paolo Bonzini wrote:
> +++ b/kernel/jump_label.c
> @@ -58,13 +58,36 @@ static void jump_label_update(struct static_key *key);
>  
>  void static_key_slow_inc(struct static_key *key)
>  {
> +	int v, v1;
> +
>  	STATIC_KEY_CHECK_USE();
> -	if (atomic_inc_not_zero(&key->enabled))
> -		return;
> +
> +	/*
> +	 * Careful if we get concurrent static_key_slow_inc() calls;
> +	 * later calls must wait for the first one to _finish_ the
> +	 * jump_label_update() process. At the same time, however,
> +	 * the jump_label_update() call below wants to see
> +	 * static_key_enabled(&key) for jumps to be updated properly.
> +	 *
> +	 * So give a special meaning to negative key->enabled: it sends
> +	 * static_key_slow_inc() down the slow path, and it is non-zero
> +	 * so it counts as "enabled" in jump_label_update(). Note that
> +	 * atomic_inc_unless_negative() checks >= 0, so roll our own.
> +	 */
> +	for (v = atomic_read(&key->enabled); v > 0; v = v1) {
> +		v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
> +		if (likely(v1 == v))
> +			return;
> +	}
>  
>  	jump_label_lock();
> -	if (atomic_inc_return(&key->enabled) == 1)
> +	if (atomic_read(&key->enabled) == 0) {
> +		atomic_set(&key->enabled, -1);
>  		jump_label_update(key);
> +		atomic_set(&key->enabled, 1);
> +	} else {
> +		atomic_inc(&key->enabled);
> +	}
>  	jump_label_unlock();
>  }

So I was recently looking at this again and got all paranoid. Do we want
something like so?

---
 kernel/jump_label.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index d11c506a6ac3..69d07e78e48b 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -103,7 +103,7 @@ EXPORT_SYMBOL_GPL(static_key_disable);
 
 void static_key_slow_inc(struct static_key *key)
 {
-	int v, v1;
+	int v;
 
 	STATIC_KEY_CHECK_USE();
 
@@ -119,18 +119,28 @@ void static_key_slow_inc(struct static_key *key)
 	 * so it counts as "enabled" in jump_label_update(). Note that
 	 * atomic_inc_unless_negative() checks >= 0, so roll our own.
 	 */
-	for (v = atomic_read(&key->enabled); v > 0; v = v1) {
-		v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
-		if (likely(v1 == v))
+	for (v = atomic_read(&key->enabled); v > 0;) {
+		if (atomic_try_cmpxchg(&key->enabled, &v, v+1))
 			return;
 	}
 
 	cpus_read_lock();
 	jump_label_lock();
-	if (atomic_read(&key->enabled) == 0) {
-		atomic_set(&key->enabled, -1);
+
+	if (atomic_try_cmpxchg(&key->enabled, 0, -1)) {
+		/*
+		 * smp_mb implied, must have -1 before proceeding to change
+		 * text.
+		 */
 		jump_label_update(key);
-		atomic_set(&key->enabled, 1);
+
+		/*
+		 * smp_mb, such that we finish modifying text before enabling
+		 * the fast path. Probably not needed because modifying text is
+		 * likely to serialize everything. Be paranoid.
+		 */
+		smp_mb__before_atomic();
+		atomic_add(2, &key->enabled); /* -1 -> 1 */
 	} else {
 		atomic_inc(&key->enabled);
 	}
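
[Editor's note: for readers following along, below is a stand-alone,
user-space sketch of the same 0 -> -1 -> 1 handshake the patch relies on,
written with C11 atomics and a pthread mutex instead of the kernel
primitives. The names (fake_key, fake_key_slow_inc, expensive_update,
fake_jump_label_mutex) are illustrative stand-ins and not part of the
patch; this only models the ordering scheme, not the text-patching itself.]

	#include <pthread.h>
	#include <stdatomic.h>

	/*
	 * Stand-in for struct static_key. 'enabled' uses the same encoding
	 * as the patch: 0 = off, -1 = first enabler still doing the slow
	 * update, > 0 = enabled reference count.
	 */
	struct fake_key {
		atomic_int enabled;
	};

	static pthread_mutex_t fake_jump_label_mutex = PTHREAD_MUTEX_INITIALIZER;

	static void expensive_update(struct fake_key *key)
	{
		/*
		 * Stands in for jump_label_update(): the one-time work that
		 * must be visible before the fast path may see a positive count.
		 */
		(void)key;
	}

	void fake_key_slow_inc(struct fake_key *key)
	{
		int v = atomic_load(&key->enabled);

		/*
		 * Fast path: only bump the count while it is already positive,
		 * i.e. after the first enabler has finished expensive_update().
		 * On failure the CAS reloads 'v', mirroring atomic_try_cmpxchg().
		 */
		while (v > 0) {
			if (atomic_compare_exchange_weak(&key->enabled, &v, v + 1))
				return;
		}

		pthread_mutex_lock(&fake_jump_label_mutex);
		v = 0;
		if (atomic_compare_exchange_strong(&key->enabled, &v, -1)) {
			/*
			 * We won the 0 -> -1 transition: do the one-time update,
			 * then publish 1 with release semantics so any thread that
			 * observes a positive count also observes the update.
			 */
			expensive_update(key);
			atomic_store_explicit(&key->enabled, 1, memory_order_release);
		} else {
			/*
			 * Someone else already enabled the key. The -1 state only
			 * exists while the mutex is held, so here the count is
			 * positive and a plain increment suffices.
			 */
			atomic_fetch_add(&key->enabled, 1);
		}
		pthread_mutex_unlock(&fake_jump_label_mutex);
	}

In this sketch the release store that publishes 1 plays the role of the
smp_mb__before_atomic() + atomic_add(2, &key->enabled) pair in the patch:
it orders the update before the count becomes visible as positive to the
fast path.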