Subject: Re: [tip:locking/urgent] locking/static_key: Fix concurrent static_key_slow_inc()
From: Paolo Bonzini <pbonzini@redhat.com>
To: Peter Zijlstra, torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, tglx@linutronix.de, mingo@kernel.org, dvyukov@google.com, hpa@zytor.com
Cc: linux-tip-commits@vger.kernel.org, Steven Rostedt
Date: Mon, 31 Jul 2017 15:04:06 +0200
Message-ID: <671056c7-5fc7-60c0-4035-b11d43d95bf1@redhat.com>
In-Reply-To: <20170731093612.7v23kxkk47i56io6@hirez.programming.kicks-ass.net>
References: <1466527937-69798-1-git-send-email-pbonzini@redhat.com>
 <20170731093612.7v23kxkk47i56io6@hirez.programming.kicks-ass.net>

On 31/07/2017 11:36, Peter Zijlstra wrote:
> On Fri, Jun 24, 2016 at 01:59:06AM -0700, tip-bot for Paolo Bonzini wrote:
>> +++ b/kernel/jump_label.c
>> @@ -58,13 +58,36 @@ static void jump_label_update(struct static_key *key);
>>  
>>  void static_key_slow_inc(struct static_key *key)
>>  {
>> +	int v, v1;
>> +
>>  	STATIC_KEY_CHECK_USE();
>> -	if (atomic_inc_not_zero(&key->enabled))
>> -		return;
>> +
>> +	/*
>> +	 * Careful if we get concurrent static_key_slow_inc() calls;
>> +	 * later calls must wait for the first one to _finish_ the
>> +	 * jump_label_update() process.  At the same time, however,
>> +	 * the jump_label_update() call below wants to see
>> +	 * static_key_enabled(&key) for jumps to be updated properly.
>> +	 *
>> +	 * So give a special meaning to negative key->enabled: it sends
>> +	 * static_key_slow_inc() down the slow path, and it is non-zero
>> +	 * so it counts as "enabled" in jump_label_update().  Note that
>> +	 * atomic_inc_unless_negative() checks >= 0, so roll our own.
>> +	 */
>> +	for (v = atomic_read(&key->enabled); v > 0; v = v1) {
>> +		v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
>> +		if (likely(v1 == v))
>> +			return;
>> +	}
>>  
>>  	jump_label_lock();
>> -	if (atomic_inc_return(&key->enabled) == 1)
>> +	if (atomic_read(&key->enabled) == 0) {
>> +		atomic_set(&key->enabled, -1);
>>  		jump_label_update(key);
>> +		atomic_set(&key->enabled, 1);
>> +	} else {
>> +		atomic_inc(&key->enabled);
>> +	}
>>  	jump_label_unlock();
>>  }
> 
> 
> So I was recently looking at this again and got all paranoid. Do we want
> something like so?

Though I agree with the paranoia sentiment, no:

- key->enabled cannot go from 0 to nonzero outside jump_label_mutex.
  For this reason the atomic_try_cmpxchg is unnecessary.

- the (implied) smp_mb before jump_label_update is not the interesting
  one; I don't think it is useful, because: 1) during the
  jump_label_update there is no correspondence between what
  static_key_enabled returns and what the text looks like; 2) what
  would it even be pairing with?

- the smp_mb (though it could be an smp_wmb or atomic_set_release) is
  what initially triggered my paranoia too.  But even then, I can't
  see why you would need it, because there is nothing it pairs with.

Rather, it's *any use of key->enabled outside jump_label_lock*
(meaning: any use of static_key_enabled and static_key_count outside
the core jump_label.c code) that should be handled with care.  And
indeed, while there aren't many, I think two of them are wrong (and
not fixed by your patch):

- include/linux/cpuset.h defines nr_cpusets, which uses
  static_key_count.  It makes no sense to call it outside cpuset_mutex,
  and indeed that's how kernel/cgroup/cpuset.c uses it (nr_cpusets <-
  generate_sched_domains <- rebuild_sched_domains_locked).  The
  function should be moved inside kernel/cgroup/cpuset.c, since the
  mutex is static (rough sketch of the move at the bottom of this
  mail).

- kernel/cgroup/cgroup.c only enables/disables at init, so using
  static_key_enabled should be fine.

- kernel/tracepoint.c only manipulates tracepoint static keys under
  tracepoints_mutex, and uses static_key_enabled under the same mutex,
  so it's fine.

- net/ipv4/udp.c and net/ipv6/udp.c want to implement a "once-only"
  increment of the static key.  It's racy, and maybe we should provide
  a new API static_key_enable_forever:

   void static_key_enable_forever(struct static_key *key)
   {
	STATIC_KEY_CHECK_USE();
	if (atomic_read(&key->enabled) > 0)
		return;

	cpus_read_lock();
	jump_label_lock();
	if (atomic_read(&key->enabled) == 0) {
		atomic_set(&key->enabled, -1);
		jump_label_update(key);
		atomic_set(&key->enabled, 1);
	}
	jump_label_unlock();
	cpus_read_unlock();
   }
   EXPORT_SYMBOL_GPL(static_key_enable_forever);
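
The udp call sites would then collapse to a single call.  From memory
(so take the exact identifiers with a grain of salt), the ipv4 side
today is a racy check-then-inc:

   void udp_encap_enable(void)
   {
	/* two callers can both see the key disabled here... */
	if (!static_key_enabled(&udp_encap_needed))
		static_key_slow_inc(&udp_encap_needed);	/* ...and both inc */
   }

and with the new API it would become simply

   void udp_encap_enable(void)
   {
	static_key_enable_forever(&udp_encap_needed);
   }

with the same treatment for the ipv6 twin.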

I can prepare a patch if you agree.

Paolo

> ---
>  kernel/jump_label.c | 24 +++++++++++++++++-------
>  1 file changed, 17 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/jump_label.c b/kernel/jump_label.c
> index d11c506a6ac3..69d07e78e48b 100644
> --- a/kernel/jump_label.c
> +++ b/kernel/jump_label.c
> @@ -103,7 +103,7 @@ EXPORT_SYMBOL_GPL(static_key_disable);
>  
>  void static_key_slow_inc(struct static_key *key)
>  {
> -	int v, v1;
> +	int v;
>  
>  	STATIC_KEY_CHECK_USE();
>  
> @@ -119,18 +119,28 @@ void static_key_slow_inc(struct static_key *key)
>  	 * so it counts as "enabled" in jump_label_update(). Note that
>  	 * atomic_inc_unless_negative() checks >= 0, so roll our own.
>  	 */
> -	for (v = atomic_read(&key->enabled); v > 0; v = v1) {
> -		v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
> -		if (likely(v1 == v))
> +	for (v = atomic_read(&key->enabled); v > 0;) {
> +		if (atomic_try_cmpxchg(&key->enabled, &v, v+1))
>  			return;
>  	}
>  
>  	cpus_read_lock();
>  	jump_label_lock();
> -	if (atomic_read(&key->enabled) == 0) {
> -		atomic_set(&key->enabled, -1);
> +
> +	if (atomic_try_cmpxchg(&key->enabled, 0, -1)) {
> +		/*
> +		 * smp_mb implied, must have -1 before proceeding to change
> +		 * text.
> +		 */
>  		jump_label_update(key);
> -		atomic_set(&key->enabled, 1);
> +
> +		/*
> +		 * smp_mb, such that we finish modifying text before enabling
> +		 * the fast path. Probably not needed because modifying text is
> +		 * likely to serialize everything. Be paranoid.
> +		 */
> +		smp_mb__before_atomic();
> +		atomic_add(2, &key->enabled); /* -1 -> 1 */
>  	} else {
>  		atomic_inc(&key->enabled);
>  	}
> 
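
P.S. To make the cpuset item above concrete, the move I have in mind
is roughly the following (untested sketch; the lockdep assertion is an
extra I would add while at it):

   /* kernel/cgroup/cpuset.c */
   static int nr_cpusets(void)
   {
	/* all current callers hold cpuset_mutex, so the count is stable */
	lockdep_assert_held(&cpuset_mutex);

	/* jump label reference count + the top-level cpuset */
	return static_key_count(&cpusets_enabled_key.key) + 1;
   }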