Date: Thu, 24 Jan 2019 15:58:59 -0800
From: Alexei Starovoitov
To: Peter Zijlstra
Cc: Alexei Starovoitov, davem@davemloft.net, daniel@iogearbox.net,
	jakub.kicinski@netronome.com, netdev@vger.kernel.org, kernel-team@fb.com,
	mingo@redhat.com, will.deacon@arm.com, Paul McKenney, jannh@google.com
Subject: Re: [PATCH v4 bpf-next 1/9] bpf: introduce bpf_spin_lock

On Thu, Jan 24, 2019 at 07:01:09PM +0100, Peter Zijlstra wrote:
> 
> Thanks for having kernel/locking people on Cc...
> 
> On Wed, Jan 23, 2019 at 08:13:55PM -0800, Alexei Starovoitov wrote:
> 
> > Implementation details:
> > - on !SMP bpf_spin_lock() becomes nop
> 
> Because no BPF program is preemptible? I don't see any assertions or
> even a comment that says this code is non-preemptible.
> 
> AFAICT some of the BPF_RUN_PROG things are under rcu_read_lock() only,
> which is not sufficient.

Nope. All bpf prog types disable preemption; that is a must-have for all
sorts of things to work properly. If there is a prog type that does
rcu_read_lock() only, it's a serious bug. About a year or so ago we audited
everything specifically to make sure preemption is disabled before calling
bpf progs. I'm pretty sure nothing has crept in in the meantime.

> > - on architectures that don't support queued_spin_lock trivial lock is used.
> >   Note that arch_spin_lock cannot be used, since not all archs agree that
> >   zero == unlocked and sizeof(arch_spinlock_t) != sizeof(__u32).
> 
> I really don't much like direct usage of qspinlock; esp. not as a
> surprise.
> 
> Why does it matter if 0 means unlocked; that's what
> __ARCH_SPIN_LOCK_UNLOCKED is for.
> 
> I get the sizeof(__u32) thing, but why not key off of that?

What do you mean by 'key off of that'? To use arch_spinlock_t instead of
qspinlock? That was my first attempt, but then I painfully found that its
size on parisc is 16 bytes, and we're not going to make bpf waste that much
space because of a single architecture. sizeof(arch_spinlock_t) can also be
1 byte (on sparc). That would fit in a __u32, but I figured it's cleaner to
use qspinlock on all archs that support it and dumb_spin_lock on the archs
that don't.

Another option is to use arch_spinlock_t when its sizeof == 4 and
dumb_spin_lock otherwise. It's doable, but imo still less clean than using
qspinlock because of zero init: maps already zero-initialize all elements,
so an all-zeroes 'unlocked' state costs nothing extra. If arch_spinlock_t
were used, then at map init time we would need to walk all elements and
assign __ARCH_SPIN_LOCK_UNLOCKED (and maps can have millions of elements).
Not horrible, but a 100% waste of cycles on x86/arm64 where qspinlock is
used. That waste could be worked around further with an ugly
"#ifdef __ARCH_SPIN_LOCK_UNLOCKED == 0 -> skip the init loop", and then yet
another #ifdef for archs with sizeof(arch_spinlock_t) != 4 to keep zero init
for all map types that support bpf_spin_lock via dumb_spin_lock. Clearly at
that point we're getting into ugliness everywhere. Hence I've used qspinlock
directly.
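
For illustration only (this is not from the patch; the struct, function
names and the FAKE_UNLOCKED value below are made up), a rough userspace
sketch of the init-cost point: when the 'unlocked' encoding is all zeroes,
a freshly zeroed map needs no extra work, while any non-zero encoding
forces an extra pass over every element at map creation time.

#include <stdlib.h>

struct fake_elem {
	unsigned int lock;	/* stand-in for the 4-byte lock word */
	char value[60];		/* rest of the map element */
};

/* unlocked == 0: calloc() already left every lock in the unlocked state */
static struct fake_elem *create_map_zero_init(size_t n)
{
	return calloc(n, sizeof(struct fake_elem));
}

/* hypothetical non-zero "unlocked" value: one extra pass over n elements */
#define FAKE_UNLOCKED	0x1badcafeU

static struct fake_elem *create_map_explicit_init(size_t n)
{
	struct fake_elem *m = calloc(n, sizeof(struct fake_elem));

	for (size_t i = 0; m && i < n; i++)
		m[i].lock = FAKE_UNLOCKED;	/* millions of extra stores */
	return m;
}

qspinlock's unlocked value is zero on every arch that supports it, so bpf
maps stay in the cheap first case.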
> > Next steps:
> > - allow bpf_spin_lock in other map types (like cgroup local storage)
> > - introduce BPF_F_LOCK flag for bpf_map_update() syscall and helper
> >   to request kernel to grab bpf_spin_lock before rewriting the value.
> >   That will serialize access to map elements.
> 
> So clearly this map stuff is shared between bpf proglets, otherwise
> there would not be a need for locking. But what happens if one is from
> task context and another from IRQ context?
> 
> I don't see a local_irq_save()/restore() anywhere. What avoids the
> trivial lock inversion?

... and from NMI. Progs are not preemptible, and the map syscall accessors
bump the bpf_prog_active counter, so nmi/kprobe progs will not be running
while the syscall is touching the map. Hence deadlock is not possible and
irq_save is not needed.

> > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > index a74972b07e74..2e98e4caf5aa 100644
> > --- a/kernel/bpf/helpers.c
> > +++ b/kernel/bpf/helpers.c
> > @@ -221,6 +221,63 @@ const struct bpf_func_proto bpf_get_current_comm_proto = {
> >  	.arg2_type	= ARG_CONST_SIZE,
> >  };
> >  
> > +#ifndef CONFIG_QUEUED_SPINLOCKS
> > +struct dumb_spin_lock {
> > +	atomic_t val;
> > +};
> > +#endif
> > +
> > +notrace BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *, lock)
> > +{
> > +#if defined(CONFIG_SMP)
> > +#ifdef CONFIG_QUEUED_SPINLOCKS
> > +	struct qspinlock *qlock = (void *)lock;
> > +
> > +	BUILD_BUG_ON(sizeof(*qlock) != sizeof(*lock));
> > +	queued_spin_lock(qlock);
> > +#else
> > +	struct dumb_spin_lock *qlock = (void *)lock;
> > +
> > +	BUILD_BUG_ON(sizeof(*qlock) != sizeof(*lock));
> > +	do {
> > +		while (atomic_read(&qlock->val) != 0)
> > +			cpu_relax();
> > +	} while (atomic_cmpxchg(&qlock->val, 0, 1) != 0);
> > +#endif
> > +#endif
> > +	return 0;
> > +}
> > +
> > +const struct bpf_func_proto bpf_spin_lock_proto = {
> > +	.func		= bpf_spin_lock,
> > +	.gpl_only	= false,
> > +	.ret_type	= RET_VOID,
> > +	.arg1_type	= ARG_PTR_TO_SPIN_LOCK,
> > +};
> > +
> > +notrace BPF_CALL_1(bpf_spin_unlock, struct bpf_spin_lock *, lock)
> > +{
> > +#if defined(CONFIG_SMP)
> > +#ifdef CONFIG_QUEUED_SPINLOCKS
> > +	struct qspinlock *qlock = (void *)lock;
> > +
> > +	queued_spin_unlock(qlock);
> > +#else
> > +	struct dumb_spin_lock *qlock = (void *)lock;
> > +
> > +	atomic_set(&qlock->val, 0);
> 
> And this is broken... That should've been atomic_set_release() at the
> very least.

Right. Good catch.

> And this would again be the moment where I go pester you about the BPF
> memory model :-)

hehe :)
How do you propose to define it in a way that applies to all archs and
yet doesn't penalize x86? "Assume the weakest execution ordering model"
the way the kernel does is unfortunately not usable, since bpf doesn't
have the luxury of nice #defines that compile into nops on x86.
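
To make the ordering point concrete, here is a small userspace C11 sketch
of the 'dumb' test-and-set lock with the release-store unlock being asked
for. It illustrates the fix under discussion with C11 atomics; it is not
the kernel code, and sched_yield() merely stands in for cpu_relax().

#include <stdatomic.h>
#include <sched.h>

struct dumb_lock {
	atomic_int val;		/* 0 == unlocked, 1 == locked */
};

static void dumb_lock_acquire(struct dumb_lock *l)
{
	int expected;

	do {
		/* spin read-only until the lock looks free */
		while (atomic_load_explicit(&l->val, memory_order_relaxed))
			sched_yield();
		expected = 0;
		/* acquire ordering keeps the critical section from moving up */
	} while (!atomic_compare_exchange_weak_explicit(&l->val, &expected, 1,
							memory_order_acquire,
							memory_order_relaxed));
}

static void dumb_lock_release(struct dumb_lock *l)
{
	/*
	 * A relaxed store (the plain atomic_set() above) would let stores
	 * from the critical section drift past the unlock; a release store
	 * is the minimum, which is what atomic_set_release() provides.
	 */
	atomic_store_explicit(&l->val, 0, memory_order_release);
}

On x86 the release store compiles to a plain MOV, so the fix costs nothing
there; on weakly ordered archs it emits whatever barrier or store-release
instruction the architecture requires.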