Date: Fri, 25 Jan 2019 14:51:12 -0800
From: "Paul E. McKenney" <paulmck@linux.ibm.com>
To: Jann Horn
Cc: Alexei Starovoitov, Peter Zijlstra, Alexei Starovoitov, "David S.
Miller" , Daniel Borkmann , jakub.kicinski@netronome.com, Network Development , kernel-team@fb.com, Ingo Molnar , Will Deacon Subject: Re: [PATCH v4 bpf-next 1/9] bpf: introduce bpf_spin_lock Reply-To: paulmck@linux.ibm.com References: <20190124041403.2100609-1-ast@kernel.org> <20190124041403.2100609-2-ast@kernel.org> <20190124180109.GA27771@hirez.programming.kicks-ass.net> <20190124185652.GB17767@hirez.programming.kicks-ass.net> <20190124234232.GY4240@linux.ibm.com> <20190125000515.jizijxz4n735gclx@ast-mbp.dhcp.thefacebook.com> <20190125012224.GZ4240@linux.ibm.com> <20190125041152.GA4240@linux.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) X-TM-AS-GCONF: 00 x-cbid: 19012523-0068-0000-0000-00000388D007 X-IBM-SpamModules-Scores: X-IBM-SpamModules-Versions: BY=3.00010477; HX=3.00000242; KW=3.00000007; PH=3.00000004; SC=3.00000277; SDB=6.01151770; UDB=6.00600335; IPR=6.00932114; MB=3.00025291; MTD=3.00000008; XFM=3.00000015; UTC=2019-01-25 23:25:09 X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 19012523-0069-0000-0000-00004744F2FC Message-Id: <20190125225112.GF4240@linux.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:,, definitions=2019-01-25_15:,, signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 malwarescore=0 suspectscore=0 phishscore=0 bulkscore=0 spamscore=0 clxscore=1015 lowpriorityscore=0 mlxscore=0 impostorscore=0 mlxlogscore=605 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1810050000 definitions=main-1901250177 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org On Fri, Jan 25, 2019 at 05:18:12PM +0100, Jann Horn wrote: > On Fri, Jan 25, 2019 at 5:12 AM Paul E. McKenney wrote: > > On Fri, Jan 25, 2019 at 02:46:55AM +0100, Jann Horn wrote: > > > On Fri, Jan 25, 2019 at 2:22 AM Paul E. McKenney wrote: > > > > On Thu, Jan 24, 2019 at 04:05:16PM -0800, Alexei Starovoitov wrote: > > > > > On Thu, Jan 24, 2019 at 03:42:32PM -0800, Paul E. McKenney wrote: > > > > > > On Thu, Jan 24, 2019 at 07:56:52PM +0100, Peter Zijlstra wrote: > > > > > > > On Thu, Jan 24, 2019 at 07:01:09PM +0100, Peter Zijlstra wrote: > > > > > > > > > > > > > > > > Thanks for having kernel/locking people on Cc... > > > > > > > > > > > > > > > > On Wed, Jan 23, 2019 at 08:13:55PM -0800, Alexei Starovoitov wrote: > > > > > > > > > > > > > > > > > Implementation details: > > > > > > > > > - on !SMP bpf_spin_lock() becomes nop > > > > > > > > > > > > > > > > Because no BPF program is preemptible? I don't see any assertions or > > > > > > > > even a comment that says this code is non-preemptible. > > > > > > > > > > > > > > > > AFAICT some of the BPF_RUN_PROG things are under rcu_read_lock() only, > > > > > > > > which is not sufficient. > > > > > > > > > > > > > > > > > - on architectures that don't support queued_spin_lock trivial lock is used. > > > > > > > > > Note that arch_spin_lock cannot be used, since not all archs agree that > > > > > > > > > zero == unlocked and sizeof(arch_spinlock_t) != sizeof(__u32). > > > > > > > > > > > > > > > > I really don't much like direct usage of qspinlock; esp. not as a > > > > > > > > surprise. 
> > > > > >
> > > > > > Substituting the lightweight-reader SRCU as discussed earlier would allow
> > > > > > use of a more generic locking primitive, for example, one that allowed
> > > > > > blocking, at least in cases where the context allowed this.
> > > > > >
> > > > > > git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
> > > > > > branch srcu-lr.2019.01.16a.
> > > > > >
> > > > > > One advantage of a more generic locking primitive would be keeping BPF
> > > > > > programs independent of internal changes to spinlock primitives.
> > > > >
> > > > > Let's keep "srcu in bpf" discussion separate from bpf_spin_lock discussion.
> > > > > bpf is not switching to srcu any time soon.
> > > > > If/when it happens it will be only for certain prog+map types
> > > > > like bpf syscall probes that need to be able to do copy_from_user
> > > > > from bpf prog.
> > > >
> > > > Hmmm...  What prevents BPF programs from looping infinitely within an
> > > > RCU reader, and, as you noted, with preemption disabled?
> > > >
> > > > If BPF programs are in fact allowed to loop infinitely, it would be
> > > > very good for the health of the kernel to have preemption enabled.
> > > > And to be within an SRCU read-side critical section instead of an RCU
> > > > read-side critical section.
> > >
> > > The BPF verifier prevents loops; this is in push_insn() in
> > > kernel/bpf/verifier.c, which errors out with -EINVAL when a back edge
> > > is encountered.  For non-root programs, that limits the maximum number
> > > of instructions per eBPF engine execution to
> > > BPF_MAXINSNS*MAX_TAIL_CALL_CNT==4096*32==131072 (but that includes
> > > call instructions, which can cause relatively expensive operations
> > > like hash table lookups).  For programs created with CAP_SYS_ADMIN,
> > > things get more tricky because you can create your own functions and
> > > call them repeatedly; I'm not sure whether the pessimal runtime there
> > > becomes exponential, or whether there is some check that catches this.
> >
> > Whew!!!  ;-)
> >
> > So no more than (say) 100 milliseconds?
>
> Depends on RLIMIT_MEMLOCK and on how hard userspace is trying to make
> things slow, I guess - if userspace manages to create a hashtable a few
> dozen megabytes in size with worst-case assignment of elements to
> buckets (everything in a single bucket), every lookup in that bucket
> becomes a linked-list traversal through a list that must be fetched
> from main memory because it is too big for the CPU caches.  I don't
> know into how much time that translates.

So perhaps you have a candidate BPF program for the RCU CPU stall
warning challenge, then.  ;-)

							Thanx, Paul
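
As an aside, for anyone who wants to see the shape of the loop check
Jann describes: conceptually it is a depth-first walk of the
instruction control-flow graph that rejects any back edge.  The sketch
below is a hypothetical, heavily simplified rendering of that idea;
the real code is push_insn() in kernel/bpf/verifier.c, which is
iterative rather than recursive, and none of the names below are taken
from it.

/* Hypothetical simplification of back-edge rejection; not verifier code. */
#include <errno.h>

enum dfs_state { UNVISITED = 0, ON_STACK, EXPLORED };

struct cfg {
	int nr_insns;
	int (*successors)[2];	/* up to two successor indices, -1 if absent */
};

static int reject_loops(const struct cfg *cfg, int insn,
			enum dfs_state *state)
{
	int i;

	state[insn] = ON_STACK;
	for (i = 0; i < 2; i++) {
		int next = cfg->successors[insn][i];

		if (next < 0)
			continue;
		if (state[next] == ON_STACK)
			return -EINVAL;	/* back edge: program contains a loop */
		if (state[next] == UNVISITED &&
		    reject_loops(cfg, next, state))
			return -EINVAL;
	}
	state[insn] = EXPLORED;
	return 0;
}

Rejecting back edges is what guarantees that a single program body runs
each of its instructions at most once per invocation, which is why the
BPF_MAXINSNS*MAX_TAIL_CALL_CNT bound quoted above is a true worst case
for non-root programs.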
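
And to make the worst-case-hashtable point concrete: once every element
has been steered into a single bucket, each lookup degrades into the
linear list walk sketched below.  Again, this is a hypothetical
simplification rather than the kernel's actual hash table code.

/* Illustrative only: worst-case bucket, all N elements in one list. */
struct elem {
	struct elem *next;
	unsigned long key;
	/* ... value ... */
};

static struct elem *bucket_lookup(struct elem *head, unsigned long key)
{
	struct elem *e;

	/* O(N) pointer chase; for a multi-megabyte table nearly every
	 * step misses the CPU caches and stalls on main memory.
	 */
	for (e = head; e; e = e->next)
		if (e->key == key)
			return e;
	return NULL;
}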