Date: Fri, 25 Jan 2019 11:09:06 +0100
From: Peter Zijlstra
To: Alexei Starovoitov
Cc: Alexei Starovoitov, davem@davemloft.net, daniel@iogearbox.net,
    jakub.kicinski@netronome.com, netdev@vger.kernel.org, kernel-team@fb.com,
    mingo@redhat.com, will.deacon@arm.com, Paul McKenney, jannh@google.com
Subject: Re: [PATCH v4 bpf-next 1/9] bpf: introduce bpf_spin_lock
Message-ID: <20190125100906.GB4500@hirez.programming.kicks-ass.net>
References: <20190124041403.2100609-1-ast@kernel.org>
 <20190124041403.2100609-2-ast@kernel.org>
 <20190124180109.GA27771@hirez.programming.kicks-ass.net>
 <20190124235857.xyb5xx2ufr6x5mbt@ast-mbp.dhcp.thefacebook.com>
In-Reply-To: <20190124235857.xyb5xx2ufr6x5mbt@ast-mbp.dhcp.thefacebook.com>

On Thu, Jan 24, 2019 at 03:58:59PM -0800, Alexei Starovoitov wrote:
> On Thu, Jan 24, 2019 at 07:01:09PM +0100, Peter Zijlstra wrote:
> > So clearly this map stuff is shared between bpf proglets, otherwise
> > there would not be a need for locking. But what happens if one is from
> > task context and another from IRQ context?
> >
> > I don't see a local_irq_save()/restore() anywhere. What avoids the
> > trivial lock inversion?
> >
> and from NMI ...
>
> progs are not preemptable and map syscall accessors have bpf_prog_active
> counters. So nmi/kprobe progs will not be running when syscall is running.
> Hence dead lock is not possible and irq_save is not needed.

What about the progs that run from SoftIRQ?

Since that bpf_prog_active thing isn't inside BPF_PROG_RUN() what is to
stop, say:

	reuseport_select_sock()
	  ...
	    BPF_PROG_RUN()
	      bpf_spin_lock()
	      ...
	      BPF_PROG_RUN()
	        bpf_spin_lock() // forever more

Unless you stick that bpf_prog_active stuff inside BPF_PROG_RUN() itself,
I don't see how you can fundamentally avoid this happening (now or in the
future).