From: Jann Horn
Date: Sat, 26 Jan 2019 01:59:12 +0100
Subject: Re: [PATCH v4 bpf-next 1/9] bpf: introduce bpf_spin_lock
To: Alexei Starovoitov
Cc: "Paul E. McKenney", Peter Zijlstra, Alexei Starovoitov,
 "David S. Miller", Daniel Borkmann, jakub.kicinski@netronome.com,
 Network Development, kernel-team@fb.com, Ingo Molnar, Will Deacon
References: <20190124041403.2100609-2-ast@kernel.org>
 <20190124180109.GA27771@hirez.programming.kicks-ass.net>
 <20190124185652.GB17767@hirez.programming.kicks-ass.net>
 <20190124234232.GY4240@linux.ibm.com>
 <20190125000515.jizijxz4n735gclx@ast-mbp.dhcp.thefacebook.com>
 <20190125012224.GZ4240@linux.ibm.com>
 <20190125041152.GA4240@linux.ibm.com>
 <20190125225112.GF4240@linux.ibm.com>
 <20190125234403.iisj5woztm4afwgh@ast-mbp.dhcp.thefacebook.com>
X-Mailing-List: netdev@vger.kernel.org

On Sat, Jan 26, 2019 at 1:43 AM Jann Horn wrote:
> On Sat, Jan 26, 2019 at 12:44 AM Alexei Starovoitov
> wrote:
> >
> > On Fri, Jan 25, 2019 at 02:51:12PM -0800, Paul E. McKenney wrote:
> > > >
> > > > > So no more than (say) 100 milliseconds?
> > > > Depends on RLIMIT_MEMLOCK and on how hard userspace is trying to make
> > > > things slow, I guess - if userspace manages to create a hashtable,
> > > > with a few dozen megabytes in size, with worst-case assignment of
> > > > elements to buckets (everything in a single bucket), every lookup call
> > > > on that bucket becomes a linked list traversal through a list that
> > > > must be stored in main memory because it's too big for the CPU caches.
> > > > I don't know into how much time that translates.
> > > So perhaps you have a candidate BPF program for the RCU CPU stall warning
> > > challenge, then. ;-)
> > I'd like to see one that can defeat jhash + random seed.
> Assuming that the map isn't created by root with BPF_F_ZERO_SEED:
> The dumb approach would be to put things into the map, try to measure
> via timing/sidechannel whether you got collisions, and then keep
> trying different keys, and keep them if the timing indicates a
> collision. That'd probably be pretty slow and annoying though. Two
> years ago, I implemented something similar to leak information about
> virtual addresses from Firefox by measuring hash bucket collisions
> from JavaScript (but to be fair, it was easier there because you can
> resize the hash table):
> https://thejh.net/misc/firefox-cve-2016-9904-and-cve-2017-5378-bugreport
>
> But I think there's an easier way, too: The jhash seed is just 32
> bits, and AFAICS the BPF API leaks information about that seed through
> BPF_MAP_GET_NEXT_KEY: Stuff two random keys into the hash table, run
> BPF_MAP_GET_NEXT_KEY with attr->key==NULL, and see which key is
> returned. Do that around 32 times, and you should have roughly enough
> information to bruteforce the jhash seed? Recovering the seed should
> then be relatively quick, 2^32 iterations of a fast hash don't take
> terribly long.
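To make the seed-recovery idea concrete, here is a toy model - to be clear,
everything in it is a stand-in, not kernel code: a simple 32-bit mixer instead
of jhash, a 16-bit seed instead of 32 bits so the brute force finishes
instantly, and BPF_MAP_GET_NEXT_KEY(NULL) modeled as "head of the first
non-empty bucket, with head insertion on collision". The point is only the
shape of the attack: each two-key observation leaks roughly one bit, and ~32
observations pin down the seed offline.

```python
import random

N_BUCKETS = 64
SEED_BITS = 16  # toy size; the real jhash seed is 32 bits

def toy_hash(key, seed):
    # Stand-in mixer, NOT jhash.
    x = (key ^ seed) * 2654435761 & 0xFFFFFFFF
    return x ^ (x >> 16)

def bucket(key, seed):
    return toy_hash(key, seed) % N_BUCKETS

def first_key(seed, k1, k2):
    # Model of GET_NEXT_KEY(NULL) with only k1 then k2 inserted:
    # the key in the lower-numbered bucket is scanned first; on a
    # shared bucket, head insertion makes the later insert (k2) win.
    b1, b2 = bucket(k1, seed), bucket(k2, seed)
    if b1 == b2:
        return k2
    return k1 if b1 < b2 else k2

rng = random.Random(7)
secret = rng.getrandbits(SEED_BITS)

# "Online" phase: ~32 two-key observations, ~1 bit of seed info each.
observations = []
for _ in range(32):
    k1, k2 = rng.getrandbits(32), rng.getrandbits(32)
    observations.append((k1, k2, first_key(secret, k1, k2)))

# "Offline" phase: brute-force every seed and keep the consistent ones.
candidates = [s for s in range(1 << SEED_BITS)
              if all(first_key(s, k1, k2) == f for k1, k2, f in observations)]
print("candidate seeds remaining:", len(candidates),
      "- secret among them:", secret in candidates)
```

With 32 observations against a 16-bit seed the candidate set is almost always
a single seed; against the real 32-bit seed the same consistency check over
2^32 candidates is exactly the "2^32 iterations of a fast hash" cost above.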
>
> That said, I don't think this is interesting enough to spend the time
> necessary to implement it. :P

Oh, and actually, you can probably also detect a collision in a simpler
way:

- insert A
- insert B
- query BPF_MAP_GET_NEXT_KEY
- delete A
- delete B
- insert B
- insert A
- query BPF_MAP_GET_NEXT_KEY
- delete A
- delete B

If the two BPF_MAP_GET_NEXT_KEY queries return the same result, A and B
are in different buckets; if they return different results, A and B are
in the same bucket, I think?
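A quick sanity check of that recipe against a toy model of the map (again,
all stand-ins rather than kernel code: a made-up hash, head insertion like
hlist_add_head, and GET_NEXT_KEY(NULL) modeled as scanning buckets in index
order). In the model the recipe is exact: the two orderings return the same
first key iff A and B landed in different buckets.

```python
import random

class ToyHtab:
    # Toy stand-in for the kernel hash map: fixed bucket array, new
    # entries go to the head of their bucket's list, and first_key()
    # models GET_NEXT_KEY(NULL) as "first entry scanning buckets in
    # index order". The hash below is a stand-in, not jhash.
    def __init__(self, n_buckets, seed):
        self.n, self.seed = n_buckets, seed
        self.buckets = [[] for _ in range(n_buckets)]

    def _idx(self, key):
        x = (key ^ self.seed) * 2654435761 & 0xFFFFFFFF
        return (x ^ (x >> 16)) % self.n

    def insert(self, key):
        self.buckets[self._idx(key)].insert(0, key)  # head insertion

    def delete(self, key):
        self.buckets[self._idx(key)].remove(key)

    def first_key(self):
        return next((b[0] for b in self.buckets if b), None)

def same_bucket_probe(m, a, b):
    # The recipe from above: insert in both orders, compare the first
    # key returned each time; differing answers mean a shared bucket
    # (head insertion flips which key is at the head of that bucket).
    m.insert(a); m.insert(b)
    first1 = m.first_key()
    m.delete(a); m.delete(b)
    m.insert(b); m.insert(a)
    first2 = m.first_key()
    m.delete(a); m.delete(b)
    return first1 != first2

rng = random.Random(0)
hits = 0
for _ in range(1000):
    m = ToyHtab(64, rng.getrandbits(32))
    a = rng.getrandbits(32)
    b = rng.getrandbits(32)
    while b == a:
        b = rng.getrandbits(32)
    truth = m._idx(a) == m._idx(b)
    assert same_bucket_probe(m, a, b) == truth
    hits += truth
print("probe agreed with ground truth on 1000 trials;",
      "same-bucket cases seen:", hits)
```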