From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 21 Jun 2019 16:16:45 -0700
In-Reply-To: <20190621231650.32073-1-brianvv@google.com>
Message-Id: <20190621231650.32073-2-brianvv@google.com>
References: <20190621231650.32073-1-brianvv@google.com>
Subject: [RFC PATCH 1/6] bpf: add bpf_map_value_size and bpf_map_copy_value helper functions
From: Brian Vazquez
To: Brian Vazquez, Alexei Starovoitov, Daniel Borkmann, "David S. Miller"
Miller" Cc: Stanislav Fomichev , Willem de Bruijn , Petar Penkov , linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, Brian Vazquez Content-Type: text/plain; charset="UTF-8" Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Move reusable code from map_lookup_elem to helper functions to avoid code duplication in kernel/bpf/syscall.c Suggested-by: Stanislav Fomichev Signed-off-by: Brian Vazquez --- kernel/bpf/syscall.c | 134 +++++++++++++++++++++++-------------------- 1 file changed, 73 insertions(+), 61 deletions(-) diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 7713cf39795a4..a1823a50f9be0 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -126,6 +126,76 @@ static struct bpf_map *find_and_alloc_map(union bpf_attr *attr) return map; } +static u32 bpf_map_value_size(struct bpf_map *map) +{ + if (map->map_type == BPF_MAP_TYPE_PERCPU_HASH || + map->map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH || + map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY || + map->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE) + return round_up(map->value_size, 8) * num_possible_cpus(); + else if (IS_FD_MAP(map)) + return sizeof(u32); + else + return map->value_size; +} + +static int bpf_map_copy_value(struct bpf_map *map, void *key, void *value, + __u64 flags) +{ + void *ptr; + int err; + + if (bpf_map_is_dev_bound(map)) + return bpf_map_offload_lookup_elem(map, key, value); + + preempt_disable(); + this_cpu_inc(bpf_prog_active); + if (map->map_type == BPF_MAP_TYPE_PERCPU_HASH || + map->map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH) { + err = bpf_percpu_hash_copy(map, key, value); + } else if (map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY) { + err = bpf_percpu_array_copy(map, key, value); + } else if (map->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE) { + err = bpf_percpu_cgroup_storage_copy(map, key, value); + } else if (map->map_type == BPF_MAP_TYPE_STACK_TRACE) { + err = bpf_stackmap_copy(map, key, value); + } else if (IS_FD_ARRAY(map)) { + err = bpf_fd_array_map_lookup_elem(map, key, value); + } else if (IS_FD_HASH(map)) { + err = bpf_fd_htab_map_lookup_elem(map, key, value); + } else if (map->map_type == BPF_MAP_TYPE_REUSEPORT_SOCKARRAY) { + err = bpf_fd_reuseport_array_lookup_elem(map, key, value); + } else if (map->map_type == BPF_MAP_TYPE_QUEUE || + map->map_type == BPF_MAP_TYPE_STACK) { + err = map->ops->map_peek_elem(map, value); + } else { + rcu_read_lock(); + if (map->ops->map_lookup_elem_sys_only) + ptr = map->ops->map_lookup_elem_sys_only(map, key); + else + ptr = map->ops->map_lookup_elem(map, key); + if (IS_ERR(ptr)) { + err = PTR_ERR(ptr); + } else if (!ptr) { + err = -ENOENT; + } else { + err = 0; + if (flags & BPF_F_LOCK) + /* lock 'ptr' and copy everything but lock */ + copy_map_value_locked(map, value, ptr, true); + else + copy_map_value(map, value, ptr); + /* mask lock, since value wasn't zero inited */ + check_and_init_map_lock(map, value); + } + rcu_read_unlock(); + } + this_cpu_dec(bpf_prog_active); + preempt_enable(); + + return err; +} + void *bpf_map_area_alloc(size_t size, int numa_node) { /* We really just want to fail instead of triggering OOM killer @@ -729,7 +799,7 @@ static int map_lookup_elem(union bpf_attr *attr) void __user *uvalue = u64_to_user_ptr(attr->value); int ufd = attr->map_fd; struct bpf_map *map; - void *key, *value, *ptr; + void *key, *value; u32 value_size; struct fd f; int err; @@ -761,72 +831,14 @@ static int map_lookup_elem(union bpf_attr *attr) goto err_put; } - if 
-	if (map->map_type == BPF_MAP_TYPE_PERCPU_HASH ||
-	    map->map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH ||
-	    map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY ||
-	    map->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE)
-		value_size = round_up(map->value_size, 8) * num_possible_cpus();
-	else if (IS_FD_MAP(map))
-		value_size = sizeof(u32);
-	else
-		value_size = map->value_size;
+	value_size = bpf_map_value_size(map);
 
 	err = -ENOMEM;
 	value = kmalloc(value_size, GFP_USER | __GFP_NOWARN);
 	if (!value)
 		goto free_key;
 
-	if (bpf_map_is_dev_bound(map)) {
-		err = bpf_map_offload_lookup_elem(map, key, value);
-		goto done;
-	}
-
-	preempt_disable();
-	this_cpu_inc(bpf_prog_active);
-	if (map->map_type == BPF_MAP_TYPE_PERCPU_HASH ||
-	    map->map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH) {
-		err = bpf_percpu_hash_copy(map, key, value);
-	} else if (map->map_type == BPF_MAP_TYPE_PERCPU_ARRAY) {
-		err = bpf_percpu_array_copy(map, key, value);
-	} else if (map->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE) {
-		err = bpf_percpu_cgroup_storage_copy(map, key, value);
-	} else if (map->map_type == BPF_MAP_TYPE_STACK_TRACE) {
-		err = bpf_stackmap_copy(map, key, value);
-	} else if (IS_FD_ARRAY(map)) {
-		err = bpf_fd_array_map_lookup_elem(map, key, value);
-	} else if (IS_FD_HASH(map)) {
-		err = bpf_fd_htab_map_lookup_elem(map, key, value);
-	} else if (map->map_type == BPF_MAP_TYPE_REUSEPORT_SOCKARRAY) {
-		err = bpf_fd_reuseport_array_lookup_elem(map, key, value);
-	} else if (map->map_type == BPF_MAP_TYPE_QUEUE ||
-		   map->map_type == BPF_MAP_TYPE_STACK) {
-		err = map->ops->map_peek_elem(map, value);
-	} else {
-		rcu_read_lock();
-		if (map->ops->map_lookup_elem_sys_only)
-			ptr = map->ops->map_lookup_elem_sys_only(map, key);
-		else
-			ptr = map->ops->map_lookup_elem(map, key);
-		if (IS_ERR(ptr)) {
-			err = PTR_ERR(ptr);
-		} else if (!ptr) {
-			err = -ENOENT;
-		} else {
-			err = 0;
-			if (attr->flags & BPF_F_LOCK)
-				/* lock 'ptr' and copy everything but lock */
-				copy_map_value_locked(map, value, ptr, true);
-			else
-				copy_map_value(map, value, ptr);
-			/* mask lock, since value wasn't zero inited */
-			check_and_init_map_lock(map, value);
-		}
-		rcu_read_unlock();
-	}
-	this_cpu_dec(bpf_prog_active);
-	preempt_enable();
-
-done:
+	err = bpf_map_copy_value(map, key, value, attr->flags);
 	if (err)
 		goto free_value;
 
-- 
2.22.0.410.gd8fdbe21b5-goog
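
As context for reviewers: map_lookup_elem() is the kernel entry point for the
BPF_MAP_LOOKUP_ELEM command of the bpf(2) syscall, so the new helpers sit
directly on that path. Below is a minimal userspace sketch of exercising it.
This is not part of the patch; the map_lookup() wrapper name is hypothetical,
and map_fd is assumed to come from an earlier BPF_MAP_CREATE or bpf_obj_get().

	#include <linux/bpf.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static __u64 ptr_to_u64(const void *ptr)
	{
		return (__u64)(unsigned long)ptr;
	}

	/* Hypothetical wrapper around bpf(BPF_MAP_LOOKUP_ELEM): after this
	 * patch the kernel sizes the copy with bpf_map_value_size() and
	 * fills 'value' via bpf_map_copy_value().
	 */
	static int map_lookup(int map_fd, const void *key, void *value)
	{
		union bpf_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.map_fd = map_fd;
		attr.key = ptr_to_u64(key);
		attr.value = ptr_to_u64(value);

		return syscall(__NR_bpf, BPF_MAP_LOOKUP_ELEM, &attr,
			       sizeof(attr));
	}

Note that for the per-CPU map types, 'value' must hold
round_up(value_size, 8) * num_possible_cpus() bytes, matching what
bpf_map_value_size() computes above.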