From: Yafang Shao <laoar.shao@gmail.com>
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, kafai@fb.com,
	songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com,
	kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org,
	hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev,
	shakeelb@google.com, songmuchun@bytedance.com, akpm@linux-foundation.org,
	tj@kernel.org, lizefan.x@bytedance.com
Cc: cgroups@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
	linux-mm@kvack.org, Yafang Shao <laoar.shao@gmail.com>
Subject: [RFC PATCH bpf-next 06/10] bpf: Introduce new helpers bpf_ringbuf_pages_{alloc,free}
Date: Wed, 21 Sep 2022 16:59:58 +0000
Message-Id: <20220921170002.29557-7-laoar.shao@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220921170002.29557-1-laoar.shao@gmail.com>
References: <20220921170002.29557-1-laoar.shao@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Factor the allocation of the pages-related memory out into a new helper,
bpf_ringbuf_pages_alloc(), so that it can be handled as a single unit.
A matching helper, bpf_ringbuf_pages_free(), releases the pages and the
page array in one call.
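For illustration only (not part of the diff below), a rough sketch of how
the pair of helpers is intended to be used by a caller; "nr_meta" and
"nr_data" are placeholder names here, and "flags" stands for whatever gfp
mask the caller already passes to the page allocator:

	struct page **pages;

	/* One call allocates the page array plus all meta/data pages,
	 * charged to the map's memcg via bpf_map_get_memcg() /
	 * set_active_memcg() inside the helper.
	 */
	pages = bpf_ringbuf_pages_alloc(map, nr_meta, nr_data,
					numa_node, flags);
	if (!pages)
		return NULL;

	/* ... vmap() the pages and use the ring buffer ... */

	/* One call frees every page and the page array itself. */
	bpf_ringbuf_pages_free(pages, nr_meta + nr_data);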
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/bpf/ringbuf.c | 80 ++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 56 insertions(+), 24 deletions(-)

diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index 5eb7820..1e7284c 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -59,6 +59,57 @@ struct bpf_ringbuf_hdr {
 	u32 pg_off;
 };
 
+static void bpf_ringbuf_pages_free(struct page **pages, int nr_pages)
+{
+	int i;
+
+	for (i = 0; i < nr_pages; i++)
+		__free_page(pages[i]);
+	bpf_map_area_free(pages, NULL);
+}
+
+static struct page **bpf_ringbuf_pages_alloc(struct bpf_map *map,
+					     int nr_meta_pages,
+					     int nr_data_pages,
+					     int numa_node,
+					     const gfp_t flags)
+{
+	int nr_pages = nr_meta_pages + nr_data_pages;
+	struct mem_cgroup *memcg, *old_memcg;
+	struct page **pages, *page;
+	int array_size;
+	int i;
+
+	memcg = bpf_map_get_memcg(map);
+	old_memcg = set_active_memcg(memcg);
+	array_size = (nr_meta_pages + 2 * nr_data_pages) * sizeof(*pages);
+	pages = bpf_map_area_alloc(array_size, numa_node, NULL);
+	if (!pages)
+		goto err;
+
+	for (i = 0; i < nr_pages; i++) {
+		page = alloc_pages_node(numa_node, flags, 0);
+		if (!page) {
+			nr_pages = i;
+			goto err_free_pages;
+		}
+		pages[i] = page;
+		if (i >= nr_meta_pages)
+			pages[nr_data_pages + i] = page;
+	}
+	set_active_memcg(old_memcg);
+	bpf_map_put_memcg(memcg);
+
+	return pages;
+
+err_free_pages:
+	bpf_ringbuf_pages_free(pages, nr_pages);
+err:
+	set_active_memcg(old_memcg);
+	bpf_map_put_memcg(memcg);
+	return NULL;
+}
+
 static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node,
 						  struct bpf_map *map)
 {
@@ -67,10 +118,8 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node,
 	int nr_meta_pages = RINGBUF_PGOFF + RINGBUF_POS_PAGES;
 	int nr_data_pages = data_sz >> PAGE_SHIFT;
 	int nr_pages = nr_meta_pages + nr_data_pages;
-	struct page **pages, *page;
 	struct bpf_ringbuf *rb;
-	size_t array_size;
-	int i;
+	struct page **pages;
 
 	/* Each data page is mapped twice to allow "virtual"
 	 * continuous read of samples wrapping around the end of ring
@@ -89,22 +138,11 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node,
 	 * when mmap()'ed in user-space, simplifying both kernel and
 	 * user-space implementations significantly.
 	 */
-	array_size = (nr_meta_pages + 2 * nr_data_pages) * sizeof(*pages);
-	pages = bpf_map_area_alloc(array_size, numa_node, map);
+	pages = bpf_ringbuf_pages_alloc(map, nr_meta_pages, nr_data_pages,
+					numa_node, flags);
 	if (!pages)
 		return NULL;
 
-	for (i = 0; i < nr_pages; i++) {
-		page = alloc_pages_node(numa_node, flags, 0);
-		if (!page) {
-			nr_pages = i;
-			goto err_free_pages;
-		}
-		pages[i] = page;
-		if (i >= nr_meta_pages)
-			pages[nr_data_pages + i] = page;
-	}
-
 	rb = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
 		  VM_MAP | VM_USERMAP, PAGE_KERNEL);
 	if (rb) {
@@ -114,10 +152,6 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node,
 		return rb;
 	}
 
-err_free_pages:
-	for (i = 0; i < nr_pages; i++)
-		__free_page(pages[i]);
-	bpf_map_area_free(pages, NULL);
 	return NULL;
 }
 
@@ -188,12 +222,10 @@ static void bpf_ringbuf_free(struct bpf_ringbuf *rb)
 	 * to unmap rb itself with vunmap() below
 	 */
 	struct page **pages = rb->pages;
-	int i, nr_pages = rb->nr_pages;
+	int nr_pages = rb->nr_pages;
 
 	vunmap(rb);
-	for (i = 0; i < nr_pages; i++)
-		__free_page(pages[i]);
-	bpf_map_area_free(pages, NULL);
+	bpf_ringbuf_pages_free(pages, nr_pages);
 }
 
 static void ringbuf_map_free(struct bpf_map *map)
-- 
1.8.3.1