From: John Fastabend <john.fastabend@gmail.com>
To: Yafang Shao <laoar.shao@gmail.com>,
	ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
	kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
	haoluo@google.com, jolsa@kernel.org, tj@kernel.org,
	dennis@kernel.org, cl@linux.com, akpm@linux-foundation.org,
	penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com,
	roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, vbabka@suse.cz,
	urezki@gmail.com
Cc: linux-mm@kvack.org, bpf@vger.kernel.org,
	Yafang Shao <laoar.shao@gmail.com>
Subject: RE: [PATCH bpf-next 0/7] bpf, mm: bpf memory usage
Date: Fri, 03 Feb 2023 18:15:53 -0800	[thread overview]
Message-ID: <63ddbfd9ae610_6bb1520861@john.notmuch> (raw)
In-Reply-To: <20230202014158.19616-1-laoar.shao@gmail.com>

Yafang Shao wrote:
> Currently we can't get bpf memory usage reliably. bpftool now shows the
> bpf memory footprint, which differs from the actual bpf memory usage. In
> some cases the difference between the footprint shown in bpftool and the
> memory actually allocated by bpf can be quite large, for example,
> 
> - non-preallocated bpf map
>   The memory usage of a non-preallocated bpf map changes dynamically. The
>   number of allocated elements can range from 0 to the max entries, but
>   the memory footprint in bpftool only shows a fixed number.
> - bpf metadata consumes more memory than bpf elements
>   In some corner cases, the bpf metadata can consume far more memory
>   than the bpf elements themselves. For example, this can happen when the
>   element size is quite small.

Just following up briefly on my previous comment.

The metadata should be fixed and knowable, correct? What I'm getting at
is whether this can be calculated directly instead of through a BPF
helper that walks the entire map.
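To illustrate the idea, here is a back-of-the-envelope sketch. The
struct and the overhead constants below are hypothetical, not the actual
kernel htab_elem/bucket layout; the point is only that metadata cost is
a function of the map parameters alone:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical map parameters, as one might read from bpftool map show. */
struct map_params {
	size_t key_size;
	size_t value_size;
	size_t max_entries;
	size_t n_buckets;
};

/* Illustrative fixed overheads; the real values depend on struct
 * htab_elem, the bucket layout, and allocator rounding in the kernel. */
#define PER_ELEM_OVERHEAD 48 /* assumed per-element metadata, bytes */
#define PER_BUCKET_SIZE   16 /* assumed bucket head size, bytes */

/* Metadata cost is a pure function of the map parameters, so it can
 * be computed directly -- no walk over the elements required. */
static size_t metadata_bytes(const struct map_params *p)
{
	return p->n_buckets * PER_BUCKET_SIZE +
	       p->max_entries * PER_ELEM_OVERHEAD;
}
```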

> 
> We need a way to get the bpf memory usage, especially as more and more
> bpf programs are running in production environments and the bpf memory
> usage is therefore no longer trivial.

In our environments we track map usage, so we always know how many entries
are in a map. I don't think we use this to calculate memory footprint
at the moment, just for map usage. It seems, though, that once you have
this, calculating the memory footprint can be done out of band, because
element and overhead costs are fixed.
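As a sketch of what I mean (the constants are entirely illustrative,
not the kernel's actual accounting): with a tracked element count, the
footprint reduces to a closed-form expression.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical fixed costs for one map; in practice these would come
 * from the map's key/value sizes plus known allocator overhead. */
#define ELEM_BYTES      64 /* key + value, rounded up (assumed) */
#define ELEM_OVERHEAD   48 /* per-element metadata (assumed) */
#define BASE_BYTES    4096 /* buckets plus struct bpf_map itself (assumed) */

/* Given an out-of-band element count, compute the footprint directly
 * instead of walking the map under a lock. */
static size_t footprint_bytes(size_t tracked_elems)
{
	return BASE_BYTES + tracked_elems * (ELEM_BYTES + ELEM_OVERHEAD);
}
```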

> 
> This patchset introduces a new map ops ->map_mem_usage to get the memory
> usage. In this ops, the memory usage is obtained from the pointers
> already allocated by a bpf map. To keep the code simple, we ignore some
> small pointers, as their sizes are quite small compared with the total
> usage.
> 
> In order to get the memory size from the pointers, some generic mm
> helpers are introduced first, for example, percpu_size(), vsize() and
> kvsize().
> 
> This patchset only implements the bpf memory usage for hashtab. I will
> extend it to other maps and bpf progs (bpf progs can dynamically allocate
> memory via bpf_obj_new()) in the future.

My preference would be to calculate this out of band. Walking a
large map, and doing it in a critical section, just to get the memory
usage seems suboptimal.

> 
> The detailed result can be found in patch #7.
> 
> Patch #1~#4: Generic mm helpers
> Patch #5   : Introduce new ops
> Patch #6   : Helpers for bpf_mem_alloc
> Patch #7   : hashtab memory usage
> 
> Future works:
> - extend it to other maps
> - extend it to bpf prog
> - per-container bpf memory usage 
> 
> Historical discussions,
> - RFC PATCH v1 mm, bpf: Add BPF into /proc/meminfo
>   https://lwn.net/Articles/917647/  
> - RFC PATCH v2 mm, bpf: Add BPF into /proc/meminfo
>   https://lwn.net/Articles/919848/
> 
> Yafang Shao (7):
>   mm: percpu: fix incorrect size in pcpu_obj_full_size()
>   mm: percpu: introduce percpu_size()
>   mm: vmalloc: introduce vsize()
>   mm: util: introduce kvsize()
>   bpf: add new map ops ->map_mem_usage
>   bpf: introduce bpf_mem_alloc_size()
>   bpf: hashtab memory usage
> 
>  include/linux/bpf.h           |  2 ++
>  include/linux/bpf_mem_alloc.h |  2 ++
>  include/linux/percpu.h        |  1 +
>  include/linux/slab.h          |  1 +
>  include/linux/vmalloc.h       |  1 +
>  kernel/bpf/hashtab.c          | 80 ++++++++++++++++++++++++++++++++++++++++++-
>  kernel/bpf/memalloc.c         | 70 +++++++++++++++++++++++++++++++++++++
>  kernel/bpf/syscall.c          | 18 ++++++----
>  mm/percpu-internal.h          |  4 ++-
>  mm/percpu.c                   | 35 +++++++++++++++++++
>  mm/util.c                     | 15 ++++++++
>  mm/vmalloc.c                  | 17 +++++++++
>  12 files changed, 237 insertions(+), 9 deletions(-)
> 
> -- 
> 1.8.3.1
> 


