From: Daniel Borkmann <daniel@iogearbox.net>
To: Song Liu <songliubraving@fb.com>,
	netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: kernel-team@fb.com, ast@kernel.org, john.fastabend@gmail.com,
	kpsingh@chromium.org
Subject: Re: [PATCH v2 bpf-next 1/2] bpf: introduce BPF_F_PRESERVE_ELEMS for perf event array
Date: Wed, 30 Sep 2020 16:49:27 +0200
Message-ID: <c7b572d4-df22-db9d-6c01-d2b577c47116@iogearbox.net>
In-Reply-To: <20200929215659.3938706-2-songliubraving@fb.com>

On 9/29/20 11:56 PM, Song Liu wrote:
[...]
>   
> +static void bpf_fd_array_map_clear(struct bpf_map *map);
> +
> +static void perf_event_fd_array_map_free(struct bpf_map *map)
> +{
> +	if (map->map_flags & BPF_F_PRESERVE_ELEMS)
> +		bpf_fd_array_map_clear(map);
> +	fd_array_map_free(map);
> +}

Not quite sure why you placed it here and added the fwd declaration? If you
place perf_event_fd_array_map_free() right next to perf_event_array_map_ops,
then you also don't need the extra bpf_fd_array_map_clear() forward declaration.
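I.e. roughly something like this (untested sketch, same code as in your patch,
just moved below the point where bpf_fd_array_map_clear() is already defined):

  /* Sits next to the ops table, after bpf_fd_array_map_clear()'s
   * definition, so no forward declaration is needed anymore:
   */
  static void perf_event_fd_array_map_free(struct bpf_map *map)
  {
  	/* Entries were not auto-released on map_release_uref, so
  	 * drop them here before freeing the map itself.
  	 */
  	if (map->map_flags & BPF_F_PRESERVE_ELEMS)
  		bpf_fd_array_map_clear(map);
  	fd_array_map_free(map);
  }

  const struct bpf_map_ops perf_event_array_map_ops = {
  	.map_meta_equal = bpf_map_meta_equal,
  	.map_alloc_check = fd_array_map_alloc_check,
  	.map_alloc = array_map_alloc,
  	.map_free = perf_event_fd_array_map_free,
  	...
  };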

>   static void *prog_fd_array_get_ptr(struct bpf_map *map,
>   				   struct file *map_file, int fd)
>   {
> @@ -1134,6 +1148,9 @@ static void perf_event_fd_array_release(struct bpf_map *map,
>   	struct bpf_event_entry *ee;
>   	int i;
>   
> +	if (map->map_flags & BPF_F_PRESERVE_ELEMS)
> +		return;
> +
>   	rcu_read_lock();
>   	for (i = 0; i < array->map.max_entries; i++) {
>   		ee = READ_ONCE(array->ptrs[i]);
> @@ -1148,7 +1165,7 @@ const struct bpf_map_ops perf_event_array_map_ops = {
>   	.map_meta_equal = bpf_map_meta_equal,
>   	.map_alloc_check = fd_array_map_alloc_check,
>   	.map_alloc = array_map_alloc,
> -	.map_free = fd_array_map_free,
> +	.map_free = perf_event_fd_array_map_free,
>   	.map_get_next_key = array_map_get_next_key,
>   	.map_lookup_elem = fd_array_map_lookup_elem,
>   	.map_delete_elem = fd_array_map_delete_elem,
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 82522f05c0213..ea78eb89f8d67 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -414,6 +414,9 @@ enum {
>   
>   /* Enable memory-mapping BPF map */
>   	BPF_F_MMAPABLE		= (1U << 10),
> +
> +/* Share perf_event among processes */
> +	BPF_F_PRESERVE_ELEMS	= (1U << 11),
>   };
>   
>   /* Flags for BPF_PROG_QUERY. */
> 



Thread overview: 5+ messages
2020-09-29 21:56 [PATCH v2 bpf-next 0/2] introduce BPF_F_PRESERVE_ELEMS Song Liu
2020-09-29 21:56 ` [PATCH v2 bpf-next 1/2] bpf: introduce BPF_F_PRESERVE_ELEMS for perf event array Song Liu
2020-09-30 14:49   ` Daniel Borkmann [this message]
2020-09-30 15:04     ` Song Liu
2020-09-29 21:56 ` [PATCH v2 bpf-next 2/2] selftests/bpf: add tests for BPF_F_PRESERVE_ELEMS Song Liu
