From: David Verbeiren <david.verbeiren@tessares.net>
To: bpf@vger.kernel.org
Cc: netdev@vger.kernel.org,
	David Verbeiren <david.verbeiren@tessares.net>,
	Matthieu Baerts <matthieu.baerts@tessares.net>
Subject: [PATCH bpf] bpf: zero-fill re-used per-cpu map element
Date: Fri, 23 Oct 2020 14:37:54 +0200
Message-ID: <20201023123754.30304-1-david.verbeiren@tessares.net>

Zero-fill element values for all cpus, just as when not using
prealloc. This is the only way a bpf program can ensure known
initial values for cpus other than the current one ('onallcpus'
cannot be set when the update comes from a bpf program).

The scenario is: a bpf program inserts some elements into a
per-cpu map, then deletes some of them (or userspace does). When
new elements are later added with bpf_map_update_elem(), the bpf
program can only set their value for the current cpu. With
prealloc enabled, previously deleted elements are re-used for the
new entries. Without this fix, the values for the other cpus
remain whatever they were when the re-used entry was previously
freed.
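
For illustration (not part of this patch), here is a minimal
sketch of a bpf program that can hit this scenario; the map,
section, and function names are made up. Hash maps are
preallocated unless BPF_F_NO_PREALLOC is set:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
  	__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
  	__uint(max_entries, 128);
  	__type(key, __u32);
  	__type(value, __u64);
  } counters SEC(".maps");

  SEC("tracepoint/syscalls/sys_enter_write")
  int on_write(void *ctx)
  {
  	__u32 key = 0;
  	__u64 init = 1;

  	/* From bpf program context, this sets the value for the
  	 * current cpu only ('onallcpus' is false); the kernel is
  	 * responsible for the other cpus' values. Without the fix,
  	 * a re-used preallocated element keeps stale values there.
  	 */
  	bpf_map_update_elem(&counters, &key, &init, BPF_ANY);
  	return 0;
  }

  char LICENSE[] SEC("license") = "GPL";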

Fixes: 6c9059817432 ("bpf: pre-allocate hash map elements")
Acked-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David Verbeiren <david.verbeiren@tessares.net>
---
 kernel/bpf/hashtab.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 1815e97d4c9c..667553cce65a 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -836,6 +836,7 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
 	bool prealloc = htab_is_prealloc(htab);
 	struct htab_elem *l_new, **pl_new;
 	void __percpu *pptr;
+	int cpu;
 
 	if (prealloc) {
 		if (old_elem) {
@@ -880,6 +881,17 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
 		size = round_up(size, 8);
 		if (prealloc) {
 			pptr = htab_elem_get_ptr(l_new, key_size);
+
+			/* Zero-fill element values for all cpus, just as when
+			 * not using prealloc. This is the only way a bpf
+			 * program can ensure known initial values for cpus
+			 * other than the current one (onallcpus=false when
+			 * the update comes from a bpf program).
+			 */
+			if (!onallcpus)
+				for_each_possible_cpu(cpu)
+					memset((void *)per_cpu_ptr(pptr, cpu),
+					       0, size);
 		} else {
 			/* alloc_percpu zero-fills */
 			pptr = __alloc_percpu_gfp(size, 8,
-- 
2.29.0
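
As a usage note (illustrative only, not from this thread): for
per-cpu maps, a userspace bpf_map_lookup_elem() fills one value
slot per possible cpu, so the effect of the fix can be observed
with a check like the following sketch; the function and variable
names are made up:

  #include <bpf/bpf.h>
  #include <bpf/libbpf.h>
  #include <stdio.h>

  /* Illustrative only: report non-zero (stale) per-cpu values for
   * one key of a per-cpu map. Each per-cpu value slot is rounded
   * up to 8 bytes, which __u64 already satisfies.
   */
  static int check_zeroed(int map_fd, __u32 key)
  {
  	int nr_cpus = libbpf_num_possible_cpus();
  	int cpu, err;

  	if (nr_cpus < 0)
  		return nr_cpus;

  	__u64 values[nr_cpus];	/* one slot per possible cpu */

  	err = bpf_map_lookup_elem(map_fd, &key, values);
  	if (err)
  		return err;

  	for (cpu = 0; cpu < nr_cpus; cpu++)
  		if (values[cpu] != 0)
  			printf("cpu %d: stale value %llu\n",
  			       cpu, (unsigned long long)values[cpu]);
  	return 0;
  }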



Thread overview: 12+ messages
2020-10-23 12:37 David Verbeiren [this message]
2020-10-26 22:47 ` [PATCH bpf] bpf: zero-fill re-used per-cpu map element Andrii Nakryiko
2020-10-27 14:48   ` David Verbeiren
2020-10-27 22:13 ` [PATCH bpf v2] " David Verbeiren
2020-10-27 22:55   ` Andrii Nakryiko
2020-10-29 14:44     ` David Verbeiren
2020-10-29 22:40       ` Andrii Nakryiko
2020-11-03 15:47   ` [PATCH bpf v3] " David Verbeiren
2020-11-03 18:19     ` Andrii Nakryiko
2020-11-04 11:23     ` [PATCH bpf v4] " David Verbeiren
2020-11-04 20:45       ` Andrii Nakryiko
2020-11-06  4:10       ` patchwork-bot+netdevbpf
