From: Song Liu <songliubraving@fb.com>
To: "linux-sparse@vger.kernel.org" <linux-sparse@vger.kernel.org>,
	"Luc Van Oostenryck" <luc.vanoostenryck@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>,
	Alexei Starovoitov <ast@kernel.org>
Subject: "warning: context imbalance" in kernel/bpf/hashtab.c
Date: Thu, 5 Nov 2020 19:34:55 +0000	[thread overview]
Message-ID: <4126C08B-532F-4B6A-A3C2-9F7928EA2806@fb.com> (raw)

[-- Attachment #1: Type: text/plain, Size: 3384 bytes --]

Hi, 

We are trying to fix some sparse warnings in kernel/bpf/hashtab.c (in the bpf-next tree).
The sparse version I used was v0.6.3-76-gf680124b.

The new warnings were introduced by [1]. Before [1], hashtab.c got:

[...]
kernel/bpf/hashtab.c:2089:19: error: subtraction of functions? Share your drugs
kernel/bpf/hashtab.c:1421:9: warning: context imbalance in '__htab_map_lookup_and_delete_batch' - different lock contexts for basic block
kernel/bpf/hashtab.c: note: in included file (through include/linux/workqueue.h, include/linux/bpf.h):
./include/linux/rcupdate.h:693:9: warning: context imbalance in 'bpf_hash_map_seq_find_next' - unexpected unlock
./include/linux/rcupdate.h:693:9: warning: context imbalance in 'bpf_hash_map_seq_stop' - unexpected unlock

With [1], we got a few more warnings:

[...]
kernel/bpf/hashtab.c:2141:19: error: subtraction of functions? Share your drugs
kernel/bpf/hashtab.c:1292:27: warning: context imbalance in 'htab_map_delete_elem' - unexpected unlock
kernel/bpf/hashtab.c:1325:27: warning: context imbalance in 'htab_lru_map_delete_elem' - unexpected unlock
kernel/bpf/hashtab.c: note: in included file (through include/linux/workqueue.h, include/linux/bpf.h):
./include/linux/rcupdate.h:693:9: warning: context imbalance in '__htab_map_lookup_and_delete_batch' - unexpected unlock
./include/linux/rcupdate.h:693:9: warning: context imbalance in 'bpf_hash_map_seq_find_next' - unexpected unlock
./include/linux/rcupdate.h:693:9: warning: context imbalance in 'bpf_hash_map_seq_stop' - unexpected unlock

So htab_map_delete_elem() and htab_lru_map_delete_elem() got new warnings, and 
__htab_map_lookup_and_delete_batch() got a slightly different warning. 

After trying different annotations, including the attached foo.diff by Daniel,
we found that the simplest fix was something like:

====================== 8< ========================== 
diff --git i/kernel/bpf/hashtab.c w/kernel/bpf/hashtab.c
index 23f73d4649c9c..faad2061f1167 100644
--- i/kernel/bpf/hashtab.c
+++ w/kernel/bpf/hashtab.c
@@ -1279,6 +1279,7 @@ static int htab_map_delete_elem(struct bpf_map *map, void *key)
        ret = htab_lock_bucket(htab, b, hash, &flags);
        if (ret)
                return ret;
+       ret = 0;

        l = lookup_elem_raw(head, hash, key, key_size);

@@ -1314,6 +1315,7 @@ static int htab_lru_map_delete_elem(struct bpf_map *map, void *key)
        ret = htab_lock_bucket(htab, b, hash, &flags);
        if (ret)
                return ret;
+       ret = 0;

        l = lookup_elem_raw(head, hash, key, key_size);

@@ -1476,6 +1478,7 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
                ret = htab_lock_bucket(htab, b, batch, &flags);
                if (ret)
                        goto next_batch;
+               ret = 0;
        }

        bucket_cnt = 0;

====================== 8< ========================== 

These three "ret = 0;" make the sparse warning back to before [1]. However, these
don't really make sense, as we know ret must be zero after the if clause. So it
looks like a bug in sparse. Please help us look into this issue. 

Also, there are a few similar warnings from hashtab.c. Please give us guidance on 
how to fix them properly. 
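
For reference, my understanding of the annotations used in Daniel's foo.diff, in
terms of sparse's context tracking (roughly what include/linux/compiler_types.h
defines when __CHECKER__ is set; please correct me if this is wrong):

# define __acquires(x)	__attribute__((context(x,0,1)))
# define __releases(x)	__attribute__((context(x,1,0)))
# define __acquire(x)	__context__(x,1)
# define __release(x)	__context__(x,-1)

That is, __acquires()/__releases() declare the expected context on function
entry/exit, while __acquire()/__release() bump the context count up or down at a
specific point in the function body.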

Thanks in advance,
Song

[1] commit 20b6cc34ea74 ("bpf: Avoid hashtab deadlock with map_locked")


[-- Attachment #2: foo.diff --]
[-- Type: application/octet-stream, Size: 3884 bytes --]

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 23f73d4649c9..1ed2547863fc 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -158,11 +158,15 @@ static void htab_init_buckets(struct bpf_htab *htab)
 static inline int htab_lock_bucket(const struct bpf_htab *htab,
 				   struct bucket *b, u32 hash,
 				   unsigned long *pflags)
+	__acquires(&b->raw_lock) __acquires(&b->lock)
 {
 	unsigned long flags;
 
 	hash = hash & HASHTAB_MAP_LOCK_MASK;
 
+	__acquire(&b->raw_lock);
+	__acquire(&b->lock);
+
 	migrate_disable();
 	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
 		__this_cpu_dec(*(htab->map_locked[hash]));
@@ -182,8 +186,13 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
 static inline void htab_unlock_bucket(const struct bpf_htab *htab,
 				      struct bucket *b, u32 hash,
 				      unsigned long flags)
+	__releases(&b->raw_lock) __releases(&b->lock)
 {
 	hash = hash & HASHTAB_MAP_LOCK_MASK;
+
+	__release(&b->raw_lock);
+	__release(&b->lock);
+
 	if (htab_use_raw_lock(htab))
 		raw_spin_unlock_irqrestore(&b->raw_lock, flags);
 	else
@@ -731,8 +740,11 @@ static bool htab_lru_map_delete_node(void *arg, struct bpf_lru_node *node)
 	head = &b->head;
 
 	ret = htab_lock_bucket(htab, b, tgt_l->hash, &flags);
-	if (ret)
+	if (ret) {
+		__release(&b->raw_lock);
+		__release(&b->lock);
 		return false;
+	}
 
 	hlist_nulls_for_each_entry_rcu(l, n, head, hash_node)
 		if (l == tgt_l) {
@@ -1013,8 +1025,11 @@ static int htab_map_update_elem(struct bpf_map *map, void *key, void *value,
 	}
 
 	ret = htab_lock_bucket(htab, b, hash, &flags);
-	if (ret)
+	if (ret) {
+		__release(&b->raw_lock);
+		__release(&b->lock);
 		return ret;
+	}
 
 	l_old = lookup_elem_raw(head, hash, key, key_size);
 
@@ -1094,8 +1109,11 @@ static int htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value,
 	memcpy(l_new->key + round_up(map->key_size, 8), value, map->value_size);
 
 	ret = htab_lock_bucket(htab, b, hash, &flags);
-	if (ret)
+	if (ret) {
+		__release(&b->raw_lock);
+		__release(&b->lock);
 		return ret;
+	}
 
 	l_old = lookup_elem_raw(head, hash, key, key_size);
 
@@ -1150,8 +1168,11 @@ static int __htab_percpu_map_update_elem(struct bpf_map *map, void *key,
 	head = &b->head;
 
 	ret = htab_lock_bucket(htab, b, hash, &flags);
-	if (ret)
+	if (ret) {
+		__release(&b->raw_lock);
+		__release(&b->lock);
 		return ret;
+	}
 
 	l_old = lookup_elem_raw(head, hash, key, key_size);
 
@@ -1215,8 +1236,11 @@ static int __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
 	}
 
 	ret = htab_lock_bucket(htab, b, hash, &flags);
-	if (ret)
+	if (ret) {
+		__release(&b->raw_lock);
+		__release(&b->lock);
 		return ret;
+	}
 
 	l_old = lookup_elem_raw(head, hash, key, key_size);
 
@@ -1277,8 +1301,11 @@ static int htab_map_delete_elem(struct bpf_map *map, void *key)
 	head = &b->head;
 
 	ret = htab_lock_bucket(htab, b, hash, &flags);
-	if (ret)
+	if (ret) {
+		__release(&b->raw_lock);
+		__release(&b->lock);
 		return ret;
+	}
 
 	l = lookup_elem_raw(head, hash, key, key_size);
 
@@ -1312,8 +1339,11 @@ static int htab_lru_map_delete_elem(struct bpf_map *map, void *key)
 	head = &b->head;
 
 	ret = htab_lock_bucket(htab, b, hash, &flags);
-	if (ret)
+	if (ret) {
+		__release(&b->raw_lock);
+		__release(&b->lock);
 		return ret;
+	}
 
 	l = lookup_elem_raw(head, hash, key, key_size);
 
@@ -1474,8 +1504,11 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
 	/* do not grab the lock unless need it (bucket_cnt > 0). */
 	if (locked) {
 		ret = htab_lock_bucket(htab, b, batch, &flags);
-		if (ret)
+		if (ret) {
+			__release(&b->raw_lock);
+			__release(&b->lock);
 			goto next_batch;
+		}
 	}
 
 	bucket_cnt = 0;
