* [bpf PATCH v3 0/4] BPF fixes for sockhash
@ 2018-06-22 15:21 John Fastabend
  2018-06-22 15:21 ` [bpf PATCH v3 1/4] bpf: sockmap, fix crash when ipv6 sock is added John Fastabend
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: John Fastabend @ 2018-06-22 15:21 UTC (permalink / raw)
  To: john.fastabend, ast, daniel, kafai; +Cc: netdev

This series addresses two syzbot issues that led Eric and Wei to
identify a class of bugs where we don't correctly check for IPv4/v6
sockets and their associated state. The second issue was a locking
omission in sockhash.

The first patch addresses IPv6 socks, fixing an error where sockhash
would overwrite the prot pointer with the IPv4 prot. To fix this we
build a solution similar to the TLS ULP. We continue to allow socks
in all states, not just ESTABLISHED, in this patch set because, as
Martin points out, there should be no issue with this on the sockmap
ULP: we don't use the ctx in this code.
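
As a rough userspace sketch of the table layout this builds (stub
types and names are hypothetical; the real version is build_protos()
and update_sk_prot() in patch 1 below):

  #include <stdio.h>

  /* Stand-in for struct proto: only the op we exercise here. */
  struct proto {
          const char *name;
          void (*close)(void);
  };

  enum { SOCKMAP_IPV4, SOCKMAP_IPV6, SOCKMAP_NUM_PROTS };
  enum { SOCKMAP_BASE, SOCKMAP_TX, SOCKMAP_NUM_CONFIGS };

  static struct proto bpf_tcp_prots[SOCKMAP_NUM_PROTS][SOCKMAP_NUM_CONFIGS];

  static void bpf_close(void) { printf("bpf close handler\n"); }

  /* Copy the family's base proto, then override what BPF hooks. */
  static void build_protos(struct proto prot[SOCKMAP_NUM_CONFIGS],
                           const struct proto *base)
  {
          prot[SOCKMAP_BASE] = *base;
          prot[SOCKMAP_BASE].close = bpf_close;
          prot[SOCKMAP_TX] = prot[SOCKMAP_BASE]; /* TX adds send hooks */
  }

  int main(void)
  {
          struct proto tcp_prot = { "tcp", NULL };
          struct proto tcpv6_prot = { "tcpv6", NULL };

          build_protos(bpf_tcp_prots[SOCKMAP_IPV4], &tcp_prot);
          build_protos(bpf_tcp_prots[SOCKMAP_IPV6], &tcpv6_prot);

          /* An AF_INET6 sock now selects the v6 row instead of being
           * clobbered with the IPv4 prot.
           */
          printf("%s\n", bpf_tcp_prots[SOCKMAP_IPV6][SOCKMAP_BASE].name);
          bpf_tcp_prots[SOCKMAP_IPV6][SOCKMAP_BASE].close();
          return 0;
  }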

The other issue syzbot found is that the tcp_close() handler missed
taking the hash bucket lock, which could corrupt the sockhash bucket
list if a delete and a close ran at the same time. In addition, the
smap_list_remove() routine was not working correctly at all. This was
not caught in my testing because, in general, my tests to date (we
should add more robust selftests in bpf-next) do things in the
"expected" order: create map, add socks, delete socks, then tear down
maps. The tests we have that do the ops out of this order were only
working on single maps, not multi-maps, so we never saw the issue.
Thanks syzbot. The fix is to restructure the tcp_close() lock
handling and fix the obvious bug in smap_list_remove().

Finally, during review I noticed the release handler was omitted
from the upstream code (patch 4) due to an incorrect merge conflict
resolution when I ported the code to the latest bpf-next before
submitting.

v3: rework patches, dropping ESTABLISH check and adding rcu
    annotation along with the smap_list_remove fix

Also, big thanks to Martin for the thorough review; he caught at
least one case where I missed a call_rcu().

---

John Fastabend (4):
      bpf: sockmap, fix crash when ipv6 sock is added
      bpf: sockmap, fix smap_list_map_remove when psock is in many maps
      bpf: sockhash fix omitted bucket lock in sock_close
      bpf: sockhash, add release routine


 kernel/bpf/sockmap.c |  210 ++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 153 insertions(+), 57 deletions(-)


* [bpf PATCH v3 1/4] bpf: sockmap, fix crash when ipv6 sock is added
  2018-06-22 15:21 [bpf PATCH v3 0/4] BPF fixes for sockhash John Fastabend
@ 2018-06-22 15:21 ` John Fastabend
  2018-06-23  7:44   ` Martin KaFai Lau
  2018-06-22 15:21 ` [bpf PATCH v3 2/4] bpf: sockmap, fix smap_list_map_remove when psock is in many maps John Fastabend
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 8+ messages in thread
From: John Fastabend @ 2018-06-22 15:21 UTC (permalink / raw)
  To: john.fastabend, ast, daniel, kafai; +Cc: netdev

This fixes a crash where we assign tcp_prot to IPv6 sockets instead
of tcpv6_prot.

Previously we overwrote the sk->sk_prot field with tcp_prot even in
the AF_INET6 case. This patch ensures the correct tcp_prot and
tcpv6_prot are used.

Tested with 'netserver -6' and 'netperf -H [IPv6]' as well as
'netperf -H [IPv4]'; the previously crashing case is resolved.
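
The rebuild uses a double-checked pattern: an unlocked acquire load
as the fast path, then a re-check under the lock before a release
store publishes the new base prot. A rough C11 userspace analogue
(hypothetical names, not the kernel code):

  #include <pthread.h>
  #include <stdatomic.h>

  struct proto { int ops; };                       /* stand-in */
  static struct proto *_Atomic saved_tcpv6_prot;   /* published base */
  static pthread_mutex_t tcpv6_prot_lock = PTHREAD_MUTEX_INITIALIZER;
  static struct proto bpf_tcp_prots_v6[2];

  static void build_protos(struct proto *dst, const struct proto *base)
  {
          dst[0] = *base;                          /* base config */
          dst[1] = *base;                          /* tx config */
  }

  static void maybe_rebuild_v6(struct proto *sk_prot)
  {
          /* Fast path: nothing to do if the base hasn't changed. */
          if (sk_prot == atomic_load_explicit(&saved_tcpv6_prot,
                                              memory_order_acquire))
                  return;
          pthread_mutex_lock(&tcpv6_prot_lock);
          /* Re-check: another thread may have rebuilt already. */
          if (sk_prot != saved_tcpv6_prot) {
                  build_protos(bpf_tcp_prots_v6, sk_prot);
                  atomic_store_explicit(&saved_tcpv6_prot, sk_prot,
                                        memory_order_release);
          }
          pthread_mutex_unlock(&tcpv6_prot_lock);
  }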

Fixes: 174a79ff9515 ("bpf: sockmap with sk redirect support")
Reported-by: syzbot+5c063698bdbfac19f363@syzkaller.appspotmail.com
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Wei Wang <weiwan@google.com>
---
 kernel/bpf/sockmap.c |   58 +++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 48 insertions(+), 10 deletions(-)

diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 52a91d8..d7fd17a 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -140,6 +140,7 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
 static int bpf_tcp_sendpage(struct sock *sk, struct page *page,
 			    int offset, size_t size, int flags);
+static void bpf_tcp_close(struct sock *sk, long timeout);
 
 static inline struct smap_psock *smap_psock_sk(const struct sock *sk)
 {
@@ -161,7 +162,42 @@ static bool bpf_tcp_stream_read(const struct sock *sk)
 	return !empty;
 }
 
-static struct proto tcp_bpf_proto;
+enum {
+	SOCKMAP_IPV4,
+	SOCKMAP_IPV6,
+	SOCKMAP_NUM_PROTS,
+};
+
+enum {
+	SOCKMAP_BASE,
+	SOCKMAP_TX,
+	SOCKMAP_NUM_CONFIGS,
+};
+
+static struct proto *saved_tcpv6_prot __read_mostly;
+static DEFINE_SPINLOCK(tcpv6_prot_lock);
+static struct proto bpf_tcp_prots[SOCKMAP_NUM_PROTS][SOCKMAP_NUM_CONFIGS];
+static void build_protos(struct proto prot[SOCKMAP_NUM_CONFIGS],
+			 struct proto *base)
+{
+	prot[SOCKMAP_BASE]			= *base;
+	prot[SOCKMAP_BASE].close		= bpf_tcp_close;
+	prot[SOCKMAP_BASE].recvmsg		= bpf_tcp_recvmsg;
+	prot[SOCKMAP_BASE].stream_memory_read	= bpf_tcp_stream_read;
+
+	prot[SOCKMAP_TX]			= prot[SOCKMAP_BASE];
+	prot[SOCKMAP_TX].sendmsg		= bpf_tcp_sendmsg;
+	prot[SOCKMAP_TX].sendpage		= bpf_tcp_sendpage;
+}
+
+static void update_sk_prot(struct sock *sk, struct smap_psock *psock)
+{
+	int family = sk->sk_family == AF_INET6 ? SOCKMAP_IPV6 : SOCKMAP_IPV4;
+	int conf = psock->bpf_tx_msg ? SOCKMAP_TX : SOCKMAP_BASE;
+
+	sk->sk_prot = &bpf_tcp_prots[family][conf];
+}
+
 static int bpf_tcp_init(struct sock *sk)
 {
 	struct smap_psock *psock;
@@ -181,14 +217,17 @@ static int bpf_tcp_init(struct sock *sk)
 	psock->save_close = sk->sk_prot->close;
 	psock->sk_proto = sk->sk_prot;
 
-	if (psock->bpf_tx_msg) {
-		tcp_bpf_proto.sendmsg = bpf_tcp_sendmsg;
-		tcp_bpf_proto.sendpage = bpf_tcp_sendpage;
-		tcp_bpf_proto.recvmsg = bpf_tcp_recvmsg;
-		tcp_bpf_proto.stream_memory_read = bpf_tcp_stream_read;
+	/* Build IPv6 sockmap whenever the address of tcpv6_prot changes */
+	if (sk->sk_family == AF_INET6 &&
+	    unlikely(sk->sk_prot != smp_load_acquire(&saved_tcpv6_prot))) {
+		spin_lock_bh(&tcpv6_prot_lock);
+		if (likely(sk->sk_prot != saved_tcpv6_prot)) {
+			build_protos(bpf_tcp_prots[SOCKMAP_IPV6], sk->sk_prot);
+			smp_store_release(&saved_tcpv6_prot, sk->sk_prot);
+		}
+		spin_unlock_bh(&tcpv6_prot_lock);
 	}
-
-	sk->sk_prot = &tcp_bpf_proto;
+	update_sk_prot(sk, psock);
 	rcu_read_unlock();
 	return 0;
 }
@@ -1111,8 +1150,7 @@ static void bpf_tcp_msg_add(struct smap_psock *psock,
 
 static int bpf_tcp_ulp_register(void)
 {
-	tcp_bpf_proto = tcp_prot;
-	tcp_bpf_proto.close = bpf_tcp_close;
+	build_protos(bpf_tcp_prots[SOCKMAP_IPV4], &tcp_prot);
 	/* Once BPF TX ULP is registered it is never unregistered. It
 	 * will be in the ULP list for the lifetime of the system. Doing
 	 * duplicate registers is not a problem.


* [bpf PATCH v3 2/4] bpf: sockmap, fix smap_list_map_remove when psock is in many maps
  2018-06-22 15:21 [bpf PATCH v3 0/4] BPF fixes for sockhash John Fastabend
  2018-06-22 15:21 ` [bpf PATCH v3 1/4] bpf: sockmap, fix crash when ipv6 sock is added John Fastabend
@ 2018-06-22 15:21 ` John Fastabend
  2018-06-23  7:45   ` Martin KaFai Lau
  2018-06-22 15:21 ` [bpf PATCH v3 3/4] bpf: sockhash fix omitted bucket lock in sock_close John Fastabend
  2018-06-22 15:21 ` [bpf PATCH v3 4/4] bpf: sockhash, add release routine John Fastabend
  3 siblings, 1 reply; 8+ messages in thread
From: John Fastabend @ 2018-06-22 15:21 UTC (permalink / raw)
  To: john.fastabend, ast, daniel, kafai; +Cc: netdev

If a hashmap is free'd with open socks it removes the reference to
the hash entry from the psock. If that is the last reference to the
psock then it will also be free'd by the reference counting logic.
However the current logic that removes the hash reference from the
list of references is broken. In map_list_map_remove() we first check
if the sockmap entry matches and then check if the hashmap entry
matches. But the sockmap entry will always match because it's NULL in
this case, which causes the first entry to be removed from the list.
If this is always the "right" entry (because the user adds/removes
entries in order) then everything is OK but otherwise a subsequent
bpf_tcp_close() may reference a free'd object.

To fix this, create two list handlers: one for sockmap and one for
sockhash.
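
For clarity, here is the shape of the removed check with the failure
mode annotated (same logic as the code deleted below):

  list_for_each_entry_safe(e, tmp, &psock->maps, list) {
          /* For a hash removal the caller passes entry == NULL, but
           * every hash link in the list also has e->entry == NULL, so
           * e->entry == entry matches the *first* hash link no matter
           * which hash_link we were asked to remove.
           */
          if (e->entry == entry || e->hash_link == hash_link) {
                  list_del(&e->list);
                  break;
          }
  }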

Reported-by: syzbot+0ce137753c78f7b6acc1@syzkaller.appspotmail.com
Fixes: 81110384441a ("bpf: sockmap, add hash map support")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 kernel/bpf/sockmap.c |   33 +++++++++++++++++++++------------
 1 file changed, 21 insertions(+), 12 deletions(-)

diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index d7fd17a..69b26af 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -1602,17 +1602,26 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
 	return ERR_PTR(err);
 }
 
-static void smap_list_remove(struct smap_psock *psock,
-			     struct sock **entry,
-			     struct htab_elem *hash_link)
+static void smap_list_map_remove(struct smap_psock *psock,
+				 struct sock **entry)
 {
 	struct smap_psock_map_entry *e, *tmp;
 
 	list_for_each_entry_safe(e, tmp, &psock->maps, list) {
-		if (e->entry == entry || e->hash_link == hash_link) {
+		if (e->entry == entry)
+			list_del(&e->list);
+	}
+}
+static void smap_list_hash_remove(struct smap_psock *psock,
+				  struct htab_elem *hash_link)
+{
+	struct smap_psock_map_entry *e, *tmp;
+
+	list_for_each_entry_safe(e, tmp, &psock->maps, list) {
+		struct htab_elem *c = e->hash_link;
+
+		if (c == hash_link)
 			list_del(&e->list);
-			break;
-		}
 	}
 }
 
@@ -1647,7 +1656,7 @@ static void sock_map_free(struct bpf_map *map)
 		 * to be null and queued for garbage collection.
 		 */
 		if (likely(psock)) {
-			smap_list_remove(psock, &stab->sock_map[i], NULL);
+			smap_list_map_remove(psock, &stab->sock_map[i]);
 			smap_release_sock(psock, sock);
 		}
 		write_unlock_bh(&sock->sk_callback_lock);
@@ -1706,7 +1715,7 @@ static int sock_map_delete_elem(struct bpf_map *map, void *key)
 
 	if (psock->bpf_parse)
 		smap_stop_sock(psock, sock);
-	smap_list_remove(psock, &stab->sock_map[k], NULL);
+	smap_list_map_remove(psock, &stab->sock_map[k]);
 	smap_release_sock(psock, sock);
 out:
 	write_unlock_bh(&sock->sk_callback_lock);
@@ -1908,7 +1917,7 @@ static int sock_map_ctx_update_elem(struct bpf_sock_ops_kern *skops,
 		struct smap_psock *opsock = smap_psock_sk(osock);
 
 		write_lock_bh(&osock->sk_callback_lock);
-		smap_list_remove(opsock, &stab->sock_map[i], NULL);
+		smap_list_map_remove(opsock, &stab->sock_map[i]);
 		smap_release_sock(opsock, osock);
 		write_unlock_bh(&osock->sk_callback_lock);
 	}
@@ -2124,7 +2133,7 @@ static void sock_hash_free(struct bpf_map *map)
 			 * (psock) to be null and queued for garbage collection.
 			 */
 			if (likely(psock)) {
-				smap_list_remove(psock, NULL, l);
+				smap_list_hash_remove(psock, l);
 				smap_release_sock(psock, sock);
 			}
 			write_unlock_bh(&sock->sk_callback_lock);
@@ -2304,7 +2313,7 @@ static int sock_hash_ctx_update_elem(struct bpf_sock_ops_kern *skops,
 		psock = smap_psock_sk(l_old->sk);
 
 		hlist_del_rcu(&l_old->hash_node);
-		smap_list_remove(psock, NULL, l_old);
+		smap_list_hash_remove(psock, l_old);
 		smap_release_sock(psock, l_old->sk);
 		free_htab_elem(htab, l_old);
 	}
@@ -2372,7 +2381,7 @@ static int sock_hash_delete_elem(struct bpf_map *map, void *key)
 		 * to be null and queued for garbage collection.
 		 */
 		if (likely(psock)) {
-			smap_list_remove(psock, NULL, l);
+			smap_list_hash_remove(psock, l);
 			smap_release_sock(psock, sock);
 		}
 		write_unlock_bh(&sock->sk_callback_lock);


* [bpf PATCH v3 3/4] bpf: sockhash fix omitted bucket lock in sock_close
  2018-06-22 15:21 [bpf PATCH v3 0/4] BPF fixes for sockhash John Fastabend
  2018-06-22 15:21 ` [bpf PATCH v3 1/4] bpf: sockmap, fix crash when ipv6 sock is added John Fastabend
  2018-06-22 15:21 ` [bpf PATCH v3 2/4] bpf: sockmap, fix smap_list_map_remove when psock is in many maps John Fastabend
@ 2018-06-22 15:21 ` John Fastabend
  2018-06-23  7:45   ` Martin KaFai Lau
  2018-06-22 15:21 ` [bpf PATCH v3 4/4] bpf: sockhash, add release routine John Fastabend
  3 siblings, 1 reply; 8+ messages in thread
From: John Fastabend @ 2018-06-22 15:21 UTC (permalink / raw)
  To: john.fastabend, ast, daniel, kafai; +Cc: netdev

First, in tcp_close, reduce the scope of sk_callback_lock(): the lock
is only needed to protect the maps list; the ingress and cork lists
are protected by the sock lock. Having the lock in wider scope is
harmless, but it may confuse a reader into inferring it is in fact
needed.

Next, in sock_hash_delete_elem() the pattern is as follows,

  sock_hash_delete_elem()
     [...]
     spin_lock(bucket_lock)
     l = lookup_elem_raw()
     if (l)
        hlist_del_rcu()
        write_lock(sk_callback_lock)
         .... destroy psock ...
        write_unlock(sk_callback_lock)
     spin_unlock(bucket_lock)

The ordering is necessary because we only know the {p}sock after
dereferencing the hash table, which we can't do unless we have the
bucket lock held. Once we have the bucket lock and the psock element,
the element is deleted from the hashmap to ensure any other path
doing a lookup will fail. Finally, the refcnt is decremented and, if
zero, the psock is destroyed.

In parallel with the above (or with free'ing the map), a tcp close
event may trigger tcp_close(), which at the moment omits the bucket
lock altogether (oops!). The flow looks like this,

  bpf_tcp_close()
     [...]
     write_lock(sk_callback_lock)
     for each psock->maps // list of maps this sock is part of
         hlist_del_rcu(ref_hash_node);
         .... destroy psock ...
     write_unlock(sk_callback_lock)

Obviously, and demonstrated by syzbot, this is broken because
we can have multiple threads deleting entries via hlist_del_rcu().

To fix this we might be tempted to wrap the hlist operation in a
bucket lock, but that would create a lock inversion problem. In
summary, to follow the locking rules the psock's maps list needs the
sk_callback_lock, but we need the bucket lock to do the
hlist_del_rcu(). To resolve the lock inversion problem, pop the head
of the maps list repeatedly and remove the reference until no more
are left. If a delete happens in parallel from the BPF API that is OK
as well, because it will do a similar action: look up the element in
the map/hash, delete it from the map/hash, and dec the refcnt. We
check for this case before doing a destroy on the psock to ensure we
don't have two threads tearing down a psock. The new logic is as
follows,

  bpf_tcp_close()
  e = psock_map_pop(psock->maps) // done with sk_callback_lock
  bucket_lock() // lock hash list bucket
  l = lookup_elem_raw(head, hash, key, key_size);
  if (l) {
     // only get here if the element was not already removed
     hlist_del_rcu()
     ... destroy psock...
  }
  bucket_unlock()
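
The same ordering as a userspace toy (pthreads, hypothetical names):
each lock is taken and released on its own, never nested in the
close path, which is what removes the inversion,

  #include <pthread.h>
  #include <stddef.h>

  struct entry { struct entry *next; };

  static pthread_mutex_t maps_lock = PTHREAD_MUTEX_INITIALIZER;   /* ~sk_callback_lock */
  static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER; /* ~bucket lock */
  static struct entry *maps_head;

  /* Detach one reference while holding only the list lock. */
  static struct entry *psock_map_pop(void)
  {
          struct entry *e;

          pthread_mutex_lock(&maps_lock);
          e = maps_head;
          if (e)
                  maps_head = e->next;
          pthread_mutex_unlock(&maps_lock);
          return e;
  }

  static void close_all(void)
  {
          struct entry *e;

          while ((e = psock_map_pop())) {
                  /* Only now take the bucket lock; re-lookup inside
                   * because a parallel delete may have won the race.
                   */
                  pthread_mutex_lock(&bucket_lock);
                  /* ... lookup_elem_raw(); delete if still present ... */
                  pthread_mutex_unlock(&bucket_lock);
          }
  }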

And finally, for all the above to work, add the missing
sk_callback_lock around smap_list_remove() in
sock_hash_ctx_update_elem(); otherwise delete and update may corrupt
the maps list. Then add RCU annotations and use
rcu_dereference/rcu_assign_pointer to manage values relying on RCU,
so that the object is not free'd from sock_hash_free() while it is
being referenced in bpf_tcp_close().

(As an aside, the sk_callback_lock serves two purposes. The first is
 to update the sock callbacks sk_data_ready, sk_write_space, etc. The
 second is to protect the psock 'maps' list. The 'maps' list is used,
 as shown above, to delete all map/hash references to a sock when the
 sock is closed.)
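
A compact userspace analogue of that lifetime rule, sketched with
liburcu (assumes the classic urcu.h API and that every thread calls
rcu_register_thread() first):

  #include <urcu.h>
  #include <stdlib.h>

  struct htab { struct rcu_head rcu; /* buckets ... */ };
  static struct htab *global_htab;

  static void __htab_free(struct rcu_head *rcu)
  {
          free(caa_container_of(rcu, struct htab, rcu));
  }

  /* Reader: the object cannot be freed while we stay inside the
   * read-side critical section.
   */
  static void reader(void)
  {
          struct htab *h;

          rcu_read_lock();
          h = rcu_dereference(global_htab);
          if (h) {
                  /* use h */
          }
          rcu_read_unlock();
  }

  /* Free side: unpublish first, then defer the free until all
   * pre-existing readers have finished.
   */
  static void destroy(void)
  {
          struct htab *h = global_htab;

          rcu_assign_pointer(global_htab, NULL);
          if (h)
                  call_rcu(&h->rcu, __htab_free);
  }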

Reported-by: syzbot+0ce137753c78f7b6acc1@syzkaller.appspotmail.com
Fixes: 81110384441a ("bpf: sockmap, add hash map support")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 kernel/bpf/sockmap.c |  120 +++++++++++++++++++++++++++++++++++---------------
 1 file changed, 84 insertions(+), 36 deletions(-)

diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 69b26af..333427b 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -72,6 +72,7 @@ struct bpf_htab {
 	u32 n_buckets;
 	u32 elem_size;
 	struct bpf_sock_progs progs;
+	struct rcu_head rcu;
 };
 
 struct htab_elem {
@@ -89,8 +90,8 @@ enum smap_psock_state {
 struct smap_psock_map_entry {
 	struct list_head list;
 	struct sock **entry;
-	struct htab_elem *hash_link;
-	struct bpf_htab *htab;
+	struct htab_elem __rcu *hash_link;
+	struct bpf_htab __rcu *htab;
 };
 
 struct smap_psock {
@@ -258,16 +259,54 @@ static void bpf_tcp_release(struct sock *sk)
 	rcu_read_unlock();
 }
 
+static struct htab_elem *lookup_elem_raw(struct hlist_head *head,
+					 u32 hash, void *key, u32 key_size)
+{
+	struct htab_elem *l;
+
+	hlist_for_each_entry_rcu(l, head, hash_node) {
+		if (l->hash == hash && !memcmp(&l->key, key, key_size))
+			return l;
+	}
+
+	return NULL;
+}
+
+static inline struct bucket *__select_bucket(struct bpf_htab *htab, u32 hash)
+{
+	return &htab->buckets[hash & (htab->n_buckets - 1)];
+}
+
+static inline struct hlist_head *select_bucket(struct bpf_htab *htab, u32 hash)
+{
+	return &__select_bucket(htab, hash)->head;
+}
+
 static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l)
 {
 	atomic_dec(&htab->count);
 	kfree_rcu(l, rcu);
 }
 
+static struct smap_psock_map_entry *psock_map_pop(struct sock *sk,
+						  struct smap_psock *psock)
+{
+	struct smap_psock_map_entry *e;
+
+	write_lock_bh(&sk->sk_callback_lock);
+	e = list_first_entry_or_null(&psock->maps,
+				     struct smap_psock_map_entry,
+				     list);
+	if (e)
+		list_del(&e->list);
+	write_unlock_bh(&sk->sk_callback_lock);
+	return e;
+}
+
 static void bpf_tcp_close(struct sock *sk, long timeout)
 {
 	void (*close_fun)(struct sock *sk, long timeout);
-	struct smap_psock_map_entry *e, *tmp;
+	struct smap_psock_map_entry *e;
 	struct sk_msg_buff *md, *mtmp;
 	struct smap_psock *psock;
 	struct sock *osk;
@@ -286,7 +325,6 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
 	 */
 	close_fun = psock->save_close;
 
-	write_lock_bh(&sk->sk_callback_lock);
 	if (psock->cork) {
 		free_start_sg(psock->sock, psock->cork);
 		kfree(psock->cork);
@@ -299,20 +337,38 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
 		kfree(md);
 	}
 
-	list_for_each_entry_safe(e, tmp, &psock->maps, list) {
+	e = psock_map_pop(sk, psock);
+	while (e) {
 		if (e->entry) {
 			osk = cmpxchg(e->entry, sk, NULL);
 			if (osk == sk) {
-				list_del(&e->list);
 				smap_release_sock(psock, sk);
 			}
 		} else {
-			hlist_del_rcu(&e->hash_link->hash_node);
-			smap_release_sock(psock, e->hash_link->sk);
-			free_htab_elem(e->htab, e->hash_link);
+			struct htab_elem *link = rcu_dereference(e->hash_link);
+			struct bpf_htab *htab = rcu_dereference(e->htab);
+			struct hlist_head *head;
+			struct htab_elem *l;
+			struct bucket *b;
+
+			b = __select_bucket(htab, link->hash);
+			head = &b->head;
+			raw_spin_lock_bh(&b->lock);
+			l = lookup_elem_raw(head,
+					    link->hash, link->key,
+					    htab->map.key_size);
+			/* If another thread deleted this object skip deletion.
+			 * The refcnt on psock may or may not be zero.
+			 */
+			if (l) {
+				hlist_del_rcu(&link->hash_node);
+				smap_release_sock(psock, link->sk);
+				free_htab_elem(htab, link);
+			}
+			raw_spin_unlock_bh(&b->lock);
 		}
+		e = psock_map_pop(sk, psock);
 	}
-	write_unlock_bh(&sk->sk_callback_lock);
 	rcu_read_unlock();
 	close_fun(sk, timeout);
 }
@@ -1618,7 +1674,7 @@ static void smap_list_hash_remove(struct smap_psock *psock,
 	struct smap_psock_map_entry *e, *tmp;
 
 	list_for_each_entry_safe(e, tmp, &psock->maps, list) {
-		struct htab_elem *c = e->hash_link;
+		struct htab_elem *c = rcu_dereference(e->hash_link);
 
 		if (c == hash_link)
 			list_del(&e->list);
@@ -2090,14 +2146,13 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
 	return ERR_PTR(err);
 }
 
-static inline struct bucket *__select_bucket(struct bpf_htab *htab, u32 hash)
+static void __bpf_htab_free(struct rcu_head *rcu)
 {
-	return &htab->buckets[hash & (htab->n_buckets - 1)];
-}
+	struct bpf_htab *htab;
 
-static inline struct hlist_head *select_bucket(struct bpf_htab *htab, u32 hash)
-{
-	return &__select_bucket(htab, hash)->head;
+	htab = container_of(rcu, struct bpf_htab, rcu);
+	bpf_map_area_free(htab->buckets);
+	kfree(htab);
 }
 
 static void sock_hash_free(struct bpf_map *map)
@@ -2116,10 +2171,13 @@ static void sock_hash_free(struct bpf_map *map)
 	 */
 	rcu_read_lock();
 	for (i = 0; i < htab->n_buckets; i++) {
-		struct hlist_head *head = select_bucket(htab, i);
+		struct bucket *b = __select_bucket(htab, i);
+		struct hlist_head *head;
 		struct hlist_node *n;
 		struct htab_elem *l;
 
+		raw_spin_lock_bh(&b->lock);
+		head = &b->head;
 		hlist_for_each_entry_safe(l, n, head, hash_node) {
 			struct sock *sock = l->sk;
 			struct smap_psock *psock;
@@ -2137,12 +2195,12 @@ static void sock_hash_free(struct bpf_map *map)
 				smap_release_sock(psock, sock);
 			}
 			write_unlock_bh(&sock->sk_callback_lock);
-			kfree(l);
+			free_htab_elem(htab, l);
 		}
+		raw_spin_unlock_bh(&b->lock);
 	}
 	rcu_read_unlock();
-	bpf_map_area_free(htab->buckets);
-	kfree(htab);
+	call_rcu(&htab->rcu, __bpf_htab_free);
 }
 
 static struct htab_elem *alloc_sock_hash_elem(struct bpf_htab *htab,
@@ -2169,19 +2227,6 @@ static struct htab_elem *alloc_sock_hash_elem(struct bpf_htab *htab,
 	return l_new;
 }
 
-static struct htab_elem *lookup_elem_raw(struct hlist_head *head,
-					 u32 hash, void *key, u32 key_size)
-{
-	struct htab_elem *l;
-
-	hlist_for_each_entry_rcu(l, head, hash_node) {
-		if (l->hash == hash && !memcmp(&l->key, key, key_size))
-			return l;
-	}
-
-	return NULL;
-}
-
 static inline u32 htab_map_hash(const void *key, u32 key_len)
 {
 	return jhash(key, key_len, 0);
@@ -2301,8 +2346,9 @@ static int sock_hash_ctx_update_elem(struct bpf_sock_ops_kern *skops,
 		goto bucket_err;
 	}
 
-	e->hash_link = l_new;
-	e->htab = container_of(map, struct bpf_htab, map);
+	rcu_assign_pointer(e->hash_link, l_new);
+	rcu_assign_pointer(e->htab,
+			   container_of(map, struct bpf_htab, map));
 	list_add_tail(&e->list, &psock->maps);
 
 	/* add new element to the head of the list, so that
@@ -2313,8 +2359,10 @@ static int sock_hash_ctx_update_elem(struct bpf_sock_ops_kern *skops,
 		psock = smap_psock_sk(l_old->sk);
 
 		hlist_del_rcu(&l_old->hash_node);
+		write_lock_bh(&l_old->sk->sk_callback_lock);
 		smap_list_hash_remove(psock, l_old);
 		smap_release_sock(psock, l_old->sk);
+		write_unlock_bh(&l_old->sk->sk_callback_lock);
 		free_htab_elem(htab, l_old);
 	}
 	raw_spin_unlock_bh(&b->lock);


* [bpf PATCH v3 4/4] bpf: sockhash, add release routine
  2018-06-22 15:21 [bpf PATCH v3 0/4] BPF fixes for sockhash John Fastabend
                   ` (2 preceding siblings ...)
  2018-06-22 15:21 ` [bpf PATCH v3 3/4] bpf: sockhash fix omitted bucket lock in sock_close John Fastabend
@ 2018-06-22 15:21 ` John Fastabend
  3 siblings, 0 replies; 8+ messages in thread
From: John Fastabend @ 2018-06-22 15:21 UTC (permalink / raw)
  To: john.fastabend, ast, daniel, kafai; +Cc: netdev

Add the map_release_uref pointer to the hashmap ops. This was dropped
when the original sockhash code was ported into bpf-next before the
initial commit.

Fixes: 81110384441a ("bpf: sockmap, add hash map support")
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 kernel/bpf/sockmap.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 333427b..2456986 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -2478,6 +2478,7 @@ struct sock  *__sock_hash_lookup_elem(struct bpf_map *map, void *key)
 	.map_get_next_key = sock_hash_get_next_key,
 	.map_update_elem = sock_hash_update_elem,
 	.map_delete_elem = sock_hash_delete_elem,
+	.map_release_uref = sock_map_release,
 };
 
 BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,


* Re: [bpf PATCH v3 1/4] bpf: sockmap, fix crash when ipv6 sock is added
  2018-06-22 15:21 ` [bpf PATCH v3 1/4] bpf: sockmap, fix crash when ipv6 sock is added John Fastabend
@ 2018-06-23  7:44   ` Martin KaFai Lau
  0 siblings, 0 replies; 8+ messages in thread
From: Martin KaFai Lau @ 2018-06-23  7:44 UTC (permalink / raw)
  To: John Fastabend; +Cc: ast, daniel, netdev

On Fri, Jun 22, 2018 at 08:21:34AM -0700, John Fastabend wrote:
> This fixes a crash where we assign tcp_prot to IPv6 sockets instead
> of tcpv6_prot.
> 
> Previously we overwrote the sk->sk_prot field with tcp_prot even in
> the AF_INET6 case. This patch ensures the correct tcp_prot and
> tcpv6_prot are used.
> 
> Tested with 'netserver -6' and 'netperf -H [IPv6]' as well as
> 'netperf -H [IPv4]'; the previously crashing case is resolved.
> 
> Fixes: 174a79ff9515 ("bpf: sockmap with sk redirect support")
> Reported-by: syzbot+5c063698bdbfac19f363@syzkaller.appspotmail.com
> Signed-off-by: John Fastabend <john.fastabend@gmail.com>
> Signed-off-by: Wei Wang <weiwan@google.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>

> [...]

* Re: [bpf PATCH v3 2/4] bpf: sockmap, fix smap_list_map_remove when psock is in many maps
  2018-06-22 15:21 ` [bpf PATCH v3 2/4] bpf: sockmap, fix smap_list_map_remove when psock is in many maps John Fastabend
@ 2018-06-23  7:45   ` Martin KaFai Lau
  0 siblings, 0 replies; 8+ messages in thread
From: Martin KaFai Lau @ 2018-06-23  7:45 UTC (permalink / raw)
  To: John Fastabend; +Cc: ast, daniel, netdev

On Fri, Jun 22, 2018 at 08:21:39AM -0700, John Fastabend wrote:
> If a hashmap is free'd with open socks it removes the reference to
> the hash entry from the psock. If that is the last reference to the
> psock then it will also be free'd by the reference counting logic.
> However the current logic that removes the hash reference from the
> list of references is broken. In map_list_map_remove() we first check
s/map_list_map_remove/smap_list_remove/

> if the sockmap entry matches and then check if the hashmap entry
> matches. But the sockmap entry will always match because it's NULL in
> this case, which causes the first entry to be removed from the list.
> If this is always the "right" entry (because the user adds/removes
> entries in order) then everything is OK but otherwise a subsequent
> bpf_tcp_close() may reference a free'd object.
> 
> To fix this, create two list handlers: one for sockmap and one for
> sockhash.
> 
> Reported-by: syzbot+0ce137753c78f7b6acc1@syzkaller.appspotmail.com
> Fixes: 81110384441a ("bpf: sockmap, add hash map support")
> Signed-off-by: John Fastabend <john.fastabend@gmail.com>
One nit.  Other than that,

Acked-by: Martin KaFai Lau <kafai@fb.com>

> ---
>  kernel/bpf/sockmap.c |   33 +++++++++++++++++++++------------
>  1 file changed, 21 insertions(+), 12 deletions(-)
> 
> diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
> index d7fd17a..69b26af 100644
> --- a/kernel/bpf/sockmap.c
> +++ b/kernel/bpf/sockmap.c
> @@ -1602,17 +1602,26 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
>  	return ERR_PTR(err);
>  }
>  
> -static void smap_list_remove(struct smap_psock *psock,
> -			     struct sock **entry,
> -			     struct htab_elem *hash_link)
> +static void smap_list_map_remove(struct smap_psock *psock,
> +				 struct sock **entry)
>  {
>  	struct smap_psock_map_entry *e, *tmp;
>  
>  	list_for_each_entry_safe(e, tmp, &psock->maps, list) {
> -		if (e->entry == entry || e->hash_link == hash_link) {
> +		if (e->entry == entry)
> +			list_del(&e->list);
> +	}
> +}
Nit. Add an empty line.

> +static void smap_list_hash_remove(struct smap_psock *psock,
> +				  struct htab_elem *hash_link)
> +{
> +	struct smap_psock_map_entry *e, *tmp;
> +
> +	list_for_each_entry_safe(e, tmp, &psock->maps, list) {
> +		struct htab_elem *c = e->hash_link;
> +
> +		if (c == hash_link)
>  			list_del(&e->list);
> -			break;
> -		}
>  	}
>  }
> [...]


* Re: [bpf PATCH v3 3/4] bpf: sockhash fix omitted bucket lock in sock_close
  2018-06-22 15:21 ` [bpf PATCH v3 3/4] bpf: sockhash fix omitted bucket lock in sock_close John Fastabend
@ 2018-06-23  7:45   ` Martin KaFai Lau
  0 siblings, 0 replies; 8+ messages in thread
From: Martin KaFai Lau @ 2018-06-23  7:45 UTC (permalink / raw)
  To: John Fastabend; +Cc: ast, daniel, netdev

On Fri, Jun 22, 2018 at 08:21:44AM -0700, John Fastabend wrote:
> [...]
> @@ -2301,8 +2346,9 @@ static int sock_hash_ctx_update_elem(struct bpf_sock_ops_kern *skops,
>  		goto bucket_err;
>  	}
>  
> -	e->hash_link = l_new;
> -	e->htab = container_of(map, struct bpf_htab, map);
> +	rcu_assign_pointer(e->hash_link, l_new);
> +	rcu_assign_pointer(e->htab,
> +			   container_of(map, struct bpf_htab, map));
>  	list_add_tail(&e->list, &psock->maps);
Is sock->sk_callback_lock held here?

Others LGTM.

> [...]


end of thread, other threads:[~2018-06-23  7:46 UTC | newest]

Thread overview: 8+ messages
2018-06-22 15:21 [bpf PATCH v3 0/4] BPF fixes for sockhash John Fastabend
2018-06-22 15:21 ` [bpf PATCH v3 1/4] bpf: sockmap, fix crash when ipv6 sock is added John Fastabend
2018-06-23  7:44   ` Martin KaFai Lau
2018-06-22 15:21 ` [bpf PATCH v3 2/4] bpf: sockmap, fix smap_list_map_remove when psock is in many maps John Fastabend
2018-06-23  7:45   ` Martin KaFai Lau
2018-06-22 15:21 ` [bpf PATCH v3 3/4] bpf: sockhash fix omitted bucket lock in sock_close John Fastabend
2018-06-23  7:45   ` Martin KaFai Lau
2018-06-22 15:21 ` [bpf PATCH v3 4/4] bpf: sockhash, add release routine John Fastabend
