* [bpf PATCH 0/4] sockhash/sockmap fixes
@ 2018-07-05 15:36 John Fastabend
  2018-07-05 15:36 ` [bpf PATCH 1/4] bpf: fix sk_skb programs without skb->dev assigned John Fastabend
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: John Fastabend @ 2018-07-05 15:36 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, john.fastabend

The first three patches resolve issues found while testing sockhash
and reviewing code. Syzbot also found them at about the same time I
was working on the fixes. The main issue is that in the sockhash path
we reduced the scope of the sk_callback lock, which meant update and
close could run in parallel; fix that here.

Then, testing sk_msg and sk_skb programs together showed that skb->dev
is not always assigned, and some of the helpers depended on it to look
up the max MTU. Fix this by using SKB_MAX_ALLOC when no MTU is
available.

Finally, Martin spotted that the sockmap code was still using the
qdisc skb cb structure, which I was sure we had fixed long ago. It
looks like we missed it in a merge conflict resolution, and then by
chance the data_end offset was the same in both structures, so
everything sort of continued to work even though it could break at
any moment if either struct changed. So redo the conversion and this
time also convert the helpers.

---

John Fastabend (4):
      bpf: fix sk_skb programs without skb->dev assigned
      bpf: sockhash, disallow bpf_tcp_close and update in parallel
      bpf: sockmap, consume_skb in close path
      bpf: sockmap, convert bpf_compute_data_pointers to bpf_*_sk_skb


 0 files changed


* [bpf PATCH 1/4] bpf: fix sk_skb programs without skb->dev assigned
  2018-07-05 15:36 [bpf PATCH 0/4] sockhash/sockmap fixes John Fastabend
@ 2018-07-05 15:36 ` John Fastabend
  2018-07-05 15:36 ` [bpf PATCH 2/4] bpf: sockhash, disallow bpf_tcp_close and update in parallel John Fastabend
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: John Fastabend @ 2018-07-05 15:36 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, john.fastabend

Multiple BPF helpers in use by sk_skb programs calculate the max
skb length using the __bpf_skb_max_len function. However, this
calculates the max length using the skb->dev pointer which can be
NULL when an sk_skb program is paired with an sk_msg program.

To force this, an sk_msg program needs to redirect into the ingress
path of a sock with an attached sk_skb program. The sk_skb program
then needs to call one of the helpers that adjust the skb size.

To fix the NULL pointer dereference, use SKB_MAX_ALLOC as the size
when no dev is available.

Fixes: 8934ce2fd081 ("bpf: sockmap redirect ingress support")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 0 files changed

diff --git a/net/core/filter.c b/net/core/filter.c
index 0ca6907..3095f1b 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2779,7 +2779,8 @@ static int bpf_skb_net_shrink(struct sk_buff *skb, u32 len_diff)
 
 static u32 __bpf_skb_max_len(const struct sk_buff *skb)
 {
-	return skb->dev->mtu + skb->dev->hard_header_len;
+	return skb->dev ? skb->dev->mtu + skb->dev->hard_header_len :
+			  SKB_MAX_ALLOC;
 }
 
 static int bpf_skb_adjust_net(struct sk_buff *skb, s32 len_diff)

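For reference, a minimal, untested sketch of the kind of program pair
that reaches this path. The map name, key, section names and the trim
length are illustrative only, not taken from the patch or selftests:

#include <linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") sock_map = {
	.type		= BPF_MAP_TYPE_SOCKMAP,
	.key_size	= sizeof(int),
	.value_size	= sizeof(int),
	.max_entries	= 2,
};

/* sk_msg program: redirect sendmsg data into the ingress queue of the
 * socket stored at key 0 of sock_map. */
SEC("sk_msg")
int msg_redir(struct sk_msg_md *msg)
{
	return bpf_msg_redirect_map(msg, &sock_map, 0, BPF_F_INGRESS);
}

/* sk_skb program on the receiving socket: a helper that resizes the
 * skb, e.g. bpf_skb_change_tail(), goes through __bpf_skb_max_len(),
 * which dereferenced skb->dev; on this path skb->dev can be NULL. */
SEC("sk_skb")
int skb_verdict(struct __sk_buff *skb)
{
	bpf_skb_change_tail(skb, skb->len - 4, 0);
	return SK_PASS;
}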

* [bpf PATCH 2/4] bpf: sockhash, disallow bpf_tcp_close and update in parallel
  2018-07-05 15:36 [bpf PATCH 0/4] sockhash/sockmap fixes John Fastabend
  2018-07-05 15:36 ` [bpf PATCH 1/4] bpf: fix sk_skb programs without skb->dev assigned John Fastabend
@ 2018-07-05 15:36 ` John Fastabend
  2018-07-05 15:36 ` [bpf PATCH 3/4] bpf: sockmap, consume_skb in close path John Fastabend
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: John Fastabend @ 2018-07-05 15:36 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, john.fastabend

After the latest lock updates there is no longer anything preventing
a close and recvmsg call from running in parallel. Additionally, we
can race update with close if we close a socket and simultaneously
update it via the BPF userspace API (note the cgroup ops already run
with sock_lock held).

To resolve this, take the sock lock in the close and update paths.

Reported-by: syzbot+b680e42077a0d7c9a0c4@syzkaller.appspotmail.com
Fixes: e9db4ef6bf4c ("bpf: sockhash fix omitted bucket lock in sock_close")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 0 files changed

diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 00fb2e3..9c67e96 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -312,10 +312,12 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
 	struct smap_psock *psock;
 	struct sock *osk;
 
+	lock_sock(sk);
 	rcu_read_lock();
 	psock = smap_psock_sk(sk);
 	if (unlikely(!psock)) {
 		rcu_read_unlock();
+		release_sock(sk);
 		return sk->sk_prot->close(sk, timeout);
 	}
 
@@ -371,6 +373,7 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
 		e = psock_map_pop(sk, psock);
 	}
 	rcu_read_unlock();
+	release_sock(sk);
 	close_fun(sk, timeout);
 }
 
@@ -2069,7 +2072,13 @@ static int sock_map_update_elem(struct bpf_map *map,
 		return -EOPNOTSUPP;
 	}
 
+	lock_sock(skops.sk);
+	preempt_disable();
+	rcu_read_lock();
 	err = sock_map_ctx_update_elem(&skops, map, key, flags);
+	rcu_read_unlock();
+	preempt_enable();
+	release_sock(skops.sk);
 	fput(socket->file);
 	return err;
 }
@@ -2410,7 +2419,13 @@ static int sock_hash_update_elem(struct bpf_map *map,
 		return -EINVAL;
 	}
 
+	lock_sock(skops.sk);
+	preempt_disable();
+	rcu_read_lock();
 	err = sock_hash_ctx_update_elem(&skops, map, key, flags);
+	rcu_read_unlock();
+	preempt_enable();
+	release_sock(skops.sk);
 	fput(socket->file);
 	return err;
 }
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index d10ecd7..a31a1ba 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -735,7 +735,9 @@ static int map_update_elem(union bpf_attr *attr)
 	if (bpf_map_is_dev_bound(map)) {
 		err = bpf_map_offload_update_elem(map, key, value, attr->flags);
 		goto out;
-	} else if (map->map_type == BPF_MAP_TYPE_CPUMAP) {
+	} else if (map->map_type == BPF_MAP_TYPE_CPUMAP ||
+		   map->map_type == BPF_MAP_TYPE_SOCKHASH ||
+		   map->map_type == BPF_MAP_TYPE_SOCKMAP) {
 		err = map->ops->map_update_elem(map, key, value, attr->flags);
 		goto out;
 	}

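To make the race concrete, a hypothetical userspace sketch using
libbpf's bpf_map_update_elem() wrapper; map_fd (the sockmap/sockhash)
and sock_fd (an established TCP socket) are illustrative names:

#include <unistd.h>
#include <bpf/bpf.h>	/* libbpf wrapper around the bpf(2) syscall */

/* Thread A: update an entry through the userspace API; with this
 * patch sock_map_update_elem()/sock_hash_update_elem() run under
 * lock_sock(sk). */
static void thread_a(int map_fd, int sock_fd)
{
	int key = 1;

	bpf_map_update_elem(map_fd, &key, &sock_fd, BPF_ANY);
}

/* Thread B: close the same socket concurrently; close() reaches
 * bpf_tcp_close(), which now also takes lock_sock(sk), so update and
 * close are serialized instead of racing on the psock state. */
static void thread_b(int sock_fd)
{
	close(sock_fd);
}

Presumably the syscall.c hunk above exists so that the sleeping
lock_sock() is not taken inside the generic
rcu_read_lock()/preempt_disable() section of map_update_elem(); the
sockmap.c hunks take that protection themselves instead.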

* [bpf PATCH 3/4] bpf: sockmap, consume_skb in close path
  2018-07-05 15:36 [bpf PATCH 0/4] sockhash/sockmap fixes John Fastabend
  2018-07-05 15:36 ` [bpf PATCH 1/4] bpf: fix sk_skb programs without skb->dev assigned John Fastabend
  2018-07-05 15:36 ` [bpf PATCH 2/4] bpf: sockhash, disallow bpf_tcp_close and update in parallel John Fastabend
@ 2018-07-05 15:36 ` John Fastabend
  2018-07-05 15:36 ` [bpf PATCH 4/4] bpf: sockmap, convert bpf_compute_data_pointers to bpf_*_sk_skb John Fastabend
  2018-07-05 15:48 ` [bpf PATCH 0/4] sockhash/sockmap fixes John Fastabend
  4 siblings, 0 replies; 6+ messages in thread
From: John Fastabend @ 2018-07-05 15:36 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, john.fastabend

Currently, when a sock is closed and the bpf_tcp_close() callback is
used, we remove memory but do not free the skb. Call consume_skb() if
the skb is attached to the buffer.

Reported-by: syzbot+d464d2c20c717ef5a6a8@syzkaller.appspotmail.com
Fixes: 1aa12bdf1bfb ("bpf: sockmap, add sock close() hook to remove socks")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 0 files changed

diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 9c67e96..dfc8a8a 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -571,7 +571,8 @@ static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)
 	while (sg[i].length) {
 		free += sg[i].length;
 		sk_mem_uncharge(sk, sg[i].length);
-		put_page(sg_page(&sg[i]));
+		if (!md->skb)
+			put_page(sg_page(&sg[i]));
 		sg[i].length = 0;
 		sg[i].page_link = 0;
 		sg[i].offset = 0;
@@ -580,6 +581,8 @@ static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)
 		if (i == MAX_SKB_FRAGS)
 			i = 0;
 	}
+	if (md->skb)
+		consume_skb(md->skb);
 
 	return free;
 }


* [bpf PATCH 4/4] bpf: sockmap, convert bpf_compute_data_pointers to bpf_*_sk_skb
  2018-07-05 15:36 [bpf PATCH 0/4] sockhash/sockmap fixes John Fastabend
                   ` (2 preceding siblings ...)
  2018-07-05 15:36 ` [bpf PATCH 3/4] bpf: sockmap, consume_skb in close path John Fastabend
@ 2018-07-05 15:36 ` John Fastabend
  2018-07-05 15:48 ` [bpf PATCH 0/4] sockhash/sockmap fixes John Fastabend
  4 siblings, 0 replies; 6+ messages in thread
From: John Fastabend @ 2018-07-05 15:36 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, john.fastabend

In commit

  'bpf: bpf_compute_data uses incorrect cb structure' (8108a7751512)

we added the routine bpf_compute_data_end_sk_skb() to compute the
correct data_end values, but this has since been lost. In kernel
v4.14 this was correct and the above patch was applied in its
entirety. Then, when v4.14 was merged into the v4.15-rc1 net-next
tree, we lost the piece that renamed bpf_compute_data_pointers to
the new function bpf_compute_data_end_sk_skb. This happened here,

e1ea2f9856b7 ("Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net")

when it conflicted with the following rename patch,

6aaae2b6c433 ("bpf: rename bpf_compute_data_end into bpf_compute_data_pointers")

Finally, after a refactor I thought even the function
bpf_compute_data_end_sk_skb() was no longer needed and it was
erroneously removed.

However, we never reverted the sk_skb_convert_ctx_access() usage of
tcp_skb_cb, which had been committed and survived the merge conflict.
Here we fix this by adding back the helper and the *_data_end_sk_skb()
usage. Using the bpf_skb_data_end mapping is not correct because it
expects a qdisc_skb_cb object, but at the sock layer this is not the
case. Even though it happens to work here, because we don't overwrite
any data in use at the socket layer and the cb structure is cleared
later, this has the potential to create some subtle issues. But, even
more concretely, the filter.c access check uses tcp_skb_cb.

And, by some act of chance,

struct bpf_skb_data_end {
        struct qdisc_skb_cb        qdisc_cb;             /*     0    28 */

        /* XXX 4 bytes hole, try to pack */

        void *                     data_meta;            /*    32     8 */
        void *                     data_end;             /*    40     8 */

        /* size: 48, cachelines: 1, members: 3 */
        /* sum members: 44, holes: 1, sum holes: 4 */
        /* last cacheline: 48 bytes */
};

and then tcp_skb_cb,

struct tcp_skb_cb {
	[...]
                struct {
                        __u32      flags;                /*    24     4 */
                        struct sock * sk_redir;          /*    32     8 */
                        void *     data_end;             /*    40     8 */
                } bpf;                                   /*          24 */
        };

So when we use offsetof() to track down the byte offset, we get 40 in
either case and everything continues to work. Fix this mess and use
the correct structures; it is unclear how long this would have kept
working until someone moved the structs around.

Reported-by: Martin KaFai Lau <kafai@fb.com>
Fixes: e1ea2f9856b7 ("Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net")
Fixes: 6aaae2b6c433 ("bpf: rename bpf_compute_data_end into bpf_compute_data_pointers")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 0 files changed

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 800582b..af3ec72 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -828,6 +828,10 @@ struct tcp_skb_cb {
 
 #define TCP_SKB_CB(__skb)	((struct tcp_skb_cb *)&((__skb)->cb[0]))
 
+static inline void bpf_compute_data_end_sk_skb(struct sk_buff *skb)
+{
+	TCP_SKB_CB(skb)->bpf.data_end = skb->data + skb_headlen(skb);
+}
 
 #if IS_ENABLED(CONFIG_IPV6)
 /* This is the variant of inet6_iif() that must be used by TCP,
diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index dfc8a8a..98fb793 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -1236,7 +1236,7 @@ static int smap_verdict_func(struct smap_psock *psock, struct sk_buff *skb)
 	 */
 	TCP_SKB_CB(skb)->bpf.sk_redir = NULL;
 	skb->sk = psock->sock;
-	bpf_compute_data_pointers(skb);
+	bpf_compute_data_end_sk_skb(skb);
 	preempt_disable();
 	rc = (*prog->bpf_func)(skb, prog->insnsi);
 	preempt_enable();
@@ -1491,7 +1491,7 @@ static int smap_parse_func_strparser(struct strparser *strp,
 	 * any socket yet.
 	 */
 	skb->sk = psock->sock;
-	bpf_compute_data_pointers(skb);
+	bpf_compute_data_end_sk_skb(skb);
 	rc = (*prog->bpf_func)(skb, prog->insnsi);
 	skb->sk = NULL;
 	rcu_read_unlock();
diff --git a/net/core/filter.c b/net/core/filter.c
index 3095f1b..4702680 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -1762,6 +1762,37 @@ static inline void bpf_pull_mac_rcsum(struct sk_buff *skb)
 	.arg2_type	= ARG_ANYTHING,
 };
 
+static inline int sk_skb_try_make_writable(struct sk_buff *skb,
+					   unsigned int write_len)
+{
+	int err = __bpf_try_make_writable(skb, write_len);
+
+	bpf_compute_data_end_sk_skb(skb);
+	return err;
+}
+
+BPF_CALL_2(sk_skb_pull_data, struct sk_buff *, skb, u32, len)
+{
+	/* Idea is the following: should the needed direct read/write
+	 * test fail during runtime, we can pull in more data and redo
+	 * again, since implicitly, we invalidate previous checks here.
+	 *
+	 * Or, since we know how much we need to make read/writeable,
+	 * this can be done once at the program beginning for direct
+	 * access case. By this we overcome limitations of only current
+	 * headroom being accessible.
+	 */
+	return sk_skb_try_make_writable(skb, len ? : skb_headlen(skb));
+}
+
+static const struct bpf_func_proto sk_skb_pull_data_proto = {
+	.func		= sk_skb_pull_data,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_ANYTHING,
+};
+
 BPF_CALL_5(bpf_l3_csum_replace, struct sk_buff *, skb, u32, offset,
 	   u64, from, u64, to, u64, flags)
 {
@@ -2864,8 +2895,8 @@ static int bpf_skb_trim_rcsum(struct sk_buff *skb, unsigned int new_len)
 	return __skb_trim_rcsum(skb, new_len);
 }
 
-BPF_CALL_3(bpf_skb_change_tail, struct sk_buff *, skb, u32, new_len,
-	   u64, flags)
+static inline int __bpf_skb_change_tail(struct sk_buff *skb, u32 new_len,
+					u64 flags)
 {
 	u32 max_len = __bpf_skb_max_len(skb);
 	u32 min_len = __bpf_skb_min_len(skb);
@@ -2901,6 +2932,13 @@ static int bpf_skb_trim_rcsum(struct sk_buff *skb, unsigned int new_len)
 		if (!ret && skb_is_gso(skb))
 			skb_gso_reset(skb);
 	}
+	return ret;
+}
+
+BPF_CALL_3(bpf_skb_change_tail, struct sk_buff *, skb, u32, new_len,
+	   u64, flags)
+{
+	int ret = __bpf_skb_change_tail(skb, new_len, flags);
 
 	bpf_compute_data_pointers(skb);
 	return ret;
@@ -2915,9 +2953,27 @@ static int bpf_skb_trim_rcsum(struct sk_buff *skb, unsigned int new_len)
 	.arg3_type	= ARG_ANYTHING,
 };
 
-BPF_CALL_3(bpf_skb_change_head, struct sk_buff *, skb, u32, head_room,
+BPF_CALL_3(sk_skb_change_tail, struct sk_buff *, skb, u32, new_len,
 	   u64, flags)
 {
+	int ret = __bpf_skb_change_tail(skb, new_len, flags);
+
+	bpf_compute_data_end_sk_skb(skb);
+	return ret;
+}
+
+static const struct bpf_func_proto sk_skb_change_tail_proto = {
+	.func		= sk_skb_change_tail,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_ANYTHING,
+	.arg3_type	= ARG_ANYTHING,
+};
+
+static inline int __bpf_skb_change_head(struct sk_buff *skb, u32 head_room,
+					u64 flags)
+{
 	u32 max_len = __bpf_skb_max_len(skb);
 	u32 new_len = skb->len + head_room;
 	int ret;
@@ -2942,8 +2998,16 @@ static int bpf_skb_trim_rcsum(struct sk_buff *skb, unsigned int new_len)
 		skb_reset_mac_header(skb);
 	}
 
+	return ret;
+}
+
+BPF_CALL_3(bpf_skb_change_head, struct sk_buff *, skb, u32, head_room,
+	   u64, flags)
+{
+	int ret = __bpf_skb_change_head(skb, head_room, flags);
+
 	bpf_compute_data_pointers(skb);
-	return 0;
+	return ret;
 }
 
 static const struct bpf_func_proto bpf_skb_change_head_proto = {
@@ -2955,6 +3019,23 @@ static int bpf_skb_trim_rcsum(struct sk_buff *skb, unsigned int new_len)
 	.arg3_type	= ARG_ANYTHING,
 };
 
+BPF_CALL_3(sk_skb_change_head, struct sk_buff *, skb, u32, head_room,
+	   u64, flags)
+{
+	int ret = __bpf_skb_change_head(skb, head_room, flags);
+
+	bpf_compute_data_end_sk_skb(skb);
+	return ret;
+}
+
+static const struct bpf_func_proto sk_skb_change_head_proto = {
+	.func		= sk_skb_change_head,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_ANYTHING,
+	.arg3_type	= ARG_ANYTHING,
+};
 static unsigned long xdp_get_metalen(const struct xdp_buff *xdp)
 {
 	return xdp_data_meta_unsupported(xdp) ? 0 :
@@ -4618,9 +4699,12 @@ bool bpf_helper_changes_pkt_data(void *func)
 	    func == bpf_skb_store_bytes ||
 	    func == bpf_skb_change_proto ||
 	    func == bpf_skb_change_head ||
+	    func == sk_skb_change_head ||
 	    func == bpf_skb_change_tail ||
+	    func == sk_skb_change_tail ||
 	    func == bpf_skb_adjust_room ||
 	    func == bpf_skb_pull_data ||
+	    func == sk_skb_pull_data ||
 	    func == bpf_clone_redirect ||
 	    func == bpf_l3_csum_replace ||
 	    func == bpf_l4_csum_replace ||
@@ -4872,11 +4956,11 @@ bool bpf_helper_changes_pkt_data(void *func)
 	case BPF_FUNC_skb_load_bytes:
 		return &bpf_skb_load_bytes_proto;
 	case BPF_FUNC_skb_pull_data:
-		return &bpf_skb_pull_data_proto;
+		return &sk_skb_pull_data_proto;
 	case BPF_FUNC_skb_change_tail:
-		return &bpf_skb_change_tail_proto;
+		return &sk_skb_change_tail_proto;
 	case BPF_FUNC_skb_change_head:
-		return &bpf_skb_change_head_proto;
+		return &sk_skb_change_head_proto;
 	case BPF_FUNC_get_socket_cookie:
 		return &bpf_get_socket_cookie_proto;
 	case BPF_FUNC_get_socket_uid:

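As an aside, the offset coincidence described in the commit message
above can be reproduced with a small userspace sketch. The structs
below are simplified stand-ins mirroring the pahole output, not the
kernel definitions:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct sock;

/* Stand-in for struct bpf_skb_data_end: a 28-byte qdisc_skb_cb, a
 * 4-byte alignment hole, then two pointers. */
struct fake_bpf_skb_data_end {
	unsigned char qdisc_cb[28];
	void *data_meta;		/* offset 32 */
	void *data_end;			/* offset 40 */
};

/* Stand-in for the bpf member of struct tcp_skb_cb. */
struct fake_tcp_skb_cb {
	unsigned char other_cb[24];
	struct {
		uint32_t flags;		/* offset 24 */
		struct sock *sk_redir;	/* offset 32 */
		void *data_end;		/* offset 40 */
	} bpf;
};

int main(void)
{
	/* Both print 40 on a 64-bit build, which is why loads of
	 * data_end through the wrong cb mapping still hit the right
	 * bytes until one of the structs changes. */
	printf("%zu %zu\n",
	       offsetof(struct fake_bpf_skb_data_end, data_end),
	       offsetof(struct fake_tcp_skb_cb, bpf.data_end));
	return 0;
}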

* Re: [bpf PATCH 0/4] sockhash/sockmap fixes
  2018-07-05 15:36 [bpf PATCH 0/4] sockhash/sockmap fixes John Fastabend
                   ` (3 preceding siblings ...)
  2018-07-05 15:36 ` [bpf PATCH 4/4] bpf: sockmap, convert bpf_compute_data_pointers to bpf_*_sk_skb John Fastabend
@ 2018-07-05 15:48 ` John Fastabend
  4 siblings, 0 replies; 6+ messages in thread
From: John Fastabend @ 2018-07-05 15:48 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev

On 07/05/2018 08:36 AM, John Fastabend wrote:
> The first three patches resolve issues found while testing sockhash
> and reviewing code. Syzbot also found them at about the same time I
> was working on the fixes. The main issue is that in the sockhash path
> we reduced the scope of the sk_callback lock, which meant update and
> close could run in parallel; fix that here.

[...]

> ---
> 
> John Fastabend (4):
>       bpf: fix sk_skb programs without skb->dev assigned
>       bpf: sockhash, disallow bpf_tcp_close and update in parallel
>       bpf: sockmap, consume_skb in close path
>       bpf: sockmap, convert bpf_compute_data_pointers to bpf_*_sk_skb
> 
> 
>  0 files changed
    ^^^^^^^^^^^^^^^

Ah dang, something about my workflow causes git to report 0 files
changed occasionally, and I forgot to check it before sending. I'll
send a v2. And get some more coffee, I guess. Thanks, and sorry for
the noise.

