* [net-next PATCH 00/10] BPF: sockmap and sk redirect support
@ 2017-08-16  5:30 John Fastabend
  2017-08-16  5:30 ` [net-next PATCH 01/10] net: early init support for strparser John Fastabend
                   ` (11 more replies)
  0 siblings, 12 replies; 31+ messages in thread
From: John Fastabend @ 2017-08-16  5:30 UTC (permalink / raw)
  To: davem, daniel, ast; +Cc: tgraf, netdev, john.fastabend, tom

This series implements a sockmap and a socket redirect helper for BPF,
using a model similar to the XDP netdev redirect. A sockmap is a BPF map
type that holds references to sock structs. Then, with a new sk redirect
BPF helper, BPF programs can use the map to redirect skbs between
sockets,

      bpf_sk_redirect_map(map, key, flags)

Finally, we need a call site to attach our BPF logic to in order to do
socket redirects. We added hooks to recv_sock using the existing
strparser infrastructure to do this. The call site is added via the BPF
attach map call. To enable users to use this infrastructure, a new BPF
program type, BPF_PROG_TYPE_SK_SKB, is created that allows users to
reference sock details, such as port and IP address fields, to build
useful socket layer programs. The sockmap datapath is as follows,

     recv -> strparser -> verdict/action

where this series implements the drop and redirect actions. Additional
actions can be added as needed.
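
To make the verdict/action step concrete, here is a toy userspace model
of it. The sk_action values mirror the uapi enum added later in this
series; the message struct and port-based decision are invented
stand-ins (a real verdict program is BPF code that calls
bpf_sk_redirect_map() to pick the destination socket):

```c
#include <assert.h>

/* sk_action values as added to the uapi in this series. */
enum sk_action {
	SK_ABORTED = 0,
	SK_DROP,
	SK_REDIRECT,
};

/* Invented stand-in for a parsed message; field is for illustration. */
struct toy_msg {
	int dst_port;
};

/* Toy verdict: redirect traffic for port 8080, drop everything else. */
enum sk_action toy_verdict(const struct toy_msg *m)
{
	return m->dst_port == 8080 ? SK_REDIRECT : SK_DROP;
}
```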

A sample program is provided to illustrate how a sockmap can
be integrated with cgroups and used to add/delete sockets in
the map. The program is simple but should show many of the
key ideas.

To test this work, test_maps in selftests/bpf was leveraged. We added a
set of tests that add sockets and do send/recv ops on the sockets to
ensure correct behavior. Additionally, the selftests run a series of
negative test cases. We can expand on this in the future.

I also have a basic test program I use with iperf/netperf clients that
could be sent as an additional sample if folks want it. It needs a bit
of cleanup before sending to the list, so it wasn't included in this
series.

For people who prefer git over pulling patches out of their mail
client, I've posted the code here,

https://github.com/jrfastab/linux-kernel-xdp/tree/sockmap

For some background on the genesis of this work, it may be helpful to
review these slides from netconf 2017 by Thomas Graf,

http://vger.kernel.org/netconf2017.html
https://docs.google.com/a/covalent.io/presentation/d/1dwSKSBGpUHD3WO5xxzZWj8awV_-xL-oYhvqQMOBhhtk/edit?usp=sharing

Thanks to Daniel Borkmann for reviewing and providing initial
feedback.

---

John Fastabend (10):
      net: early init support for strparser
      net: add sendmsg_locked and sendpage_locked to af_inet6
      net: fixes for skb_send_sock
      bpf: introduce new program type for skbs on sockets
      bpf: export bpf_prog_inc_not_zero
      bpf: sockmap with sk redirect support
      bpf: add access to sock fields and pkt data from sk_skb programs
      bpf: sockmap sample program
      bpf: selftests: add tests for new __sk_buff members
      bpf: selftests add sockmap tests


 include/linux/bpf.h                                |   14 
 include/linux/bpf_types.h                          |    2 
 include/linux/filter.h                             |    2 
 include/uapi/linux/bpf.h                           |   43 +
 kernel/bpf/Makefile                                |    2 
 kernel/bpf/sockmap.c                               |  792 ++++++++++++++++++++
 kernel/bpf/syscall.c                               |   54 +
 kernel/bpf/verifier.c                              |   15 
 net/core/filter.c                                  |  248 ++++++
 net/core/skbuff.c                                  |    2 
 net/ipv6/af_inet6.c                                |    2 
 net/socket.c                                       |    2 
 net/strparser/strparser.c                          |   10 
 samples/bpf/bpf_load.c                             |    8 
 samples/sockmap/Makefile                           |   78 ++
 samples/sockmap/sockmap_kern.c                     |  110 +++
 samples/sockmap/sockmap_user.c                     |  286 +++++++
 tools/include/uapi/linux/bpf.h                     |   46 +
 tools/lib/bpf/bpf.c                                |   14 
 tools/lib/bpf/bpf.h                                |    4 
 tools/lib/bpf/libbpf.c                             |   29 +
 tools/lib/bpf/libbpf.h                             |    2 
 tools/testing/selftests/bpf/Makefile               |    2 
 tools/testing/selftests/bpf/bpf_helpers.h          |    7 
 tools/testing/selftests/bpf/sockmap_parse_prog.c   |   38 +
 tools/testing/selftests/bpf/sockmap_verdict_prog.c |   48 +
 tools/testing/selftests/bpf/test_maps.c            |  308 ++++++++
 tools/testing/selftests/bpf/test_progs.c           |   55 -
 tools/testing/selftests/bpf/test_verifier.c        |  152 ++++
 29 files changed, 2316 insertions(+), 59 deletions(-)
 create mode 100644 kernel/bpf/sockmap.c
 create mode 100644 samples/sockmap/Makefile
 create mode 100644 samples/sockmap/sockmap_kern.c
 create mode 100644 samples/sockmap/sockmap_user.c
 create mode 100644 tools/testing/selftests/bpf/sockmap_parse_prog.c
 create mode 100644 tools/testing/selftests/bpf/sockmap_verdict_prog.c

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [net-next PATCH 01/10] net: early init support for strparser
  2017-08-16  5:30 [net-next PATCH 00/10] BPF: sockmap and sk redirect support John Fastabend
@ 2017-08-16  5:30 ` John Fastabend
  2017-08-16  5:31 ` [net-next PATCH 02/10] net: add sendmsg_locked and sendpage_locked to af_inet6 John Fastabend
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: John Fastabend @ 2017-08-16  5:30 UTC (permalink / raw)
  To: davem, daniel, ast; +Cc: tgraf, netdev, john.fastabend, tom

It is useful to allow strparser to init sockets before the read_sock
callback has been established.
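
A toy userspace model of the guard this patch adds: strp_read_sock()
now refuses to run until the socket's read_sock callback is in place,
which allows strparser to be initialized early. The struct names below
are simplified stand-ins for struct socket / struct proto_ops:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Simplified stand-ins for struct proto_ops / struct socket. */
struct toy_ops {
	int (*read_sock)(void);
};

struct toy_socket {
	const struct toy_ops *ops;
};

/* Mirrors the new early-exit: bail with -EBUSY until read_sock exists. */
int toy_strp_read_sock(const struct toy_socket *sock)
{
	if (!sock || !sock->ops || !sock->ops->read_sock)
		return -EBUSY;	/* not ready yet; caller may retry later */
	return sock->ops->read_sock();
}
```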

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 net/strparser/strparser.c |   10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/net/strparser/strparser.c b/net/strparser/strparser.c
index 0d18fbc..434aa66 100644
--- a/net/strparser/strparser.c
+++ b/net/strparser/strparser.c
@@ -373,6 +373,9 @@ static int strp_read_sock(struct strparser *strp)
 	struct socket *sock = strp->sk->sk_socket;
 	read_descriptor_t desc;
 
+	if (unlikely(!sock || !sock->ops || !sock->ops->read_sock))
+		return -EBUSY;
+
 	desc.arg.data = strp;
 	desc.error = 0;
 	desc.count = 1; /* give more than one skb per call */
@@ -486,12 +489,7 @@ int strp_init(struct strparser *strp, struct sock *sk,
 	 * The upper layer calls strp_process for each skb to be parsed.
 	 */
 
-	if (sk) {
-		struct socket *sock = sk->sk_socket;
-
-		if (!sock->ops->read_sock || !sock->ops->peek_len)
-			return -EAFNOSUPPORT;
-	} else {
+	if (!sk) {
 		if (!cb->lock || !cb->unlock)
 			return -EINVAL;
 	}


* [net-next PATCH 02/10] net: add sendmsg_locked and sendpage_locked to af_inet6
  2017-08-16  5:30 [net-next PATCH 00/10] BPF: sockmap and sk redirect support John Fastabend
  2017-08-16  5:30 ` [net-next PATCH 01/10] net: early init support for strparser John Fastabend
@ 2017-08-16  5:31 ` John Fastabend
  2017-08-16  5:31 ` [net-next PATCH 03/10] net: fixes for skb_send_sock John Fastabend
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: John Fastabend @ 2017-08-16  5:31 UTC (permalink / raw)
  To: davem, daniel, ast; +Cc: tgraf, netdev, john.fastabend, tom

To complete the sendmsg_locked and sendpage_locked implementation, add
the hooks for af_inet6 as well.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 net/ipv6/af_inet6.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
index 0a7c740..3b58ee7 100644
--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -554,6 +554,8 @@ int inet6_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
 	.recvmsg	   = inet_recvmsg,		/* ok		*/
 	.mmap		   = sock_no_mmap,
 	.sendpage	   = inet_sendpage,
+	.sendmsg_locked    = tcp_sendmsg_locked,
+	.sendpage_locked   = tcp_sendpage_locked,
 	.splice_read	   = tcp_splice_read,
 	.read_sock	   = tcp_read_sock,
 	.peek_len	   = tcp_peek_len,


* [net-next PATCH 03/10] net: fixes for skb_send_sock
  2017-08-16  5:30 [net-next PATCH 00/10] BPF: sockmap and sk redirect support John Fastabend
  2017-08-16  5:30 ` [net-next PATCH 01/10] net: early init support for strparser John Fastabend
  2017-08-16  5:31 ` [net-next PATCH 02/10] net: add sendmsg_locked and sendpage_locked to af_inet6 John Fastabend
@ 2017-08-16  5:31 ` John Fastabend
  2017-08-16  5:31 ` [net-next PATCH 04/10] bpf: introduce new program type for skbs on sockets John Fastabend
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: John Fastabend @ 2017-08-16  5:31 UTC (permalink / raw)
  To: davem, daniel, ast; +Cc: tgraf, netdev, john.fastabend, tom

A couple of fixes to the new skb_send_sock infrastructure. However, no
users of this code exist yet (a user is added in the next handful of
patches), so it should not be possible to trigger a panic with existing
in-kernel code.
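
The first hunk below caps the kvec length at what remains of the skb
linear area rather than the total length left to send. A minimal
arithmetic sketch of why that matters (function names invented):

```c
#include <assert.h>

int toy_min(int a, int b)
{
	return a < b ? a : b;
}

/* Mirrors: slen = min_t(int, len, skb_headlen(skb) - offset).
 * Using plain len here would let the kvec run past the linear head.
 */
int toy_head_kvec_len(int len, int headlen, int offset)
{
	return toy_min(len, headlen - offset);
}
```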

Fixes: 306b13eb3cf9 ("proto_ops: Add locked held versions of sendmsg and sendpage")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 net/core/skbuff.c |    2 +-
 net/socket.c      |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index cb12359..917da73 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -2285,7 +2285,7 @@ int skb_send_sock_locked(struct sock *sk, struct sk_buff *skb, int offset,
 
 		slen = min_t(int, len, skb_headlen(skb) - offset);
 		kv.iov_base = skb->data + offset;
-		kv.iov_len = len;
+		kv.iov_len = slen;
 		memset(&msg, 0, sizeof(msg));
 
 		ret = kernel_sendmsg_locked(sk, &msg, &kv, 1, slen);
diff --git a/net/socket.c b/net/socket.c
index b332d1e..c729625 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -658,7 +658,7 @@ int kernel_sendmsg_locked(struct sock *sk, struct msghdr *msg,
 	struct socket *sock = sk->sk_socket;
 
 	if (!sock->ops->sendmsg_locked)
-		sock_no_sendmsg_locked(sk, msg, size);
+		return sock_no_sendmsg_locked(sk, msg, size);
 
 	iov_iter_kvec(&msg->msg_iter, WRITE | ITER_KVEC, vec, num, size);
 


* [net-next PATCH 04/10] bpf: introduce new program type for skbs on sockets
  2017-08-16  5:30 [net-next PATCH 00/10] BPF: sockmap and sk redirect support John Fastabend
                   ` (2 preceding siblings ...)
  2017-08-16  5:31 ` [net-next PATCH 03/10] net: fixes for skb_send_sock John Fastabend
@ 2017-08-16  5:31 ` John Fastabend
  2017-08-16  5:32 ` [net-next PATCH 05/10] bpf: export bpf_prog_inc_not_zero John Fastabend
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: John Fastabend @ 2017-08-16  5:31 UTC (permalink / raw)
  To: davem, daniel, ast; +Cc: tgraf, netdev, john.fastabend, tom

A class of programs, run from strparser and soon from a new map type
called sockmap, use an skb as the context but run on established
sockets. By creating a specific program type for these we can use BPF
helpers that expect full sockets and have the verifier ensure these
helpers are not used out of context.

The new type is BPF_PROG_TYPE_SK_SKB. This patch introduces the
infrastructure and type.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 include/linux/bpf_types.h |    1 +
 include/uapi/linux/bpf.h  |    1 +
 net/core/filter.c         |   36 ++++++++++++++++++++++++++++++++++++
 3 files changed, 38 insertions(+)

diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index b1e1035..4b72db3 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -11,6 +11,7 @@
 BPF_PROG_TYPE(BPF_PROG_TYPE_LWT_OUT, lwt_inout_prog_ops)
 BPF_PROG_TYPE(BPF_PROG_TYPE_LWT_XMIT, lwt_xmit_prog_ops)
 BPF_PROG_TYPE(BPF_PROG_TYPE_SOCK_OPS, sock_ops_prog_ops)
+BPF_PROG_TYPE(BPF_PROG_TYPE_SK_SKB, sk_skb_prog_ops)
 #endif
 #ifdef CONFIG_BPF_EVENTS
 BPF_PROG_TYPE(BPF_PROG_TYPE_KPROBE, kprobe_prog_ops)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 91da837..2e796e3 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -127,6 +127,7 @@ enum bpf_prog_type {
 	BPF_PROG_TYPE_LWT_OUT,
 	BPF_PROG_TYPE_LWT_XMIT,
 	BPF_PROG_TYPE_SOCK_OPS,
+	BPF_PROG_TYPE_SK_SKB,
 };
 
 enum bpf_attach_type {
diff --git a/net/core/filter.c b/net/core/filter.c
index 5afe3ac..c1108bf 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3234,6 +3234,20 @@ static unsigned long bpf_xdp_copy(void *dst_buff, const void *src_buff,
 	}
 }
 
+static const struct bpf_func_proto *sk_skb_func_proto(enum bpf_func_id func_id)
+{
+	switch (func_id) {
+	case BPF_FUNC_skb_load_bytes:
+		return &bpf_skb_load_bytes_proto;
+	case BPF_FUNC_get_socket_cookie:
+		return &bpf_get_socket_cookie_proto;
+	case BPF_FUNC_get_socket_uid:
+		return &bpf_get_socket_uid_proto;
+	default:
+		return bpf_base_func_proto(func_id);
+	}
+}
+
 static const struct bpf_func_proto *
 lwt_xmit_func_proto(enum bpf_func_id func_id)
 {
@@ -3525,6 +3539,22 @@ static bool sock_ops_is_valid_access(int off, int size,
 	return __is_valid_sock_ops_access(off, size);
 }
 
+static bool sk_skb_is_valid_access(int off, int size,
+				   enum bpf_access_type type,
+				   struct bpf_insn_access_aux *info)
+{
+	switch (off) {
+	case bpf_ctx_range(struct __sk_buff, data):
+		info->reg_type = PTR_TO_PACKET;
+		break;
+	case bpf_ctx_range(struct __sk_buff, data_end):
+		info->reg_type = PTR_TO_PACKET_END;
+		break;
+	}
+
+	return bpf_skb_is_valid_access(off, size, type, info);
+}
+
 static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 				  const struct bpf_insn *si,
 				  struct bpf_insn *insn_buf,
@@ -3992,6 +4022,12 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 	.convert_ctx_access	= sock_ops_convert_ctx_access,
 };
 
+const struct bpf_verifier_ops sk_skb_prog_ops = {
+	.get_func_proto		= sk_skb_func_proto,
+	.is_valid_access	= sk_skb_is_valid_access,
+	.convert_ctx_access	= bpf_convert_ctx_access,
+};
+
 int sk_detach_filter(struct sock *sk)
 {
 	int ret = -ENOENT;


* [net-next PATCH 05/10] bpf: export bpf_prog_inc_not_zero
  2017-08-16  5:30 [net-next PATCH 00/10] BPF: sockmap and sk redirect support John Fastabend
                   ` (3 preceding siblings ...)
  2017-08-16  5:31 ` [net-next PATCH 04/10] bpf: introduce new program type for skbs on sockets John Fastabend
@ 2017-08-16  5:32 ` John Fastabend
  2017-08-16  5:32 ` [net-next PATCH 06/10] bpf: sockmap with sk redirect support John Fastabend
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: John Fastabend @ 2017-08-16  5:32 UTC (permalink / raw)
  To: davem, daniel, ast; +Cc: tgraf, netdev, john.fastabend, tom

bpf_prog_inc_not_zero will be used by the upcoming sockmap patches;
this patch simply exports it so we can pull it in.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 include/linux/bpf.h  |    7 +++++++
 kernel/bpf/syscall.c |    3 ++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 39229c4..d6e1de8 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -252,6 +252,7 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
 struct bpf_prog * __must_check bpf_prog_add(struct bpf_prog *prog, int i);
 void bpf_prog_sub(struct bpf_prog *prog, int i);
 struct bpf_prog * __must_check bpf_prog_inc(struct bpf_prog *prog);
+struct bpf_prog * __must_check bpf_prog_inc_not_zero(struct bpf_prog *prog);
 void bpf_prog_put(struct bpf_prog *prog);
 int __bpf_prog_charge(struct user_struct *user, u32 pages);
 void __bpf_prog_uncharge(struct user_struct *user, u32 pages);
@@ -344,6 +345,12 @@ static inline struct bpf_prog * __must_check bpf_prog_inc(struct bpf_prog *prog)
 	return ERR_PTR(-EOPNOTSUPP);
 }
 
+static inline struct bpf_prog *__must_check
+bpf_prog_inc_not_zero(struct bpf_prog *prog)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
 static inline int __bpf_prog_charge(struct user_struct *user, u32 pages)
 {
 	return 0;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index fbe09a0..17e29f5 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -911,7 +911,7 @@ struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog)
 EXPORT_SYMBOL_GPL(bpf_prog_inc);
 
 /* prog_idr_lock should have been held */
-static struct bpf_prog *bpf_prog_inc_not_zero(struct bpf_prog *prog)
+struct bpf_prog *bpf_prog_inc_not_zero(struct bpf_prog *prog)
 {
 	int refold;
 
@@ -927,6 +927,7 @@ static struct bpf_prog *bpf_prog_inc_not_zero(struct bpf_prog *prog)
 
 	return prog;
 }
+EXPORT_SYMBOL_GPL(bpf_prog_inc_not_zero);
 
 static struct bpf_prog *__bpf_prog_get(u32 ufd, enum bpf_prog_type *type)
 {


* [net-next PATCH 06/10] bpf: sockmap with sk redirect support
  2017-08-16  5:30 [net-next PATCH 00/10] BPF: sockmap and sk redirect support John Fastabend
                   ` (4 preceding siblings ...)
  2017-08-16  5:32 ` [net-next PATCH 05/10] bpf: export bpf_prog_inc_not_zero John Fastabend
@ 2017-08-16  5:32 ` John Fastabend
  2017-08-17  5:40   ` Alexei Starovoitov
  2017-08-16  5:33 ` [net-next PATCH 07/10] bpf: add access to sock fields and pkt data from sk_skb programs John Fastabend
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 31+ messages in thread
From: John Fastabend @ 2017-08-16  5:32 UTC (permalink / raw)
  To: davem, daniel, ast; +Cc: tgraf, netdev, john.fastabend, tom

Recently we added a new map type called devmap, used to forward XDP
packets between ports (6093ec2dc313). This patch introduces a similar
notion for sockets.

A sockmap allows users to add participating sockets to a map. When
sockets are added to the map, enough context is stored with the map
entry to use the entry with a new helper
  bpf_sk_redirect_map(map, key, flags)

This helper (analogous to bpf_redirect_map in XDP) is given the map and
a key into the map. When called from a sockmap program, discussed below,
the skb will be sent on the selected socket using skb_send_sock().

With the above in place, we need a site to call the helper from that
will then implement the send logic. The initial site implemented in this
series is the recv_sock hook. For this to work we implemented a map
attach command to add attributes to a map. In sockmap we attach two
programs: a parse program and a verdict program. The parse program uses
strparser to build messages and pass them to the verdict program, and it
follows the normal strparser semantics. The verdict program is of type
SK_SKB.

The verdict program returns a verdict of SK_DROP or, for now,
SK_REDIRECT. Additional actions may be added later. When SK_REDIRECT is
returned, as expected when the BPF program uses bpf_sk_redirect_map(),
the sockmap logic will consult per-cpu variables set by the helper
routine and pull the sock entry out of the sockmap. This pattern follows
the existing redirect logic in cls and xdp programs.
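
A toy model of that per-cpu handshake: the helper stashes its (map, key)
arguments in a slot and returns SK_REDIRECT; the caller then consumes
the slot to find the destination. All names are invented; the kernel
uses real per-cpu variables with preemption disabled across the helper
call and the lookup:

```c
#include <assert.h>
#include <stddef.h>

struct toy_redirect_info {
	void *map;
	unsigned int key;
};

/* Stands in for a per-cpu variable; zero-initialized global. */
struct toy_redirect_info toy_ri;

/* Helper analogue: stash arguments, signal redirect to the caller. */
int toy_sk_redirect_map(void *map, unsigned int key)
{
	toy_ri.map = map;
	toy_ri.key = key;
	return 2;	/* SK_REDIRECT */
}

unsigned int toy_redirect_key(void)
{
	return toy_ri.key;
}

/* do_sk_redirect_map() analogue: one-shot consume of the stashed map. */
void *toy_consume_redirect(void)
{
	void *map = toy_ri.map;

	toy_ri.map = NULL;	/* cleared after use */
	return map;
}
```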

This gives the flow,

 recv_sock -> str_parser (parse_prog) -> verdict_prog -> skb_send_sock
                                                     \
                                                      -> kfree_skb

As an example use case, a message-based load balancer may use specific
logic in the verdict program to select the sock to send on.
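
One way such a verdict program might pick a key before calling
bpf_sk_redirect_map(map, key, 0): hash a connection identifier into one
of N sockmap slots. The hash and the choice of fields below are invented
for illustration only:

```c
#include <assert.h>
#include <stdint.h>

#define TOY_BACKENDS 4u

/* Deterministically map a flow to a sockmap slot in [0, TOY_BACKENDS).
 * Fibonacci hashing for brevity; a real program might use jhash.
 */
uint32_t toy_pick_backend(uint32_t src_ip, uint16_t src_port)
{
	uint32_t h = (src_ip ^ src_port) * 2654435761u;

	return h % TOY_BACKENDS;
}
```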

Sample programs are provided in later patches that hopefully illustrate
the user interfaces. Selftests are also in follow-on patches.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 include/linux/bpf.h       |    7 
 include/linux/bpf_types.h |    1 
 include/linux/filter.h    |    2 
 include/uapi/linux/bpf.h  |   33 ++
 kernel/bpf/Makefile       |    2 
 kernel/bpf/sockmap.c      |  792 +++++++++++++++++++++++++++++++++++++++++++++
 kernel/bpf/syscall.c      |   51 +++
 kernel/bpf/verifier.c     |   14 +
 net/core/filter.c         |   43 ++
 9 files changed, 940 insertions(+), 5 deletions(-)
 create mode 100644 kernel/bpf/sockmap.c

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d6e1de8..a4145e9 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -16,6 +16,7 @@
 #include <linux/rbtree_latch.h>
 
 struct perf_event;
+struct bpf_prog;
 struct bpf_map;
 
 /* map is generic key/value storage optionally accesible by eBPF programs */
@@ -37,6 +38,8 @@ struct bpf_map_ops {
 	void (*map_fd_put_ptr)(void *ptr);
 	u32 (*map_gen_lookup)(struct bpf_map *map, struct bpf_insn *insn_buf);
 	u32 (*map_fd_sys_lookup_elem)(void *ptr);
+	int (*map_attach)(struct bpf_map *map,
+			  struct bpf_prog *p1, struct bpf_prog *p2);
 };
 
 struct bpf_map {
@@ -138,8 +141,6 @@ enum bpf_reg_type {
 	PTR_TO_PACKET_END,	 /* skb->data + headlen */
 };
 
-struct bpf_prog;
-
 /* The information passed from prog-specific *_is_valid_access
  * back to the verifier.
  */
@@ -312,6 +313,7 @@ static inline void bpf_long_memcpy(void *dst, const void *src, u32 size)
 
 /* Map specifics */
 struct net_device  *__dev_map_lookup_elem(struct bpf_map *map, u32 key);
+struct sock  *__sock_map_lookup_elem(struct bpf_map *map, u32 key);
 void __dev_map_insert_ctx(struct bpf_map *map, u32 index);
 void __dev_map_flush(struct bpf_map *map);
 
@@ -391,6 +393,7 @@ static inline void __dev_map_flush(struct bpf_map *map)
 extern const struct bpf_func_proto bpf_skb_vlan_push_proto;
 extern const struct bpf_func_proto bpf_skb_vlan_pop_proto;
 extern const struct bpf_func_proto bpf_get_stackid_proto;
+extern const struct bpf_func_proto bpf_sock_map_update_proto;
 
 /* Shared helpers among cBPF and eBPF. */
 void bpf_user_rnd_init_once(void);
diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index 4b72db3..fa80507 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -38,4 +38,5 @@
 BPF_MAP_TYPE(BPF_MAP_TYPE_HASH_OF_MAPS, htab_of_maps_map_ops)
 #ifdef CONFIG_NET
 BPF_MAP_TYPE(BPF_MAP_TYPE_DEVMAP, dev_map_ops)
+BPF_MAP_TYPE(BPF_MAP_TYPE_SOCKMAP, sock_map_ops)
 #endif
diff --git a/include/linux/filter.h b/include/linux/filter.h
index d19ed3c..7015116 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -727,6 +727,8 @@ int xdp_do_redirect(struct net_device *dev,
 void bpf_warn_invalid_xdp_action(u32 act);
 void bpf_warn_invalid_xdp_redirect(u32 ifindex);
 
+struct sock *do_sk_redirect_map(void);
+
 #ifdef CONFIG_BPF_JIT
 extern int bpf_jit_enable;
 extern int bpf_jit_harden;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 2e796e3..7f77476 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -110,6 +110,7 @@ enum bpf_map_type {
 	BPF_MAP_TYPE_ARRAY_OF_MAPS,
 	BPF_MAP_TYPE_HASH_OF_MAPS,
 	BPF_MAP_TYPE_DEVMAP,
+	BPF_MAP_TYPE_SOCKMAP,
 };
 
 enum bpf_prog_type {
@@ -135,11 +136,15 @@ enum bpf_attach_type {
 	BPF_CGROUP_INET_EGRESS,
 	BPF_CGROUP_INET_SOCK_CREATE,
 	BPF_CGROUP_SOCK_OPS,
+	BPF_CGROUP_SMAP_INGRESS,
 	__MAX_BPF_ATTACH_TYPE
 };
 
 #define MAX_BPF_ATTACH_TYPE __MAX_BPF_ATTACH_TYPE
 
+/* If BPF_SOCKMAP_STRPARSER is used sockmap will use strparser on receive */
+#define BPF_SOCKMAP_STRPARSER	(1U << 0)
+
 /* If BPF_F_ALLOW_OVERRIDE flag is used in BPF_PROG_ATTACH command
  * to the given target_fd cgroup the descendent cgroup will be able to
  * override effective bpf program that was inherited from this cgroup
@@ -211,6 +216,7 @@ enum bpf_attach_type {
 		__u32		attach_bpf_fd;	/* eBPF program to attach */
 		__u32		attach_type;
 		__u32		attach_flags;
+		__u32		attach_bpf_fd2;
 	};
 
 	struct { /* anonymous struct used by BPF_PROG_TEST_RUN command */
@@ -557,6 +563,23 @@ enum bpf_attach_type {
  *     @mode: operation mode (enum bpf_adj_room_mode)
  *     @flags: reserved for future use
  *     Return: 0 on success or negative error code
+ *
+ * int bpf_sk_redirect_map(map, key, flags)
+ *     Redirect skb to a sock in map using key as a lookup key for the
+ *     sock in map.
+ *     @map: pointer to sockmap
+ *     @key: key to lookup sock in map
+ *     @flags: reserved for future use
+ *     Return: SK_REDIRECT
+ *
+ * int bpf_sock_map_update(skops, map, key, flags, map_flags)
+ *	@skops: pointer to bpf_sock_ops
+ *	@map: pointer to sockmap to update
+ *	@key: key to insert/update sock in map
+ *	@flags: same flags as map update elem
+ *	@map_flags: sock map specific flags
+ *	   bit 1: Enable strparser
+ *	   other bits: reserved
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -610,7 +633,9 @@ enum bpf_attach_type {
 	FN(set_hash),			\
 	FN(setsockopt),			\
 	FN(skb_adjust_room),		\
-	FN(redirect_map),
+	FN(redirect_map),		\
+	FN(sk_redirect_map),		\
+	FN(sock_map_update),		\
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
  * function eBPF program intends to call
@@ -747,6 +772,12 @@ struct xdp_md {
 	__u32 data_end;
 };
 
+enum sk_action {
+	SK_ABORTED = 0,
+	SK_DROP,
+	SK_REDIRECT,
+};
+
 #define BPF_TAG_SIZE	8
 
 struct bpf_prog_info {
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 2f0bcda..aa24287 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -3,7 +3,7 @@ obj-y := core.o
 obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o
 obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o
 ifeq ($(CONFIG_NET),y)
-obj-$(CONFIG_BPF_SYSCALL) += devmap.o
+obj-$(CONFIG_BPF_SYSCALL) += devmap.o sockmap.o
 endif
 ifeq ($(CONFIG_PERF_EVENTS),y)
 obj-$(CONFIG_BPF_SYSCALL) += stackmap.o
diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
new file mode 100644
index 0000000..792f0ad
--- /dev/null
+++ b/kernel/bpf/sockmap.c
@@ -0,0 +1,792 @@
+/* Copyright (c) 2017 Covalent IO, Inc. http://covalent.io
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+
+/* A BPF sock_map is used to store sock objects. This is primarily used
+ * for doing socket redirect with BPF helper routines.
+ *
+ * A sock map may have two BPF programs attached to it, a program used
+ * to parse packets and a program to provide a verdict and redirect
+ * decision on the packet. If no BPF parse program is provided it is
+ * assumed that every skb is a "message" (skb->len). Otherwise the
+ * parse program is attached to strparser and used to build messages
+ * that may span multiple skbs. The verdict program will either select
+ * a socket to send/receive the skb on or provide the drop code indicating
+ * the skb should be dropped. More actions may be added later as needed.
+ * The default program will drop packets.
+ *
+ * For reference, this program is similar to devmap used in the XDP context;
+ * reviewing these together may be useful. For an example please review
+ * ./samples/bpf/sockmap/.
+ */
+#include <linux/bpf.h>
+#include <net/sock.h>
+#include <linux/filter.h>
+#include <linux/errno.h>
+#include <linux/file.h>
+#include <linux/kernel.h>
+#include <linux/net.h>
+#include <linux/skbuff.h>
+#include <linux/workqueue.h>
+#include <linux/list.h>
+#include <net/strparser.h>
+
+struct bpf_stab {
+	struct bpf_map map;
+	struct sock **sock_map;
+	struct bpf_prog *bpf_parse;
+	struct bpf_prog *bpf_verdict;
+	refcount_t refcnt;
+};
+
+enum smap_psock_state {
+	SMAP_TX_RUNNING,
+};
+
+struct smap_psock {
+	struct rcu_head	rcu;
+
+	/* datapath variables */
+	struct sk_buff_head rxqueue;
+	bool strp_enabled;
+
+	/* datapath error path cache across tx work invocations */
+	int save_rem;
+	int save_off;
+	struct sk_buff *save_skb;
+
+	struct strparser strp;
+	struct bpf_prog *bpf_parse;
+	struct bpf_prog *bpf_verdict;
+	struct bpf_stab *stab;
+
+	/* Back reference used when sock callbacks trigger sockmap operations */
+	int key;
+	struct sock *sock;
+	unsigned long state;
+
+	struct work_struct tx_work;
+	struct work_struct gc_work;
+
+	void (*save_data_ready)(struct sock *sk);
+	void (*save_write_space)(struct sock *sk);
+	void (*save_state_change)(struct sock *sk);
+};
+
+static inline struct smap_psock *smap_psock_sk(const struct sock *sk)
+{
+	return (struct smap_psock *)rcu_dereference_sk_user_data(sk);
+}
+
+static int smap_verdict_func(struct smap_psock *psock, struct sk_buff *skb)
+{
+	struct bpf_prog *prog = READ_ONCE(psock->bpf_verdict);
+	int rc;
+
+	if (unlikely(!prog))
+		return SK_DROP;
+
+	skb_orphan(skb);
+	skb->sk = psock->sock;
+	bpf_compute_data_end(skb);
+	rc = (*prog->bpf_func)(skb, prog->insnsi);
+	skb->sk = NULL;
+
+	return rc;
+}
+
+static void smap_do_verdict(struct smap_psock *psock, struct sk_buff *skb)
+{
+	struct sock *sock;
+	int rc;
+
+	/* Because we use per cpu values to feed input from sock redirect
+	 * in BPF program to do_sk_redirect_map() call we need to ensure we
+	 * are not preempted. RCU read lock is not sufficient in this case
+	 * with CONFIG_PREEMPT_RCU enabled so we must be explicit here.
+	 */
+	preempt_disable();
+	rc = smap_verdict_func(psock, skb);
+	switch (rc) {
+	case SK_REDIRECT:
+		sock = do_sk_redirect_map();
+		preempt_enable();
+		if (likely(sock)) {
+			struct smap_psock *peer = smap_psock_sk(sock);
+
+			if (likely(peer &&
+				   test_bit(SMAP_TX_RUNNING, &peer->state) &&
+				   sk_stream_memory_free(peer->sock))) {
+				peer->sock->sk_wmem_queued += skb->truesize;
+				sk_mem_charge(peer->sock, skb->truesize);
+				skb_queue_tail(&peer->rxqueue, skb);
+				schedule_work(&peer->tx_work);
+				break;
+			}
+		}
+	/* Fall through and free skb otherwise */
+	case SK_DROP:
+	default:
+		preempt_enable();
+		kfree_skb(skb);
+	}
+}
+
+static void smap_report_sk_error(struct smap_psock *psock, int err)
+{
+	struct sock *sk = psock->sock;
+
+	sk->sk_err = err;
+	sk->sk_error_report(sk);
+}
+
+static void smap_release_sock(struct sock *sock);
+
+/* Called with lock_sock(sk) held */
+static void smap_state_change(struct sock *sk)
+{
+	struct smap_psock *psock;
+	struct sock *osk;
+
+	rcu_read_lock();
+
+	/* Allowing transitions into the established and syn_recv states allows
+	 * for early binding of sockets to a smap object before the connection
+	 * is established.
+	 */
+	switch (sk->sk_state) {
+	case TCP_SYN_RECV:
+	case TCP_ESTABLISHED:
+		break;
+	case TCP_CLOSE_WAIT:
+	case TCP_CLOSING:
+	case TCP_LAST_ACK:
+	case TCP_FIN_WAIT1:
+	case TCP_FIN_WAIT2:
+	case TCP_LISTEN:
+		break;
+	case TCP_CLOSE:
+		/* Only release if the map entry is in fact the sock in
+		 * question. There is a case where the operator deletes
+		 * the sock from the map, but the TCP sock is closed before
+		 * the psock is detached. Use cmpxchg to verify correct
+		 * sock is removed.
+		 */
+		psock = smap_psock_sk(sk);
+		if (unlikely(!psock))
+			break;
+		osk = cmpxchg(&psock->stab->sock_map[psock->key], sk, NULL);
+		if (osk == sk)
+			smap_release_sock(sk);
+		break;
+	default:
+		smap_report_sk_error(psock, EPIPE);
+		break;
+	}
+	rcu_read_unlock();
+}
+
+static void smap_read_sock_strparser(struct strparser *strp,
+				     struct sk_buff *skb)
+{
+	struct smap_psock *psock;
+
+	rcu_read_lock();
+	psock = container_of(strp, struct smap_psock, strp);
+	smap_do_verdict(psock, skb);
+	rcu_read_unlock();
+}
+
+/* Called with lock held on socket */
+static void smap_data_ready(struct sock *sk)
+{
+	struct smap_psock *psock;
+
+	write_lock_bh(&sk->sk_callback_lock);
+	psock = smap_psock_sk(sk);
+	if (likely(psock))
+		strp_data_ready(&psock->strp);
+	write_unlock_bh(&sk->sk_callback_lock);
+}
+
+static void smap_tx_work(struct work_struct *w)
+{
+	struct smap_psock *psock;
+	struct sk_buff *skb;
+	int rem, off, n;
+
+	psock = container_of(w, struct smap_psock, tx_work);
+
+	/* lock sock to avoid losing sk_socket at some point during the loop */
+	lock_sock(psock->sock);
+	if (psock->save_skb) {
+		skb = psock->save_skb;
+		rem = psock->save_rem;
+		off = psock->save_off;
+		psock->save_skb = NULL;
+		goto start;
+	}
+
+	while ((skb = skb_dequeue(&psock->rxqueue))) {
+		rem = skb->len;
+		off = 0;
+start:
+		do {
+			if (likely(psock->sock->sk_socket))
+				n = skb_send_sock_locked(psock->sock,
+							 skb, off, rem);
+			else
+				n = -EINVAL;
+			if (n <= 0) {
+				if (n == -EAGAIN) {
+					/* Retry when space is available */
+					psock->save_skb = skb;
+					psock->save_rem = rem;
+					psock->save_off = off;
+					goto out;
+				}
+				/* Hard errors break pipe and stop xmit */
+				smap_report_sk_error(psock, n ? -n : EPIPE);
+				clear_bit(SMAP_TX_RUNNING, &psock->state);
+				sk_mem_uncharge(psock->sock, skb->truesize);
+				psock->sock->sk_wmem_queued -= skb->truesize;
+				kfree_skb(skb);
+				goto out;
+			}
+			rem -= n;
+			off += n;
+		} while (rem);
+		sk_mem_uncharge(psock->sock, skb->truesize);
+		psock->sock->sk_wmem_queued -= skb->truesize;
+		kfree_skb(skb);
+	}
+out:
+	release_sock(psock->sock);
+}
+
+static void smap_write_space(struct sock *sk)
+{
+	struct smap_psock *psock;
+
+	rcu_read_lock();
+	psock = smap_psock_sk(sk);
+	if (likely(psock && test_bit(SMAP_TX_RUNNING, &psock->state)))
+		schedule_work(&psock->tx_work);
+	rcu_read_unlock();
+}
+
+static void smap_stop_sock(struct smap_psock *psock, struct sock *sk)
+{
+	write_lock_bh(&sk->sk_callback_lock);
+	if (!psock->strp_enabled)
+		goto out;
+	sk->sk_data_ready = psock->save_data_ready;
+	sk->sk_write_space = psock->save_write_space;
+	sk->sk_state_change = psock->save_state_change;
+	psock->save_data_ready = NULL;
+	psock->save_write_space = NULL;
+	psock->save_state_change = NULL;
+	strp_stop(&psock->strp);
+	psock->strp_enabled = false;
+out:
+	write_unlock_bh(&sk->sk_callback_lock);
+}
+
+static void smap_destroy_psock(struct rcu_head *rcu)
+{
+	struct smap_psock *psock = container_of(rcu,
+						  struct smap_psock, rcu);
+
+	/* Now that a grace period has passed there is no longer
+	 * any reference to this sock in the sockmap so we can
+	 * destroy the psock, strparser, and bpf programs. But,
+	 * because we use workqueue sync operations we cannot
+	 * do it in rcu context.
+	 */
+	schedule_work(&psock->gc_work);
+}
+
+static void smap_release_sock(struct sock *sock)
+{
+	struct smap_psock *psock = smap_psock_sk(sock);
+
+	smap_stop_sock(psock, sock);
+	clear_bit(SMAP_TX_RUNNING, &psock->state);
+	rcu_assign_sk_user_data(sock, NULL);
+	call_rcu_sched(&psock->rcu, smap_destroy_psock);
+}
+
+static int smap_parse_func_strparser(struct strparser *strp,
+				       struct sk_buff *skb)
+{
+	struct smap_psock *psock;
+	struct bpf_prog *prog;
+	int rc;
+
+	rcu_read_lock();
+	psock = container_of(strp, struct smap_psock, strp);
+	prog = READ_ONCE(psock->bpf_parse);
+
+	if (unlikely(!prog)) {
+		rcu_read_unlock();
+		return skb->len;
+	}
+
+	/* Attach the socket for the bpf program to use if needed. We
+	 * can do this because strparser clones the skb before handing
+	 * it to an upper layer, meaning skb_orphan has been called. We
+	 * NULL sk on the way out to ensure we don't trigger a BUG_ON
+	 * in skb/sk operations later, and because we are not charging
+	 * the memory of this skb to any socket yet.
+	 */
+	skb->sk = psock->sock;
+	bpf_compute_data_end(skb);
+	rc = (*prog->bpf_func)(skb, prog->insnsi);
+	skb->sk = NULL;
+	rcu_read_unlock();
+	return rc;
+}
+
+static int smap_read_sock_done(struct strparser *strp, int err)
+{
+	return err;
+}
+
+static int smap_init_sock(struct smap_psock *psock,
+			  struct sock *sk)
+{
+	struct strp_callbacks cb;
+
+	memset(&cb, 0, sizeof(cb));
+	cb.rcv_msg = smap_read_sock_strparser;
+	cb.parse_msg = smap_parse_func_strparser;
+	cb.read_sock_done = smap_read_sock_done;
+	return strp_init(&psock->strp, sk, &cb);
+}
+
+static void smap_init_progs(struct smap_psock *psock,
+			    struct bpf_stab *stab,
+			    struct bpf_prog *verdict,
+			    struct bpf_prog *parse)
+{
+	struct bpf_prog *orig_parse, *orig_verdict;
+
+	orig_parse = xchg(&psock->bpf_parse, parse);
+	orig_verdict = xchg(&psock->bpf_verdict, verdict);
+
+	if (orig_verdict)
+		bpf_prog_put(orig_verdict);
+	if (orig_parse)
+		bpf_prog_put(orig_parse);
+}
+
+static void smap_start_sock(struct smap_psock *psock, struct sock *sk)
+{
+	if (sk->sk_data_ready == smap_data_ready)
+		return;
+	psock->save_data_ready = sk->sk_data_ready;
+	psock->save_write_space = sk->sk_write_space;
+	psock->save_state_change = sk->sk_state_change;
+	sk->sk_data_ready = smap_data_ready;
+	sk->sk_write_space = smap_write_space;
+	sk->sk_state_change = smap_state_change;
+	psock->strp_enabled = true;
+}
+
+static void sock_map_remove_complete(struct bpf_stab *stab)
+{
+	bpf_map_area_free(stab->sock_map);
+	kfree(stab);
+}
+
+static void smap_gc_work(struct work_struct *w)
+{
+	struct smap_psock *psock;
+
+	psock = container_of(w, struct smap_psock, gc_work);
+
+	/* no callback lock needed because we already detached sockmap ops */
+	if (psock->strp_enabled)
+		strp_done(&psock->strp);
+
+	cancel_work_sync(&psock->tx_work);
+	__skb_queue_purge(&psock->rxqueue);
+
+	/* At this point all strparser and xmit work must be complete */
+	if (psock->bpf_parse)
+		bpf_prog_put(psock->bpf_parse);
+	if (psock->bpf_verdict)
+		bpf_prog_put(psock->bpf_verdict);
+
+	if (refcount_dec_and_test(&psock->stab->refcnt))
+		sock_map_remove_complete(psock->stab);
+
+	sock_put(psock->sock);
+	kfree(psock);
+}
+
+static struct smap_psock *smap_init_psock(struct sock *sock,
+					  struct bpf_stab *stab)
+{
+	struct smap_psock *psock;
+
+	psock = kzalloc(sizeof(struct smap_psock), GFP_ATOMIC | __GFP_NOWARN);
+	if (!psock)
+		return ERR_PTR(-ENOMEM);
+
+	psock->sock = sock;
+	skb_queue_head_init(&psock->rxqueue);
+	INIT_WORK(&psock->tx_work, smap_tx_work);
+	INIT_WORK(&psock->gc_work, smap_gc_work);
+
+	rcu_assign_sk_user_data(sock, psock);
+	sock_hold(sock);
+	return psock;
+}
+
+static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
+{
+	struct bpf_stab *stab;
+	int err = -EINVAL;
+	u64 cost;
+
+	/* check sanity of attributes */
+	if (attr->max_entries == 0 || attr->key_size != 4 ||
+	    attr->value_size != 4 || attr->map_flags)
+		return ERR_PTR(-EINVAL);
+
+	if (attr->value_size > KMALLOC_MAX_SIZE)
+		return ERR_PTR(-E2BIG);
+
+	stab = kzalloc(sizeof(*stab), GFP_USER);
+	if (!stab)
+		return ERR_PTR(-ENOMEM);
+
+	/* mandatory map attributes */
+	stab->map.map_type = attr->map_type;
+	stab->map.key_size = attr->key_size;
+	stab->map.value_size = attr->value_size;
+	stab->map.max_entries = attr->max_entries;
+	stab->map.map_flags = attr->map_flags;
+
+	/* make sure page count doesn't overflow */
+	cost = (u64) stab->map.max_entries * sizeof(struct sock *);
+	if (cost >= U32_MAX - PAGE_SIZE)
+		goto free_stab;
+
+	stab->map.pages = round_up(cost, PAGE_SIZE) >> PAGE_SHIFT;
+
+	/* if map size is larger than memlock limit, reject it early */
+	err = bpf_map_precharge_memlock(stab->map.pages);
+	if (err)
+		goto free_stab;
+
+	stab->sock_map = bpf_map_area_alloc(stab->map.max_entries *
+					    sizeof(struct sock *));
+	if (!stab->sock_map)
+		goto free_stab;
+
+	refcount_set(&stab->refcnt, 1);
+	return &stab->map;
+free_stab:
+	kfree(stab);
+	return ERR_PTR(err);
+}
+
+static void sock_map_free(struct bpf_map *map)
+{
+	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
+	int i;
+
+	synchronize_rcu();
+
+	/* At this point no update, lookup or delete operations can happen.
+	 * However, be aware we can still get socket state event updates,
+	 * and data ready callbacks that reference the psock from
+	 * sk_user_data. Also psock worker threads are still in-flight. So
+	 * smap_release_sock will only free the psock after cancel_sync on
+	 * the worker threads and a grace period expires to ensure the
+	 * psock is really safe to remove.
+	 */
+	rcu_read_lock();
+	for (i = 0; i < stab->map.max_entries; i++) {
+		struct sock *sock;
+
+		sock = xchg(&stab->sock_map[i], NULL);
+		if (!sock)
+			continue;
+
+		smap_release_sock(sock);
+	}
+	rcu_read_unlock();
+
+	if (stab->bpf_verdict)
+		bpf_prog_put(stab->bpf_verdict);
+	if (stab->bpf_parse)
+		bpf_prog_put(stab->bpf_parse);
+
+	if (refcount_dec_and_test(&stab->refcnt))
+		sock_map_remove_complete(stab);
+}
+
+static int sock_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
+{
+	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
+	u32 i = key ? *(u32 *)key : U32_MAX;
+	u32 *next = (u32 *)next_key;
+
+	if (i >= stab->map.max_entries) {
+		*next = 0;
+		return 0;
+	}
+
+	if (i == stab->map.max_entries - 1)
+		return -ENOENT;
+
+	*next = i + 1;
+	return 0;
+}
+
+struct sock *__sock_map_lookup_elem(struct bpf_map *map, u32 key)
+{
+	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
+
+	if (key >= map->max_entries)
+		return NULL;
+
+	return READ_ONCE(stab->sock_map[key]);
+}
+
+static int sock_map_delete_elem(struct bpf_map *map, void *key)
+{
+	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
+	int k = *(u32 *)key;
+	struct sock *sock;
+
+	if (k >= map->max_entries)
+		return -EINVAL;
+
+	sock = xchg(&stab->sock_map[k], NULL);
+	if (!sock)
+		return -EINVAL;
+
+	smap_release_sock(sock);
+	return 0;
+}
+
+/* Locking notes: Concurrent updates, deletes, and lookups are allowed and are
+ * done inside rcu critical sections. This ensures on updates that the psock
+ * will not be released via smap_release_sock() until concurrent updates/deletes
+ * complete. All operations operate on sock_map using cmpxchg and xchg
+ * operations to ensure we do not get stale references. Any reads into the
+ * map must be done with READ_ONCE() because of this.
+ *
+ * A psock is destroyed via call_rcu and after any worker threads are cancelled
+ * and synced, so we are certain all references from the update/lookup/delete
+ * operations as well as references in the data path are no longer in use.
+ *
+ * A psock object holds a refcnt on the sockmap it is attached to and this is
+ * not decremented until after an RCU grace period and garbage collection occur.
+ * This ensures the map is not freed until psocks linked to it are removed. The
+ * map link is used when independent sock events trigger map deletion.
+ *
+ * Psocks may only participate in one sockmap at a time. Users that try to
+ * join a single sock to multiple maps will get an error.
+ *
+ * Last, but not least, it is possible the socket is closed while running
+ * an update on an existing psock. This will release the psock, but again
+ * not until the update has completed due to rcu grace period rules.
+ */
+static int sock_map_ctx_update_elem(struct bpf_sock_ops_kern *skops,
+				    struct bpf_map *map,
+				    void *key, u64 flags, u64 map_flags)
+{
+	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
+	struct bpf_prog *verdict, *parse;
+	struct smap_psock *psock = NULL;
+	struct sock *old_sock, *sock;
+	u32 i = *(u32 *)key;
+	bool update = false;
+	int err = 0;
+
+	if (unlikely(flags > BPF_EXIST))
+		return -EINVAL;
+
+	if (unlikely(i >= stab->map.max_entries))
+		return -E2BIG;
+
+	if (unlikely(map_flags > BPF_SOCKMAP_STRPARSER))
+		return -EINVAL;
+
+	verdict = parse = NULL;
+	sock = READ_ONCE(stab->sock_map[i]);
+
+	if (flags == BPF_EXIST || flags == BPF_ANY) {
+		if (!sock && flags == BPF_EXIST) {
+			return -ENOENT;
+		} else if (sock && sock != skops->sk) {
+			return -EINVAL;
+		} else if (sock) {
+			psock = smap_psock_sk(sock);
+			if (unlikely(!psock))
+				return -EBUSY;
+			update = true;
+		}
+	} else if (sock && flags == BPF_NOEXIST) {
+		return -EEXIST;
+	}
+
+	/* reserve BPF programs early so we can abort easily on failures */
+	if (map_flags & BPF_SOCKMAP_STRPARSER) {
+		verdict = READ_ONCE(stab->bpf_verdict);
+		parse = READ_ONCE(stab->bpf_parse);
+
+		if (!verdict || !parse)
+			return -ENOENT;
+
+		/* bpf prog refcnt may be zero if a concurrent attach operation
+		 * removes the program after the above READ_ONCE() but before
+		 * we increment the refcnt. If this is the case abort with an
+		 * error.
+		 */
+		verdict = bpf_prog_inc_not_zero(stab->bpf_verdict);
+		if (IS_ERR(verdict))
+			return PTR_ERR(verdict);
+
+		parse = bpf_prog_inc_not_zero(stab->bpf_parse);
+		if (IS_ERR(parse)) {
+			bpf_prog_put(verdict);
+			return PTR_ERR(parse);
+		}
+	}
+
+	if (!psock) {
+		sock = skops->sk;
+		if (rcu_dereference_sk_user_data(sock)) {
+			/* drop program refs taken above before aborting */
+			if (verdict)
+				bpf_prog_put(verdict);
+			if (parse)
+				bpf_prog_put(parse);
+			return -EEXIST;
+		}
+		psock = smap_init_psock(sock, stab);
+		if (IS_ERR(psock)) {
+			if (verdict)
+				bpf_prog_put(verdict);
+			if (parse)
+				bpf_prog_put(parse);
+			return PTR_ERR(psock);
+		}
+		psock->key = i;
+		psock->stab = stab;
+		refcount_inc(&stab->refcnt);
+		set_bit(SMAP_TX_RUNNING, &psock->state);
+	}
+
+	if (map_flags & BPF_SOCKMAP_STRPARSER) {
+		write_lock_bh(&sock->sk_callback_lock);
+		if (psock->strp_enabled)
+			goto start_done;
+		err = smap_init_sock(psock, sock);
+		if (err)
+			goto out;
+		smap_init_progs(psock, stab, verdict, parse);
+		smap_start_sock(psock, sock);
+start_done:
+		write_unlock_bh(&sock->sk_callback_lock);
+	} else if (update) {
+		smap_stop_sock(psock, sock);
+	}
+
+	if (!update) {
+		old_sock = xchg(&stab->sock_map[i], skops->sk);
+		if (old_sock)
+			smap_release_sock(old_sock);
+	}
+
+	return 0;
+out:
+	write_unlock_bh(&sock->sk_callback_lock);
+	if (!update)
+		smap_release_sock(sock);
+	return err;
+}
+
+static int sock_map_attach_prog(struct bpf_map *map,
+				struct bpf_prog *parse,
+				struct bpf_prog *verdict)
+{
+	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
+	struct bpf_prog *_parse, *_verdict;
+
+	_parse = xchg(&stab->bpf_parse, parse);
+	_verdict = xchg(&stab->bpf_verdict, verdict);
+
+	if (_parse)
+		bpf_prog_put(_parse);
+	if (_verdict)
+		bpf_prog_put(_verdict);
+
+	return 0;
+}
+
+static void *sock_map_lookup(struct bpf_map *map, void *key)
+{
+	return NULL;
+}
+
+static int sock_map_update_elem(struct bpf_map *map,
+				void *key, void *value, u64 flags)
+{
+	struct bpf_sock_ops_kern skops;
+	u32 fd = *(u32 *)value;
+	struct socket *socket;
+	int err;
+
+	socket = sockfd_lookup(fd, &err);
+	if (!socket)
+		return err;
+
+	skops.sk = socket->sk;
+	if (!skops.sk) {
+		fput(socket->file);
+		return -EINVAL;
+	}
+
+	err = sock_map_ctx_update_elem(&skops, map, key,
+				       flags, BPF_SOCKMAP_STRPARSER);
+	fput(socket->file);
+	return err;
+}
+
+const struct bpf_map_ops sock_map_ops = {
+	.map_alloc = sock_map_alloc,
+	.map_free = sock_map_free,
+	.map_lookup_elem = sock_map_lookup,
+	.map_get_next_key = sock_map_get_next_key,
+	.map_update_elem = sock_map_update_elem,
+	.map_delete_elem = sock_map_delete_elem,
+	.map_attach = sock_map_attach_prog,
+};
+
+BPF_CALL_5(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,
+	   struct bpf_map *, map, void *, key, u64, flags, u64, map_flags)
+{
+	WARN_ON_ONCE(!rcu_read_lock_held());
+	return sock_map_ctx_update_elem(bpf_sock, map, key, flags, map_flags);
+}
+
+const struct bpf_func_proto bpf_sock_map_update_proto = {
+	.func		= bpf_sock_map_update,
+	.gpl_only	= false,
+	.pkt_access	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_CONST_MAP_PTR,
+	.arg3_type	= ARG_PTR_TO_MAP_KEY,
+	.arg4_type	= ARG_ANYTHING,
+	.arg5_type	= ARG_ANYTHING,
+};
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 17e29f5..d2f2bdf 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1087,7 +1087,50 @@ static int bpf_obj_get(const union bpf_attr *attr)
 
 #ifdef CONFIG_CGROUP_BPF
 
-#define BPF_PROG_ATTACH_LAST_FIELD attach_flags
+#define BPF_PROG_ATTACH_LAST_FIELD attach_bpf_fd2
+
+static int sockmap_get_from_fd(const union bpf_attr *attr, int ptype)
+{
+	struct bpf_prog *prog1, *prog2;
+	int ufd = attr->target_fd;
+	struct bpf_map *map;
+	struct fd f;
+	int err;
+
+	f = fdget(ufd);
+	map = __bpf_map_get(f);
+	if (IS_ERR(map))
+		return PTR_ERR(map);
+
+	if (!map->ops->map_attach) {
+		fdput(f);
+		return -EOPNOTSUPP;
+	}
+
+	prog1 = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
+	if (IS_ERR(prog1)) {
+		fdput(f);
+		return PTR_ERR(prog1);
+	}
+
+	prog2 = bpf_prog_get_type(attr->attach_bpf_fd2, ptype);
+	if (IS_ERR(prog2)) {
+		fdput(f);
+		bpf_prog_put(prog1);
+		return PTR_ERR(prog2);
+	}
+
+	err = map->ops->map_attach(map, prog1, prog2);
+	if (err) {
+		fdput(f);
+		bpf_prog_put(prog1);
+		bpf_prog_put(prog2);
+		return err;
+	}
+
+	fdput(f);
+	return err;
+}
 
 static int bpf_prog_attach(const union bpf_attr *attr)
 {
@@ -1116,10 +1159,16 @@ static int bpf_prog_attach(const union bpf_attr *attr)
 	case BPF_CGROUP_SOCK_OPS:
 		ptype = BPF_PROG_TYPE_SOCK_OPS;
 		break;
+	case BPF_CGROUP_SMAP_INGRESS:
+		ptype = BPF_PROG_TYPE_SK_SKB;
+		break;
 	default:
 		return -EINVAL;
 	}
 
+	if (attr->attach_type == BPF_CGROUP_SMAP_INGRESS)
+		return sockmap_get_from_fd(attr, ptype);
+
 	prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
 	if (IS_ERR(prog))
 		return PTR_ERR(prog);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ecc590e..a439ad5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1486,6 +1486,13 @@ static int check_map_func_compatibility(struct bpf_map *map, int func_id)
 	case BPF_MAP_TYPE_HASH_OF_MAPS:
 		if (func_id != BPF_FUNC_map_lookup_elem)
 			goto error;
+		break;
+	case BPF_MAP_TYPE_SOCKMAP:
+		if (func_id != BPF_FUNC_sk_redirect_map &&
+		    func_id != BPF_FUNC_sock_map_update &&
+		    func_id != BPF_FUNC_map_delete_elem)
+			goto error;
+		break;
 	default:
 		break;
 	}
@@ -1514,6 +1520,14 @@ static int check_map_func_compatibility(struct bpf_map *map, int func_id)
 		if (map->map_type != BPF_MAP_TYPE_DEVMAP)
 			goto error;
 		break;
+	case BPF_FUNC_sk_redirect_map:
+		if (map->map_type != BPF_MAP_TYPE_SOCKMAP)
+			goto error;
+		break;
+	case BPF_FUNC_sock_map_update:
+		if (map->map_type != BPF_MAP_TYPE_SOCKMAP)
+			goto error;
+		break;
 	default:
 		break;
 	}
diff --git a/net/core/filter.c b/net/core/filter.c
index c1108bf..934b92f 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -1858,6 +1858,45 @@ int skb_do_redirect(struct sk_buff *skb)
 	.arg3_type      = ARG_ANYTHING,
 };
 
+BPF_CALL_3(bpf_sk_redirect_map, struct bpf_map *, map, u32, key, u64, flags)
+{
+	struct redirect_info *ri = this_cpu_ptr(&redirect_info);
+
+	if (unlikely(flags))
+		return SK_ABORTED;
+
+	ri->ifindex = key;
+	ri->flags = flags;
+	ri->map = map;
+
+	return SK_REDIRECT;
+}
+
+struct sock *do_sk_redirect_map(void)
+{
+	struct redirect_info *ri = this_cpu_ptr(&redirect_info);
+	struct sock *sk = NULL;
+
+	if (ri->map) {
+		sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
+
+		ri->ifindex = 0;
+		ri->map = NULL;
+		/* we do not clear flags for future lookup */
+	}
+
+	return sk;
+}
+
+static const struct bpf_func_proto bpf_sk_redirect_map_proto = {
+	.func           = bpf_sk_redirect_map,
+	.gpl_only       = false,
+	.ret_type       = RET_INTEGER,
+	.arg1_type      = ARG_CONST_MAP_PTR,
+	.arg2_type      = ARG_ANYTHING,
+	.arg3_type      = ARG_ANYTHING,
+};
+
 BPF_CALL_1(bpf_get_cgroup_classid, const struct sk_buff *, skb)
 {
 	return task_get_classid(skb);
@@ -3229,6 +3268,8 @@ static unsigned long bpf_xdp_copy(void *dst_buff, const void *src_buff,
 	switch (func_id) {
 	case BPF_FUNC_setsockopt:
 		return &bpf_setsockopt_proto;
+	case BPF_FUNC_sock_map_update:
+		return &bpf_sock_map_update_proto;
 	default:
 		return bpf_base_func_proto(func_id);
 	}
@@ -3243,6 +3284,8 @@ static const struct bpf_func_proto *sk_skb_func_proto(enum bpf_func_id func_id)
 		return &bpf_get_socket_cookie_proto;
 	case BPF_FUNC_get_socket_uid:
 		return &bpf_get_socket_uid_proto;
+	case BPF_FUNC_sk_redirect_map:
+		return &bpf_sk_redirect_map_proto;
 	default:
 		return bpf_base_func_proto(func_id);
 	}

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [net-next PATCH 07/10] bpf: add access to sock fields and pkt data from sk_skb programs
  2017-08-16  5:30 [net-next PATCH 00/10] BPF: sockmap and sk redirect support John Fastabend
                   ` (5 preceding siblings ...)
  2017-08-16  5:32 ` [net-next PATCH 06/10] bpf: sockmap with sk redirect support John Fastabend
@ 2017-08-16  5:33 ` John Fastabend
  2017-08-17  5:42   ` Alexei Starovoitov
  2017-08-16  5:33 ` [net-next PATCH 08/10] bpf: sockmap sample program John Fastabend
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 31+ messages in thread
From: John Fastabend @ 2017-08-16  5:33 UTC (permalink / raw)
  To: davem, daniel, ast; +Cc: tgraf, netdev, john.fastabend, tom

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 include/uapi/linux/bpf.h |    9 ++
 kernel/bpf/verifier.c    |    1 
 net/core/filter.c        |  169 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 179 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 7f77476..5ecbe81 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -712,6 +712,15 @@ struct __sk_buff {
 	__u32 data;
 	__u32 data_end;
 	__u32 napi_id;
+
+	/* Accessed by BPF_PROG_TYPE_SK_SKB programs */
+	__u32 family;
+	__u32 remote_ip4;	/* Stored in network byte order */
+	__u32 local_ip4;	/* Stored in network byte order */
+	__u32 remote_ip6[4];	/* Stored in network byte order */
+	__u32 local_ip6[4];	/* Stored in network byte order */
+	__u32 remote_port;	/* Stored in network byte order */
+	__u32 local_port;	/* Stored in host byte order */
 };
 
 struct bpf_tunnel_key {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a439ad5..bfed82e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -848,6 +848,7 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env,
 	case BPF_PROG_TYPE_SCHED_ACT:
 	case BPF_PROG_TYPE_XDP:
 	case BPF_PROG_TYPE_LWT_XMIT:
+	case BPF_PROG_TYPE_SK_SKB:
 		if (meta)
 			return meta->pkt_access;
 
diff --git a/net/core/filter.c b/net/core/filter.c
index 934b92f..9a9e31f 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3278,8 +3278,16 @@ static unsigned long bpf_xdp_copy(void *dst_buff, const void *src_buff,
 static const struct bpf_func_proto *sk_skb_func_proto(enum bpf_func_id func_id)
 {
 	switch (func_id) {
+	case BPF_FUNC_skb_store_bytes:
+		return &bpf_skb_store_bytes_proto;
 	case BPF_FUNC_skb_load_bytes:
 		return &bpf_skb_load_bytes_proto;
+	case BPF_FUNC_skb_pull_data:
+		return &bpf_skb_pull_data_proto;
+	case BPF_FUNC_skb_change_tail:
+		return &bpf_skb_change_tail_proto;
+	case BPF_FUNC_skb_change_head:
+		return &bpf_skb_change_head_proto;
 	case BPF_FUNC_get_socket_cookie:
 		return &bpf_get_socket_cookie_proto;
 	case BPF_FUNC_get_socket_uid:
@@ -3343,6 +3351,10 @@ static bool bpf_skb_is_valid_access(int off, int size, enum bpf_access_type type
 		if (off + size > offsetofend(struct __sk_buff, cb[4]))
 			return false;
 		break;
+	case bpf_ctx_range_till(struct __sk_buff, remote_ip6[0], remote_ip6[3]):
+	case bpf_ctx_range_till(struct __sk_buff, local_ip6[0], local_ip6[3]):
+	case bpf_ctx_range_till(struct __sk_buff, remote_ip4, remote_ip4):
+	case bpf_ctx_range_till(struct __sk_buff, local_ip4, local_ip4):
 	case bpf_ctx_range(struct __sk_buff, data):
 	case bpf_ctx_range(struct __sk_buff, data_end):
 		if (size != size_default)
@@ -3371,6 +3383,7 @@ static bool sk_filter_is_valid_access(int off, int size,
 	case bpf_ctx_range(struct __sk_buff, tc_classid):
 	case bpf_ctx_range(struct __sk_buff, data):
 	case bpf_ctx_range(struct __sk_buff, data_end):
+	case bpf_ctx_range_till(struct __sk_buff, family, local_port):
 		return false;
 	}
 
@@ -3392,6 +3405,7 @@ static bool lwt_is_valid_access(int off, int size,
 {
 	switch (off) {
 	case bpf_ctx_range(struct __sk_buff, tc_classid):
+	case bpf_ctx_range_till(struct __sk_buff, family, local_port):
 		return false;
 	}
 
@@ -3505,6 +3519,8 @@ static bool tc_cls_act_is_valid_access(int off, int size,
 	case bpf_ctx_range(struct __sk_buff, data_end):
 		info->reg_type = PTR_TO_PACKET_END;
 		break;
+	case bpf_ctx_range_till(struct __sk_buff, family, local_port):
+		return false;
 	}
 
 	return bpf_skb_is_valid_access(off, size, type, info);
@@ -3582,11 +3598,63 @@ static bool sock_ops_is_valid_access(int off, int size,
 	return __is_valid_sock_ops_access(off, size);
 }
 
+static int sk_skb_prologue(struct bpf_insn *insn_buf, bool direct_write,
+			   const struct bpf_prog *prog)
+{
+	struct bpf_insn *insn = insn_buf;
+
+	if (!direct_write)
+		return 0;
+
+	/* if (!skb->cloned)
+	 *       goto start;
+	 *
+	 * (Fast-path, otherwise approximation that we might be
+	 *  a clone, do the rest in helper.)
+	 */
+	*insn++ = BPF_LDX_MEM(BPF_B, BPF_REG_6, BPF_REG_1, CLONED_OFFSET());
+	*insn++ = BPF_ALU32_IMM(BPF_AND, BPF_REG_6, CLONED_MASK);
+	*insn++ = BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 7);
+
+	/* ret = bpf_skb_pull_data(skb, 0); */
+	*insn++ = BPF_MOV64_REG(BPF_REG_6, BPF_REG_1);
+	*insn++ = BPF_ALU64_REG(BPF_XOR, BPF_REG_2, BPF_REG_2);
+	*insn++ = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+			       BPF_FUNC_skb_pull_data);
+	/* if (!ret)
+	 *      goto restore;
+	 * return SK_DROP;
+	 */
+	*insn++ = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2);
+	*insn++ = BPF_ALU32_IMM(BPF_MOV, BPF_REG_0, SK_DROP);
+	*insn++ = BPF_EXIT_INSN();
+
+	/* restore: */
+	*insn++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_6);
+	/* start: */
+	*insn++ = prog->insnsi[0];
+
+	return insn - insn_buf;
+}
+
 static bool sk_skb_is_valid_access(int off, int size,
 				   enum bpf_access_type type,
 				   struct bpf_insn_access_aux *info)
 {
+	if (type == BPF_WRITE) {
+		switch (off) {
+		case bpf_ctx_range(struct __sk_buff, mark):
+		case bpf_ctx_range(struct __sk_buff, tc_index):
+		case bpf_ctx_range(struct __sk_buff, priority):
+			break;
+		default:
+			return false;
+		}
+	}
+
 	switch (off) {
+	case bpf_ctx_range(struct __sk_buff, tc_classid):
+		return false;
 	case bpf_ctx_range(struct __sk_buff, data):
 		info->reg_type = PTR_TO_PACKET;
 		break;
@@ -3781,6 +3849,106 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 		*insn++ = BPF_MOV64_IMM(si->dst_reg, 0);
 #endif
 		break;
+	case offsetof(struct __sk_buff, family):
+		BUILD_BUG_ON(FIELD_SIZEOF(struct sock_common, skc_family) != 2);
+
+		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, sk),
+				      si->dst_reg, si->src_reg,
+				      offsetof(struct sk_buff, sk));
+		*insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->dst_reg,
+				      bpf_target_off(struct sock_common,
+						     skc_family,
+						     2, target_size));
+		break;
+	case offsetof(struct __sk_buff, remote_ip4):
+		BUILD_BUG_ON(FIELD_SIZEOF(struct sock_common, skc_daddr) != 4);
+
+		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, sk),
+				      si->dst_reg, si->src_reg,
+				      offsetof(struct sk_buff, sk));
+		*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->dst_reg,
+				      bpf_target_off(struct sock_common,
+						     skc_daddr,
+						     4, target_size));
+		break;
+	case offsetof(struct __sk_buff, local_ip4):
+		BUILD_BUG_ON(FIELD_SIZEOF(struct sock_common,
+					  skc_rcv_saddr) != 4);
+
+		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, sk),
+				      si->dst_reg, si->src_reg,
+				      offsetof(struct sk_buff, sk));
+		*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->dst_reg,
+				      bpf_target_off(struct sock_common,
+						     skc_rcv_saddr,
+						     4, target_size));
+		break;
+	case offsetof(struct __sk_buff, remote_ip6[0]) ...
+	     offsetof(struct __sk_buff, remote_ip6[3]):
+#if IS_ENABLED(CONFIG_IPV6)
+		BUILD_BUG_ON(FIELD_SIZEOF(struct sock_common,
+					  skc_v6_daddr.s6_addr32[0]) != 4);
+
+		off = si->off;
+		off -= offsetof(struct __sk_buff, remote_ip6[0]);
+
+		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, sk),
+				      si->dst_reg, si->src_reg,
+				      offsetof(struct sk_buff, sk));
+		*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->dst_reg,
+				      offsetof(struct sock_common,
+					       skc_v6_daddr.s6_addr32[0]) +
+				      off);
+#else
+		*insn++ = BPF_MOV32_IMM(si->dst_reg, 0);
+#endif
+		break;
+	case offsetof(struct __sk_buff, local_ip6[0]) ...
+	     offsetof(struct __sk_buff, local_ip6[3]):
+#if IS_ENABLED(CONFIG_IPV6)
+		BUILD_BUG_ON(FIELD_SIZEOF(struct sock_common,
+					  skc_v6_rcv_saddr.s6_addr32[0]) != 4);
+
+		off = si->off;
+		off -= offsetof(struct __sk_buff, local_ip6[0]);
+
+		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, sk),
+				      si->dst_reg, si->src_reg,
+				      offsetof(struct sk_buff, sk));
+		*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->dst_reg,
+				      offsetof(struct sock_common,
+					       skc_v6_rcv_saddr.s6_addr32[0]) +
+				      off);
+#else
+		*insn++ = BPF_MOV32_IMM(si->dst_reg, 0);
+#endif
+		break;
+
+	case offsetof(struct __sk_buff, remote_port):
+		BUILD_BUG_ON(FIELD_SIZEOF(struct sock_common, skc_dport) != 2);
+
+		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, sk),
+				      si->dst_reg, si->src_reg,
+				      offsetof(struct sk_buff, sk));
+		*insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->dst_reg,
+				      bpf_target_off(struct sock_common,
+						     skc_dport,
+						     2, target_size));
+#ifndef __BIG_ENDIAN_BITFIELD
+		*insn++ = BPF_ALU32_IMM(BPF_LSH, si->dst_reg, 16);
+#endif
+		break;
+
+	case offsetof(struct __sk_buff, local_port):
+		BUILD_BUG_ON(FIELD_SIZEOF(struct sock_common, skc_num) != 2);
+
+		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, sk),
+				      si->dst_reg, si->src_reg,
+				      offsetof(struct sk_buff, sk));
+		*insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->dst_reg,
+				      bpf_target_off(struct sock_common,
+						     skc_num, 2, target_size));
+		break;
 	}
 
 	return insn - insn_buf;
@@ -4069,6 +4237,7 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 	.get_func_proto		= sk_skb_func_proto,
 	.is_valid_access	= sk_skb_is_valid_access,
 	.convert_ctx_access	= bpf_convert_ctx_access,
+	.gen_prologue		= sk_skb_prologue,
 };
 
 int sk_detach_filter(struct sock *sk)

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [net-next PATCH 08/10] bpf: sockmap sample program
  2017-08-16  5:30 [net-next PATCH 00/10] BPF: sockmap and sk redirect support John Fastabend
                   ` (6 preceding siblings ...)
  2017-08-16  5:33 ` [net-next PATCH 07/10] bpf: add access to sock fields and pkt data from sk_skb programs John Fastabend
@ 2017-08-16  5:33 ` John Fastabend
  2017-08-16  5:33 ` [net-next PATCH 09/10] bpf: selftests: add tests for new __sk_buff members John Fastabend
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: John Fastabend @ 2017-08-16  5:33 UTC (permalink / raw)
  To: davem, daniel, ast; +Cc: tgraf, netdev, john.fastabend, tom

This program binds a program to a cgroup and then matches hard
coded IP addresses and adds these to a sockmap.

This will receive messages from the backend and send them to
the client.

     client:X <---> frontend:10000 client:X <---> backend:10001

To keep things simple this is only designed for 1:1 connections
using hard coded values. A more complete example would allow many
backends and clients.

To run,

 # sockmap <cgroup2_dir>

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 samples/bpf/bpf_load.c                    |    8 +
 samples/sockmap/Makefile                  |   78 ++++++++
 samples/sockmap/sockmap_kern.c            |  110 +++++++++++
 samples/sockmap/sockmap_user.c            |  286 +++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h            |   46 +++++
 tools/lib/bpf/bpf.c                       |   14 +
 tools/lib/bpf/bpf.h                       |    4 
 tools/testing/selftests/bpf/bpf_helpers.h |    7 +
 8 files changed, 547 insertions(+), 6 deletions(-)
 create mode 100644 samples/sockmap/Makefile
 create mode 100644 samples/sockmap/sockmap_kern.c
 create mode 100644 samples/sockmap/sockmap_user.c

diff --git a/samples/bpf/bpf_load.c b/samples/bpf/bpf_load.c
index 899f403..a8552b8 100644
--- a/samples/bpf/bpf_load.c
+++ b/samples/bpf/bpf_load.c
@@ -65,6 +65,7 @@ static int load_and_attach(const char *event, struct bpf_insn *prog, int size)
 	bool is_cgroup_skb = strncmp(event, "cgroup/skb", 10) == 0;
 	bool is_cgroup_sk = strncmp(event, "cgroup/sock", 11) == 0;
 	bool is_sockops = strncmp(event, "sockops", 7) == 0;
+	bool is_sk_skb = strncmp(event, "sk_skb", 6) == 0;
 	size_t insns_cnt = size / sizeof(struct bpf_insn);
 	enum bpf_prog_type prog_type;
 	char buf[256];
@@ -92,6 +93,8 @@ static int load_and_attach(const char *event, struct bpf_insn *prog, int size)
 		prog_type = BPF_PROG_TYPE_CGROUP_SOCK;
 	} else if (is_sockops) {
 		prog_type = BPF_PROG_TYPE_SOCK_OPS;
+	} else if (is_sk_skb) {
+		prog_type = BPF_PROG_TYPE_SK_SKB;
 	} else {
 		printf("Unknown event '%s'\n", event);
 		return -1;
@@ -109,7 +112,7 @@ static int load_and_attach(const char *event, struct bpf_insn *prog, int size)
 	if (is_xdp || is_perf_event || is_cgroup_skb || is_cgroup_sk)
 		return 0;
 
-	if (is_socket || is_sockops) {
+	if (is_socket || is_sockops || is_sk_skb) {
 		if (is_socket)
 			event += 6;
 		else
@@ -567,7 +570,8 @@ static int do_load_bpf_file(const char *path, fixup_map_cb fixup_map)
 		    memcmp(shname, "perf_event", 10) == 0 ||
 		    memcmp(shname, "socket", 6) == 0 ||
 		    memcmp(shname, "cgroup/", 7) == 0 ||
-		    memcmp(shname, "sockops", 7) == 0) {
+		    memcmp(shname, "sockops", 7) == 0 ||
+		    memcmp(shname, "sk_skb", 6) == 0) {
 			ret = load_and_attach(shname, data->d_buf,
 					      data->d_size);
 			if (ret != 0)
diff --git a/samples/sockmap/Makefile b/samples/sockmap/Makefile
new file mode 100644
index 0000000..9291ab8
--- /dev/null
+++ b/samples/sockmap/Makefile
@@ -0,0 +1,78 @@
+# kbuild trick to avoid linker error. Can be omitted if a module is built.
+obj- := dummy.o
+
+# List of programs to build
+hostprogs-y := sockmap
+
+# Libbpf dependencies
+LIBBPF := ../../tools/lib/bpf/bpf.o
+
+HOSTCFLAGS += -I$(objtree)/usr/include
+HOSTCFLAGS += -I$(srctree)/tools/lib/
+HOSTCFLAGS += -I$(srctree)/tools/testing/selftests/bpf/
+HOSTCFLAGS += -I$(srctree)/tools/lib/ -I$(srctree)/tools/include
+HOSTCFLAGS += -I$(srctree)/tools/perf
+
+sockmap-objs := ../bpf/bpf_load.o $(LIBBPF) sockmap_user.o
+
+# Tell kbuild to always build the programs
+always := $(hostprogs-y)
+always += sockmap_kern.o
+
+HOSTLOADLIBES_sockmap += -lelf -lpthread
+
+# Allows pointing LLC/CLANG to a LLVM backend with bpf support, redefine on cmdline:
+#  make samples/bpf/ LLC=~/git/llvm/build/bin/llc CLANG=~/git/llvm/build/bin/clang
+LLC ?= llc
+CLANG ?= clang
+
+# Trick to allow make to be run from this directory
+all:
+	$(MAKE) -C ../../ $(CURDIR)/
+
+clean:
+	$(MAKE) -C ../../ M=$(CURDIR) clean
+	@rm -f *~
+
+$(obj)/syscall_nrs.s:	$(src)/syscall_nrs.c
+	$(call if_changed_dep,cc_s_c)
+
+$(obj)/syscall_nrs.h:	$(obj)/syscall_nrs.s FORCE
+	$(call filechk,offsets,__SYSCALL_NRS_H__)
+
+clean-files += syscall_nrs.h
+
+FORCE:
+
+
+# Verify LLVM compiler tools are available and bpf target is supported by llc
+.PHONY: verify_cmds verify_target_bpf $(CLANG) $(LLC)
+
+verify_cmds: $(CLANG) $(LLC)
+	@for TOOL in $^ ; do \
+		if ! (which -- "$${TOOL}" > /dev/null 2>&1); then \
+			echo "*** ERROR: Cannot find LLVM tool $${TOOL}" ;\
+			exit 1; \
+		else true; fi; \
+	done
+
+verify_target_bpf: verify_cmds
+	@if ! (${LLC} -march=bpf -mattr=help > /dev/null 2>&1); then \
+		echo "*** ERROR: LLVM (${LLC}) does not support 'bpf' target" ;\
+		echo "   NOTICE: LLVM version >= 3.7.1 required" ;\
+		exit 2; \
+	else true; fi
+
+$(src)/*.c: verify_target_bpf
+
+# asm/sysreg.h - inline assembly used by it is incompatible with llvm.
+# But, there is no easy way to fix it, so just exclude it since it is
+# useless for BPF samples.
+$(obj)/%.o: $(src)/%.c
+	$(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) -I$(obj) \
+		-D__KERNEL__ -D__ASM_SYSREG_H -Wno-unused-value -Wno-pointer-sign \
+		-Wno-compare-distinct-pointer-types \
+		-Wno-gnu-variable-sized-type-not-at-end \
+		-Wno-address-of-packed-member -Wno-tautological-compare \
+		-Wno-unknown-warning-option \
+		-O2 -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@
diff --git a/samples/sockmap/sockmap_kern.c b/samples/sockmap/sockmap_kern.c
new file mode 100644
index 0000000..6ff986f
--- /dev/null
+++ b/samples/sockmap/sockmap_kern.c
@@ -0,0 +1,110 @@
+/* Copyright (c) 2017 Covalent IO, Inc. http://covalent.io
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <uapi/linux/bpf.h>
+#include <uapi/linux/if_ether.h>
+#include <uapi/linux/if_packet.h>
+#include <uapi/linux/ip.h>
+#include "../../tools/testing/selftests/bpf/bpf_helpers.h"
+#include "../../tools/testing/selftests/bpf/bpf_endian.h"
+
+/* Sockmap sample program connects a client and a backend together
+ * using cgroups.
+ *
+ *    client:X <---> frontend:10000 client:X <---> backend:10001
+ *
+ * For simplicity we hard code values here and bind 1:1. The hard
+ * coded values are part of the setup in sockmap.sh script that
+ * is associated with this BPF program.
+ *
+ * The bpf_printk is verbose and prints information as connections
+ * are established and verdicts are decided.
+ */
+
+#define bpf_printk(fmt, ...)					\
+({								\
+	       char ____fmt[] = fmt;				\
+	       bpf_trace_printk(____fmt, sizeof(____fmt),	\
+				##__VA_ARGS__);			\
+})
+
+struct bpf_map_def SEC("maps") sock_map = {
+	.type = BPF_MAP_TYPE_SOCKMAP,
+	.key_size = sizeof(int),
+	.value_size = sizeof(int),
+	.max_entries = 20,
+};
+
+SEC("sk_skb1")
+int bpf_prog1(struct __sk_buff *skb)
+{
+	return skb->len;
+}
+
+SEC("sk_skb2")
+int bpf_prog2(struct __sk_buff *skb)
+{
+	__u32 lport = skb->local_port;
+	__u32 rport = skb->remote_port;
+	int ret = 0;
+
+	if (lport == 10000)
+		ret = 10;
+	else
+		ret = 1;
+
+	bpf_printk("sockmap: %d -> %d @ %d\n", lport, bpf_ntohl(rport), ret);
+	return bpf_sk_redirect_map(&sock_map, ret, 0);
+}
+
+SEC("sockops")
+int bpf_sockmap(struct bpf_sock_ops *skops)
+{
+	__u32 lport, rport;
+	int op, err = 0, index, key, ret;
+
+
+	op = (int) skops->op;
+
+	switch (op) {
+	case BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB:
+		lport = skops->local_port;
+		rport = skops->remote_port;
+
+		if (lport == 10000) {
+			ret = 1;
+			err = bpf_sock_map_update(skops, &sock_map, &ret,
+						  BPF_NOEXIST,
+						  BPF_SOCKMAP_STRPARSER);
+			bpf_printk("passive(%i -> %i) map ctx update err: %d\n",
+				   lport, bpf_ntohl(rport), err);
+		}
+		break;
+	case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
+		lport = skops->local_port;
+		rport = skops->remote_port;
+
+		if (bpf_ntohl(rport) == 10001) {
+			ret = 10;
+			err = bpf_sock_map_update(skops, &sock_map, &ret,
+						  BPF_NOEXIST,
+						  BPF_SOCKMAP_STRPARSER);
+			bpf_printk("active(%i -> %i) map ctx update err: %d\n",
+				   lport, bpf_ntohl(rport), err);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+char _license[] SEC("license") = "GPL";
diff --git a/samples/sockmap/sockmap_user.c b/samples/sockmap/sockmap_user.c
new file mode 100644
index 0000000..fb78f5a
--- /dev/null
+++ b/samples/sockmap/sockmap_user.c
@@ -0,0 +1,286 @@
+/* Copyright (c) 2017 Covalent IO, Inc. http://covalent.io
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/socket.h>
+#include <sys/ioctl.h>
+#include <sys/select.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+#include <unistd.h>
+#include <string.h>
+#include <errno.h>
+#include <sys/ioctl.h>
+#include <stdbool.h>
+#include <signal.h>
+#include <fcntl.h>
+
+#include <sys/time.h>
+#include <sys/types.h>
+
+#include <linux/netlink.h>
+#include <linux/socket.h>
+#include <linux/sock_diag.h>
+#include <linux/bpf.h>
+#include <linux/if_link.h>
+#include <assert.h>
+#include <libgen.h>
+
+#include "../bpf/bpf_load.h"
+#include "../bpf/bpf_util.h"
+#include "../bpf/libbpf.h"
+
+int running;
+void running_handler(int a);
+
+/* randomly selected ports for testing on lo */
+#define S1_PORT 10000
+#define S2_PORT 10001
+
+static int sockmap_test_sockets(int rate, int dot)
+{
+	int i, sc, err, max_fd, one = 1;
+	int s1, s2, c1, c2, p1, p2;
+	struct sockaddr_in addr;
+	struct timeval timeout;
+	char buf[1024] = {0};
+	int *fds[4] = {&s1, &s2, &c1, &c2};
+	fd_set w;
+
+	s1 = s2 = p1 = p2 = c1 = c2 = 0;
+
+	/* Init sockets */
+	for (i = 0; i < 4; i++) {
+		*fds[i] = socket(AF_INET, SOCK_STREAM, 0);
+		if (*fds[i] < 0) {
+			perror("socket failed()");
+			err = *fds[i];
+			goto out;
+		}
+	}
+
+	/* Allow reuse */
+	for (i = 0; i < 2; i++) {
+		err = setsockopt(*fds[i], SOL_SOCKET, SO_REUSEADDR,
+				 (char *)&one, sizeof(one));
+		if (err) {
+			perror("setsockopt failed()");
+			goto out;
+		}
+	}
+
+	/* Non-blocking sockets */
+	for (i = 0; i < 4; i++) {
+		err = ioctl(*fds[i], FIONBIO, (char *)&one);
+		if (err < 0) {
+			perror("ioctl failed()");
+			goto out;
+		}
+	}
+
+	/* Bind server sockets */
+	memset(&addr, 0, sizeof(struct sockaddr_in));
+	addr.sin_family = AF_INET;
+	addr.sin_addr.s_addr = inet_addr("127.0.0.1");
+
+	addr.sin_port = htons(S1_PORT);
+	err = bind(s1, (struct sockaddr *)&addr, sizeof(addr));
+	if (err < 0) {
+		perror("bind s1 failed()\n");
+		goto out;
+	}
+
+	addr.sin_port = htons(S2_PORT);
+	err = bind(s2, (struct sockaddr *)&addr, sizeof(addr));
+	if (err < 0) {
+		perror("bind s2 failed()\n");
+		goto out;
+	}
+
+	/* Listen server sockets */
+	addr.sin_port = htons(S1_PORT);
+	err = listen(s1, 32);
+	if (err < 0) {
+		perror("listen s1 failed()\n");
+		goto out;
+	}
+
+	addr.sin_port = htons(S2_PORT);
+	err = listen(s2, 32);
+	if (err < 0) {
+		perror("listen s2 failed()\n");
+		goto out;
+	}
+
+	/* Initiate Connect */
+	addr.sin_port = htons(S1_PORT);
+	err = connect(c1, (struct sockaddr *)&addr, sizeof(addr));
+	if (err < 0 && errno != EINPROGRESS) {
+		perror("connect c1 failed()\n");
+		goto out;
+	}
+
+	addr.sin_port = htons(S2_PORT);
+	err = connect(c2, (struct sockaddr *)&addr, sizeof(addr));
+	if (err < 0 && errno != EINPROGRESS) {
+		perror("connect c2 failed()\n");
+		goto out;
+	}
+
+	/* Accept Connections */
+	p1 = accept(s1, NULL, NULL);
+	if (p1 < 0) {
+		perror("accept s1 failed()\n");
+		goto out;
+	}
+
+	p2 = accept(s2, NULL, NULL);
+	if (p2 < 0) {
+		perror("accept s2 failed()\n");
+		goto out;
+	}
+
+	max_fd = p2;
+	timeout.tv_sec = 10;
+	timeout.tv_usec = 0;
+
+	printf("connected sockets: c1 <-> p1, c2 <-> p2\n");
+	printf("cgroups binding: c1(%i) <-> s1(%i) - - - c2(%i) <-> s2(%i)\n",
+		c1, s1, c2, s2);
+
+	/* Ping/Pong data from client to server */
+	sc = send(c1, buf, sizeof(buf), 0);
+	if (sc < 0) {
+		perror("send failed()\n");
+		goto out;
+	}
+
+	do {
+		int s, rc, i;
+
+		/* FD sets */
+		FD_ZERO(&w);
+		FD_SET(c1, &w);
+		FD_SET(c2, &w);
+		FD_SET(p1, &w);
+		FD_SET(p2, &w);
+
+		s = select(max_fd + 1, &w, NULL, NULL, &timeout);
+		if (s == -1) {
+			perror("select()");
+			break;
+		} else if (!s) {
+			fprintf(stderr, "unexpected timeout\n");
+			break;
+		}
+
+		for (i = 0; i <= max_fd && s > 0; ++i) {
+			if (!FD_ISSET(i, &w))
+				continue;
+
+			s--;
+
+			rc = recv(i, buf, sizeof(buf), 0);
+			if (rc < 0) {
+				if (errno != EWOULDBLOCK) {
+					perror("recv failed()\n");
+					break;
+				}
+			}
+
+			if (rc == 0) {
+				close(i);
+				break;
+			}
+
+			sc = send(i, buf, rc, 0);
+			if (sc < 0) {
+				perror("send failed()\n");
+				break;
+			}
+		}
+		sleep(rate);
+		if (dot) {
+			printf(".");
+			fflush(stdout);
+
+		}
+	} while (running);
+
+out:
+	close(s1);
+	close(s2);
+	close(p1);
+	close(p2);
+	close(c1);
+	close(c2);
+	return err;
+}
+
+int main(int argc, char **argv)
+{
+	int rate = 1, dot = 1;
+	char filename[256];
+	int err, cg_fd;
+	char *cg_path;
+
+	cg_path = argv[argc - 1];
+	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
+
+	running = 1;
+
+	/* catch SIGINT */
+	signal(SIGINT, running_handler);
+
+	if (load_bpf_file(filename)) {
+		fprintf(stderr, "load_bpf_file: (%s) %s\n",
+			filename, strerror(errno));
+		return 1;
+	}
+
+	/* Cgroup configuration */
+	cg_fd = open(cg_path, O_DIRECTORY | O_RDONLY);
+	if (cg_fd < 0) {
+		fprintf(stderr, "ERROR: (%i) open cg path failed: %s\n",
+			cg_fd, cg_path);
+		return cg_fd;
+	}
+
+	/* Attach programs to sockmap */
+	err = __bpf_prog_attach(prog_fd[0], prog_fd[1], map_fd[0],
+				BPF_CGROUP_SMAP_INGRESS, 0);
+	if (err) {
+		fprintf(stderr, "ERROR: bpf_prog_attach (sockmap): %d (%s)\n",
+			err, strerror(errno));
+		return err;
+	}
+
+	/* Attach to cgroups */
+	err = bpf_prog_attach(prog_fd[2], cg_fd, BPF_CGROUP_SOCK_OPS, 0);
+	if (err) {
+		fprintf(stderr, "ERROR: bpf_prog_attach (cgroups): %d (%s)\n",
+			err, strerror(errno));
+		return err;
+	}
+
+	err = sockmap_test_sockets(rate, dot);
+	if (err) {
+		fprintf(stderr, "ERROR: test socket failed: %d\n", err);
+		return err;
+	}
+	return 0;
+}
+
+void running_handler(int a)
+{
+	running = 0;
+}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index bf3b2e2..2d97dd2 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -110,6 +110,7 @@ enum bpf_map_type {
 	BPF_MAP_TYPE_ARRAY_OF_MAPS,
 	BPF_MAP_TYPE_HASH_OF_MAPS,
 	BPF_MAP_TYPE_DEVMAP,
+	BPF_MAP_TYPE_SOCKMAP,
 };
 
 enum bpf_prog_type {
@@ -127,6 +128,7 @@ enum bpf_prog_type {
 	BPF_PROG_TYPE_LWT_OUT,
 	BPF_PROG_TYPE_LWT_XMIT,
 	BPF_PROG_TYPE_SOCK_OPS,
+	BPF_PROG_TYPE_SK_SKB,
 };
 
 enum bpf_attach_type {
@@ -134,11 +136,18 @@ enum bpf_attach_type {
 	BPF_CGROUP_INET_EGRESS,
 	BPF_CGROUP_INET_SOCK_CREATE,
 	BPF_CGROUP_SOCK_OPS,
+	BPF_CGROUP_SMAP_INGRESS,
 	__MAX_BPF_ATTACH_TYPE
 };
 
 #define MAX_BPF_ATTACH_TYPE __MAX_BPF_ATTACH_TYPE
 
+enum bpf_sockmap_flags {
+	BPF_SOCKMAP_UNSPEC,
+	BPF_SOCKMAP_STRPARSER,
+	__MAX_BPF_SOCKMAP_FLAG
+};
+
 /* If BPF_F_ALLOW_OVERRIDE flag is used in BPF_PROG_ATTACH command
  * to the given target_fd cgroup the descendent cgroup will be able to
  * override effective bpf program that was inherited from this cgroup
@@ -210,6 +219,7 @@ enum bpf_attach_type {
 		__u32		attach_bpf_fd;	/* eBPF program to attach */
 		__u32		attach_type;
 		__u32		attach_flags;
+		__u32		attach_bpf_fd2;
 	};
 
 	struct { /* anonymous struct used by BPF_PROG_TEST_RUN command */
@@ -545,6 +555,23 @@ enum bpf_attach_type {
  *     @mode: operation mode (enum bpf_adj_room_mode)
  *     @flags: reserved for future use
  *     Return: 0 on success or negative error code
+ *
+ * int bpf_sk_redirect_map(map, key, flags)
+ *     Redirect skb to the sock in map stored at key.
+ *     @map: pointer to sockmap
+ *     @key: key to lookup sock in map
+ *     @flags: reserved for future use
+ *     Return: SK_REDIRECT
+ *
+ * int bpf_sock_map_update(skops, map, key, flags, map_flags)
+ *	@skops: pointer to bpf_sock_ops
+ *	@map: pointer to sockmap to update
+ *	@key: key to insert/update sock in map
+ *	@flags: same flags as map update elem
+ *	@map_flags: sock map specific flags
+ *	   bit 1: Enable strparser
+ *	   other bits: reserved
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -598,7 +625,9 @@ enum bpf_attach_type {
 	FN(set_hash),			\
 	FN(setsockopt),			\
 	FN(skb_adjust_room),		\
-	FN(redirect_map),
+	FN(redirect_map),		\
+	FN(sk_redirect_map),		\
+	FN(sock_map_update),
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
  * function eBPF program intends to call
@@ -675,6 +704,15 @@ struct __sk_buff {
 	__u32 data;
 	__u32 data_end;
 	__u32 napi_id;
+
+	/* accessed by BPF_PROG_TYPE_SK_SKB programs */
+	__u32 family;
+	__u32 remote_ip4;	/* Stored in network byte order */
+	__u32 local_ip4;	/* Stored in network byte order */
+	__u32 remote_ip6[4];	/* Stored in network byte order */
+	__u32 local_ip6[4];	/* Stored in network byte order */
+	__u32 remote_port;	/* Stored in network byte order */
+	__u32 local_port;	/* Stored in host byte order */
 };
 
 struct bpf_tunnel_key {
@@ -734,6 +772,12 @@ struct xdp_md {
 	__u32 data_end;
 };
 
+enum sk_action {
+	SK_ABORTED = 0,
+	SK_DROP,
+	SK_REDIRECT,
+};
+
 #define BPF_TAG_SIZE	8
 
 struct bpf_prog_info {
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index e5bbb09..7766015 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -211,20 +211,28 @@ int bpf_obj_get(const char *pathname)
 	return sys_bpf(BPF_OBJ_GET, &attr, sizeof(attr));
 }
 
-int bpf_prog_attach(int prog_fd, int target_fd, enum bpf_attach_type type,
-		    unsigned int flags)
+int __bpf_prog_attach(int prog_fd1, int prog_fd2, int target_fd,
+		      enum bpf_attach_type type,
+		      unsigned int flags)
 {
 	union bpf_attr attr;
 
 	bzero(&attr, sizeof(attr));
 	attr.target_fd	   = target_fd;
-	attr.attach_bpf_fd = prog_fd;
+	attr.attach_bpf_fd = prog_fd1;
+	attr.attach_bpf_fd2 = prog_fd2;
 	attr.attach_type   = type;
 	attr.attach_flags  = flags;
 
 	return sys_bpf(BPF_PROG_ATTACH, &attr, sizeof(attr));
 }
 
+int bpf_prog_attach(int prog_fd, int target_fd, enum bpf_attach_type type,
+		    unsigned int flags)
+{
+	return __bpf_prog_attach(prog_fd, 0, target_fd, type, flags);
+}
+
 int bpf_prog_detach(int target_fd, enum bpf_attach_type type)
 {
 	union bpf_attr attr;
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 418c86e..eaee585 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -50,6 +50,10 @@ int bpf_map_update_elem(int fd, const void *key, const void *value,
 int bpf_obj_get(const char *pathname);
 int bpf_prog_attach(int prog_fd, int attachable_fd, enum bpf_attach_type type,
 		    unsigned int flags);
+int __bpf_prog_attach(int prog1, int prog2,
+		      int attachable_fd,
+		      enum bpf_attach_type type,
+		      unsigned int flags);
 int bpf_prog_detach(int attachable_fd, enum bpf_attach_type type);
 int bpf_prog_test_run(int prog_fd, int repeat, void *data, __u32 size,
 		      void *data_out, __u32 *size_out, __u32 *retval,
diff --git a/tools/testing/selftests/bpf/bpf_helpers.h b/tools/testing/selftests/bpf/bpf_helpers.h
index acbd605..73092d4 100644
--- a/tools/testing/selftests/bpf/bpf_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_helpers.h
@@ -65,6 +65,13 @@ static int (*bpf_xdp_adjust_head)(void *ctx, int offset) =
 static int (*bpf_setsockopt)(void *ctx, int level, int optname, void *optval,
 			     int optlen) =
 	(void *) BPF_FUNC_setsockopt;
+static int (*bpf_sk_redirect_map)(void *map, int key, int flags) =
+	(void *) BPF_FUNC_sk_redirect_map;
+static int (*bpf_sock_map_update)(void *map, void *key, void *value,
+				  unsigned long long flags,
+				  unsigned long long map_flags) =
+	(void *) BPF_FUNC_sock_map_update;
+
 
 /* llvm builtin functions that eBPF C program may use to
  * emit BPF_LD_ABS and BPF_LD_IND instructions

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [net-next PATCH 09/10] bpf: selftests: add tests for new __sk_buff members
  2017-08-16  5:30 [net-next PATCH 00/10] BPF: sockmap and sk redirect support John Fastabend
                   ` (7 preceding siblings ...)
  2017-08-16  5:33 ` [net-next PATCH 08/10] bpf: sockmap sample program John Fastabend
@ 2017-08-16  5:33 ` John Fastabend
  2017-08-16  5:34 ` [net-next PATCH 10/10] bpf: selftests add sockmap tests John Fastabend
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: John Fastabend @ 2017-08-16  5:33 UTC (permalink / raw)
  To: davem, daniel, ast; +Cc: tgraf, netdev, john.fastabend, tom

This adds tests for accessing the new __sk_buff members from the
sk_skb program type.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 tools/testing/selftests/bpf/test_verifier.c |  152 +++++++++++++++++++++++++++
 1 file changed, 152 insertions(+)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 1b76712..c03542c 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -967,6 +967,158 @@ struct test_val {
 		.result = REJECT,
 	},
 	{
+		"invalid access __sk_buff family",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, family)),
+			BPF_EXIT_INSN(),
+		},
+		.errstr = "invalid bpf_context access",
+		.result = REJECT,
+	},
+	{
+		"invalid access __sk_buff remote_ip4",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, remote_ip4)),
+			BPF_EXIT_INSN(),
+		},
+		.errstr = "invalid bpf_context access",
+		.result = REJECT,
+	},
+	{
+		"invalid access __sk_buff local_ip4",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, local_ip4)),
+			BPF_EXIT_INSN(),
+		},
+		.errstr = "invalid bpf_context access",
+		.result = REJECT,
+	},
+	{
+		"invalid access __sk_buff remote_ip6",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, remote_ip6)),
+			BPF_EXIT_INSN(),
+		},
+		.errstr = "invalid bpf_context access",
+		.result = REJECT,
+	},
+	{
+		"invalid access __sk_buff local_ip6",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, local_ip6)),
+			BPF_EXIT_INSN(),
+		},
+		.errstr = "invalid bpf_context access",
+		.result = REJECT,
+	},
+	{
+		"invalid access __sk_buff remote_port",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, remote_port)),
+			BPF_EXIT_INSN(),
+		},
+		.errstr = "invalid bpf_context access",
+		.result = REJECT,
+	},
+	{
+		"invalid access __sk_buff local_port",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, local_port)),
+			BPF_EXIT_INSN(),
+		},
+		.errstr = "invalid bpf_context access",
+		.result = REJECT,
+	},
+	{
+		"valid access __sk_buff family",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, family)),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_SK_SKB,
+	},
+	{
+		"valid access __sk_buff remote_ip4",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, remote_ip4)),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_SK_SKB,
+	},
+	{
+		"valid access __sk_buff local_ip4",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, local_ip4)),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_SK_SKB,
+	},
+	{
+		"valid access __sk_buff remote_ip6",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, remote_ip6[0])),
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, remote_ip6[1])),
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, remote_ip6[2])),
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, remote_ip6[3])),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_SK_SKB,
+	},
+	{
+		"valid access __sk_buff local_ip6",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, local_ip6[0])),
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, local_ip6[1])),
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, local_ip6[2])),
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, local_ip6[3])),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_SK_SKB,
+	},
+	{
+		"valid access __sk_buff remote_port",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, remote_port)),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_SK_SKB,
+	},
+	{
+		"valid access __sk_buff local_port",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, local_port)),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_SK_SKB,
+	},
+	{
 		"check skb->mark is not writeable by sockets",
 		.insns = {
 			BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [net-next PATCH 10/10] bpf: selftests add sockmap tests
  2017-08-16  5:30 [net-next PATCH 00/10] BPF: sockmap and sk redirect support John Fastabend
                   ` (8 preceding siblings ...)
  2017-08-16  5:33 ` [net-next PATCH 09/10] bpf: selftests: add tests for new __sk_buff members John Fastabend
@ 2017-08-16  5:34 ` John Fastabend
  2017-08-16 15:25 ` [net-next PATCH 00/10] BPF: sockmap and sk redirect support Daniel Borkmann
  2017-08-16 18:28 ` David Miller
  11 siblings, 0 replies; 31+ messages in thread
From: John Fastabend @ 2017-08-16  5:34 UTC (permalink / raw)
  To: davem, daniel, ast; +Cc: tgraf, netdev, john.fastabend, tom

This generates a set of sockets, attaches the BPF programs, and sends
some simple traffic using a basic send/recv pattern. Additionally, a
set of negative tests ensures that adding/removing sockets from the
sockmap fails correctly.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 tools/lib/bpf/libbpf.c                             |   29 ++
 tools/lib/bpf/libbpf.h                             |    2 
 tools/testing/selftests/bpf/Makefile               |    2 
 tools/testing/selftests/bpf/sockmap_parse_prog.c   |   38 ++
 tools/testing/selftests/bpf/sockmap_verdict_prog.c |   48 +++
 tools/testing/selftests/bpf/test_maps.c            |  308 ++++++++++++++++++++
 tools/testing/selftests/bpf/test_progs.c           |   55 +---
 7 files changed, 443 insertions(+), 39 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/sockmap_parse_prog.c
 create mode 100644 tools/testing/selftests/bpf/sockmap_verdict_prog.c

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 1a2c07e..1cc3ea0 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -1744,3 +1744,32 @@ long libbpf_get_error(const void *ptr)
 		return PTR_ERR(ptr);
 	return 0;
 }
+
+int bpf_prog_load(const char *file, enum bpf_prog_type type,
+		  struct bpf_object **pobj, int *prog_fd)
+{
+	struct bpf_program *prog;
+	struct bpf_object *obj;
+	int err;
+
+	obj = bpf_object__open(file);
+	if (IS_ERR(obj))
+		return -ENOENT;
+
+	prog = bpf_program__next(NULL, obj);
+	if (!prog) {
+		bpf_object__close(obj);
+		return -ENOENT;
+	}
+
+	bpf_program__set_type(prog, type);
+	err = bpf_object__load(obj);
+	if (err) {
+		bpf_object__close(obj);
+		return -EINVAL;
+	}
+
+	*pobj = obj;
+	*prog_fd = bpf_program__fd(prog);
+	return 0;
+}
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 32c7252..7959086 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -243,4 +243,6 @@ int bpf_map__set_priv(struct bpf_map *map, void *priv,
 
 long libbpf_get_error(const void *ptr);
 
+int bpf_prog_load(const char *file, enum bpf_prog_type type,
+		  struct bpf_object **pobj, int *prog_fd);
 #endif
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 3c2e67d..f4b23d6 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -15,7 +15,7 @@ TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test
 	test_align
 
 TEST_GEN_FILES = test_pkt_access.o test_xdp.o test_l4lb.o test_tcp_estats.o test_obj_id.o \
-	test_pkt_md_access.o test_xdp_redirect.o
+	test_pkt_md_access.o test_xdp_redirect.o sockmap_parse_prog.o sockmap_verdict_prog.o
 
 TEST_PROGS := test_kmod.sh test_xdp_redirect.sh
 
diff --git a/tools/testing/selftests/bpf/sockmap_parse_prog.c b/tools/testing/selftests/bpf/sockmap_parse_prog.c
new file mode 100644
index 0000000..8b54531
--- /dev/null
+++ b/tools/testing/selftests/bpf/sockmap_parse_prog.c
@@ -0,0 +1,38 @@
+#include <linux/bpf.h>
+#include "bpf_helpers.h"
+#include "bpf_util.h"
+#include "bpf_endian.h"
+
+int _version SEC("version") = 1;
+
+#define bpf_printk(fmt, ...)					\
+({								\
+	       char ____fmt[] = fmt;				\
+	       bpf_trace_printk(____fmt, sizeof(____fmt),	\
+				##__VA_ARGS__);			\
+})
+
+SEC("sk_skb1")
+int bpf_prog1(struct __sk_buff *skb)
+{
+	void *data_end = (void *)(long) skb->data_end;
+	void *data = (void *)(long) skb->data;
+	__u32 lport = skb->local_port;
+	__u32 rport = skb->remote_port;
+	char *d = data;
+
+	if (data + 8 > data_end)
+		return skb->len;
+
+	/* This write/read is a bit pointless but tests the verifier and
+	 * strparser handler for read/write pkt data and access into sk
+	 * fields.
+	 */
+	d[0] = 1;
+
+	bpf_printk("data[0] = (%u): local_port %i remote %i\n",
+		   d[0], lport, bpf_ntohl(rport));
+	return skb->len;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/sockmap_verdict_prog.c b/tools/testing/selftests/bpf/sockmap_verdict_prog.c
new file mode 100644
index 0000000..d5f9447
--- /dev/null
+++ b/tools/testing/selftests/bpf/sockmap_verdict_prog.c
@@ -0,0 +1,48 @@
+#include <linux/bpf.h>
+#include "bpf_helpers.h"
+#include "bpf_util.h"
+#include "bpf_endian.h"
+
+int _version SEC("version") = 1;
+
+#define bpf_printk(fmt, ...)					\
+({								\
+	       char ____fmt[] = fmt;				\
+	       bpf_trace_printk(____fmt, sizeof(____fmt),	\
+				##__VA_ARGS__);			\
+})
+
+struct bpf_map_def SEC("maps") sock_map = {
+	.type = BPF_MAP_TYPE_SOCKMAP,
+	.key_size = sizeof(int),
+	.value_size = sizeof(int),
+	.max_entries = 20,
+};
+
+SEC("sk_skb2")
+int bpf_prog2(struct __sk_buff *skb)
+{
+	void *data_end = (void *)(long) skb->data_end;
+	void *data = (void *)(long) skb->data;
+	__u32 lport = skb->local_port;
+	__u32 rport = skb->remote_port;
+	char *d = data;
+
+	if (data + 8 > data_end)
+		return SK_DROP;
+
+	d[0] = 0xd;
+	d[1] = 0xe;
+	d[2] = 0xa;
+	d[3] = 0xd;
+	d[4] = 0xb;
+	d[5] = 0xe;
+	d[6] = 0xe;
+	d[7] = 0xf;
+
+	bpf_printk("data[0] = (%u): local_port %i remote %i\n",
+		   d[0], lport, bpf_ntohl(rport));
+	return bpf_sk_redirect_map(&sock_map, 5, 0);
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index c991ab6..40b2d1f 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -22,6 +22,7 @@
 #include <linux/bpf.h>
 
 #include <bpf/bpf.h>
+#include <bpf/libbpf.h>
 #include "bpf_util.h"
 
 static int map_flags;
@@ -453,6 +454,312 @@ static void test_devmap(int task, void *data)
 	close(fd);
 }
 
+#include <sys/socket.h>
+#include <sys/ioctl.h>
+#include <arpa/inet.h>
+#include <sys/select.h>
+#include <linux/err.h>
+#define SOCKMAP_PARSE_PROG "./sockmap_parse_prog.o"
+#define SOCKMAP_VERDICT_PROG "./sockmap_verdict_prog.o"
+static void test_sockmap(int task, void *data)
+{
+	int ports[] = {50200, 50201, 50202, 50204};
+	int err, i, fd, sfd[6] = {0xdeadbeef};
+	char buf[] = "hello sockmap user\n";
+	int one = 1, map_fd, s, sc, rc;
+	int parse_prog, verdict_prog;
+	struct bpf_map *bpf_map;
+	struct sockaddr_in addr;
+	struct bpf_object *obj;
+	struct timeval to;
+	__u32 key, value;
+	fd_set w;
+
+	/* Create some sockets to use with sockmap */
+	for (i = 0; i < 2; i++) {
+		sfd[i] = socket(AF_INET, SOCK_STREAM, 0);
+		if (sfd[i] < 0)
+			goto out;
+		err = setsockopt(sfd[i], SOL_SOCKET, SO_REUSEADDR,
+				 (char *)&one, sizeof(one));
+		if (err) {
+			printf("failed to setsockopt\n");
+			goto out;
+		}
+		err = ioctl(sfd[i], FIONBIO, (char *)&one);
+		if (err < 0) {
+			printf("failed to ioctl\n");
+			goto out;
+		}
+		memset(&addr, 0, sizeof(struct sockaddr_in));
+		addr.sin_family = AF_INET;
+		addr.sin_addr.s_addr = inet_addr("127.0.0.1");
+		addr.sin_port = htons(ports[i]);
+		err = bind(sfd[i], (struct sockaddr *)&addr, sizeof(addr));
+		if (err < 0) {
+			printf("failed to bind: err %i: %i:%i\n",
+			       err, i, sfd[i]);
+			goto out;
+		}
+		err = listen(sfd[i], 32);
+		if (err < 0) {
+			printf("failed to listen\n");
+			goto out;
+		}
+	}
+
+	for (i = 2; i < 4; i++) {
+		sfd[i] = socket(AF_INET, SOCK_STREAM, 0);
+		if (sfd[i] < 0)
+			goto out;
+		err = setsockopt(sfd[i], SOL_SOCKET, SO_REUSEADDR,
+				 (char *)&one, sizeof(one));
+		if (err) {
+			printf("failed to setsockopt\n");
+			goto out;
+		}
+		memset(&addr, 0, sizeof(struct sockaddr_in));
+		addr.sin_family = AF_INET;
+		addr.sin_addr.s_addr = inet_addr("127.0.0.1");
+		addr.sin_port = htons(ports[i - 2]);
+		err = connect(sfd[i], (struct sockaddr *)&addr, sizeof(addr));
+		if (err) {
+			printf("failed to connect\n");
+			goto out;
+		}
+	}
+
+
+	for (i = 4; i < 6; i++) {
+		sfd[i] = accept(sfd[i - 4], NULL, NULL);
+		if (sfd[i] < 0) {
+			printf("accept failed\n");
+			goto out;
+		}
+	}
+
+	/* Test sockmap with connected sockets */
+	fd = bpf_create_map(BPF_MAP_TYPE_SOCKMAP,
+			    sizeof(key), sizeof(value),
+			    6, 0);
+	if (fd < 0) {
+		printf("Failed to create sockmap %i\n", fd);
+		goto out_sockmap;
+	}
+
+	/* Nothing attached so these should fail */
+	for (i = 0; i < 6; i++) {
+		err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_ANY);
+		if (!err) {
+			printf("Failed invalid update sockmap '%i:%i'\n",
+			       i, sfd[i]);
+			goto out_sockmap;
+		}
+	}
+
+	/* Test attaching bad fds */
+	err = __bpf_prog_attach(-1, -2, fd, BPF_CGROUP_SMAP_INGRESS, 0);
+	if (!err) {
+		printf("Failed invalid prog attach\n");
+		goto out_sockmap;
+	}
+
+	/* Load SK_SKB program and Attach */
+	err = bpf_prog_load(SOCKMAP_PARSE_PROG,
+			    BPF_PROG_TYPE_SK_SKB, &obj, &parse_prog);
+	if (err) {
+		printf("Failed to load SK_SKB parse prog\n");
+		goto out_sockmap;
+	}
+
+	err = bpf_prog_load(SOCKMAP_VERDICT_PROG,
+			    BPF_PROG_TYPE_SK_SKB, &obj, &verdict_prog);
+	if (err) {
+		printf("Failed to load SK_SKB verdict prog\n");
+		goto out_sockmap;
+	}
+
+	bpf_map = bpf_object__find_map_by_name(obj, "sock_map");
+	if (IS_ERR(bpf_map)) {
+		printf("Failed to load map from verdict prog\n");
+		goto out_sockmap;
+	}
+
+	map_fd = bpf_map__fd(bpf_map);
+	if (map_fd < 0) {
+		printf("Failed to get map fd\n");
+		goto out_sockmap;
+	}
+
+	err = __bpf_prog_attach(parse_prog, verdict_prog, map_fd,
+				BPF_CGROUP_SMAP_INGRESS, 0);
+	if (err) {
+		printf("Failed bpf prog attach\n");
+		goto out_sockmap;
+	}
+
+	/* Test map update elem */
+	for (i = 0; i < 6; i++) {
+		err = bpf_map_update_elem(map_fd, &i, &sfd[i], BPF_ANY);
+		if (err) {
+			printf("Failed map_fd update sockmap %i '%i:%i'\n",
+			       err, i, sfd[i]);
+			goto out_sockmap;
+		}
+	}
+
+	/* Test map delete elem and remove send/recv sockets */
+	for (i = 2; i < 4; i++) {
+		err = bpf_map_delete_elem(map_fd, &i);
+		if (err) {
+			printf("Failed delete sockmap %i '%i:%i'\n",
+			       err, i, sfd[i]);
+			goto out_sockmap;
+		}
+	}
+
+	/* Test map send/recv */
+	sc = send(sfd[2], buf, 10, 0);
+	if (sc < 0) {
+		printf("Failed sockmap send\n");
+		goto out_sockmap;
+	}
+
+	FD_ZERO(&w);
+	FD_SET(sfd[3], &w);
+	to.tv_sec = 1;
+	to.tv_usec = 0;
+	s = select(sfd[3] + 1, &w, NULL, NULL, &to);
+	if (s == -1) {
+		perror("Failed sockmap select()");
+		goto out_sockmap;
+	} else if (!s) {
+		printf("Failed sockmap unexpected timeout\n");
+		goto out_sockmap;
+	}
+
+	if (!FD_ISSET(sfd[3], &w)) {
+		printf("Failed sockmap select/recv\n");
+		goto out_sockmap;
+	}
+
+	rc = recv(sfd[3], buf, sizeof(buf), 0);
+	if (rc < 0) {
+		printf("Failed sockmap recv\n");
+		goto out_sockmap;
+	}
+
+	/* Delete the rest of the elems, including some NULL elems */
+	for (i = 0; i < 6; i++) {
+		err = bpf_map_delete_elem(map_fd, &i);
+		if (err && (i == 0 || i == 1 || i >= 4)) {
+			printf("Failed delete sockmap %i '%i:%i'\n",
+			       err, i, sfd[i]);
+			goto out_sockmap;
+		} else if (!err && (i == 2 || i == 3)) {
+			printf("Failed null delete sockmap %i '%i:%i'\n",
+			       err, i, sfd[i]);
+			goto out_sockmap;
+		}
+	}
+
+	/* Test having multiple SMAPs open and active on same fds */
+	err = __bpf_prog_attach(parse_prog, verdict_prog, fd,
+				BPF_CGROUP_SMAP_INGRESS, 0);
+	if (err) {
+		printf("Failed fd bpf prog attach\n");
+		goto out_sockmap;
+	}
+
+	for (i = 0; i < 6; i++) {
+		err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_ANY);
+		if (err) {
+			printf("Failed fd update sockmap %i '%i:%i'\n",
+			       err, i, sfd[i]);
+			goto out_sockmap;
+		}
+	}
+
+	/* Test duplicate socket add of NOEXIST, ANY and EXIST */
+	i = 0;
+	err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_NOEXIST);
+	if (!err) {
+		printf("Failed BPF_NOEXIST create\n");
+		goto out_sockmap;
+	}
+
+	err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_ANY);
+	if (err) {
+		printf("Failed sockmap update BPF_ANY\n");
+		goto out_sockmap;
+	}
+
+	err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_EXIST);
+	if (err) {
+		printf("Failed sockmap update BPF_EXIST\n");
+		goto out_sockmap;
+	}
+
+	/* The above were pushing the fd into the same slot; try a different slot now */
+	i = 2;
+	err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_NOEXIST);
+	if (!err) {
+		printf("Failed BPF_NOEXIST create\n");
+		goto out_sockmap;
+	}
+
+	err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_ANY);
+	if (err) {
+		printf("Failed sockmap update BPF_ANY\n");
+		goto out_sockmap;
+	}
+
+	err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_EXIST);
+	if (err) {
+		printf("Failed sockmap update BPF_EXIST\n");
+		goto out_sockmap;
+	}
+
+	/* Try pushing the fd into a different map; this is not allowed at the
+	 * moment. Which programs would we use?
+	 */
+	err = bpf_map_update_elem(map_fd, &i, &sfd[i], BPF_NOEXIST);
+	if (!err) {
+		printf("Failed BPF_NOEXIST create\n");
+		goto out_sockmap;
+	}
+
+	err = bpf_map_update_elem(map_fd, &i, &sfd[i], BPF_ANY);
+	if (!err) {
+		printf("Failed sockmap update BPF_ANY\n");
+		goto out_sockmap;
+	}
+
+	err = bpf_map_update_elem(map_fd, &i, &sfd[i], BPF_EXIST);
+	if (!err) {
+		printf("Failed sockmap update BPF_EXIST\n");
+		goto out_sockmap;
+	}
+
+	/* Test map close sockets */
+	for (i = 0; i < 6; i++)
+		close(sfd[i]);
+	close(fd);
+	close(map_fd);
+	bpf_object__close(obj);
+	return;
+out:
+	for (i = 0; i < 6; i++)
+		close(sfd[i]);
+	printf("Failed to create sockmap '%i:%s'!\n", i, strerror(errno));
+	exit(1);
+out_sockmap:
+	for (i = 0; i < 6; i++)
+		close(sfd[i]);
+	close(fd);
+	exit(1);
+}
+
 #define MAP_SIZE (32 * 1024)
 
 static void test_map_large(void)
@@ -621,6 +928,7 @@ static void run_all_tests(void)
 	test_arraymap_percpu_many_keys();
 
 	test_devmap(0, NULL);
+	test_sockmap(0, NULL);
 
 	test_map_large();
 	test_map_parallel();
diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
index 1f7dd35..1cb0378 100644
--- a/tools/testing/selftests/bpf/test_progs.c
+++ b/tools/testing/selftests/bpf/test_progs.c
@@ -75,39 +75,6 @@
 	__ret;								\
 })
 
-static int bpf_prog_load(const char *file, enum bpf_prog_type type,
-			 struct bpf_object **pobj, int *prog_fd)
-{
-	struct bpf_program *prog;
-	struct bpf_object *obj;
-	int err;
-
-	obj = bpf_object__open(file);
-	if (IS_ERR(obj)) {
-		error_cnt++;
-		return -ENOENT;
-	}
-
-	prog = bpf_program__next(NULL, obj);
-	if (!prog) {
-		bpf_object__close(obj);
-		error_cnt++;
-		return -ENOENT;
-	}
-
-	bpf_program__set_type(prog, type);
-	err = bpf_object__load(obj);
-	if (err) {
-		bpf_object__close(obj);
-		error_cnt++;
-		return -EINVAL;
-	}
-
-	*pobj = obj;
-	*prog_fd = bpf_program__fd(prog);
-	return 0;
-}
-
 static int bpf_find_map(const char *test, struct bpf_object *obj,
 			const char *name)
 {
@@ -130,8 +97,10 @@ static void test_pkt_access(void)
 	int err, prog_fd;
 
 	err = bpf_prog_load(file, BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd);
-	if (err)
+	if (err) {
+		error_cnt++;
 		return;
+	}
 
 	err = bpf_prog_test_run(prog_fd, 100000, &pkt_v4, sizeof(pkt_v4),
 				NULL, NULL, &retval, &duration);
@@ -162,8 +131,10 @@ static void test_xdp(void)
 	int err, prog_fd, map_fd;
 
 	err = bpf_prog_load(file, BPF_PROG_TYPE_XDP, &obj, &prog_fd);
-	if (err)
+	if (err) {
+		error_cnt++;
 		return;
+	}
 
 	map_fd = bpf_find_map(__func__, obj, "vip2tnl");
 	if (map_fd < 0)
@@ -223,8 +194,10 @@ static void test_l4lb(void)
 	u32 *magic = (u32 *)buf;
 
 	err = bpf_prog_load(file, BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd);
-	if (err)
+	if (err) {
+		error_cnt++;
 		return;
+	}
 
 	map_fd = bpf_find_map(__func__, obj, "vip_map");
 	if (map_fd < 0)
@@ -280,8 +253,10 @@ static void test_tcp_estats(void)
 
 	err = bpf_prog_load(file, BPF_PROG_TYPE_TRACEPOINT, &obj, &prog_fd);
 	CHECK(err, "", "err %d errno %d\n", err, errno);
-	if (err)
+	if (err) {
+		error_cnt++;
 		return;
+	}
 
 	bpf_object__close(obj);
 }
@@ -336,6 +311,8 @@ static void test_bpf_obj_id(void)
 		/* test_obj_id.o is a dumb prog. It should never fail
 		 * to load.
 		 */
+		if (err)
+			error_cnt++;
 		assert(!err);
 
 		/* Check getting prog info */
@@ -496,8 +473,10 @@ static void test_pkt_md_access(void)
 	int err, prog_fd;
 
 	err = bpf_prog_load(file, BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd);
-	if (err)
+	if (err) {
+		error_cnt++;
 		return;
+	}
 
 	err = bpf_prog_test_run(prog_fd, 10, &pkt_v4, sizeof(pkt_v4),
 				NULL, NULL, &retval, &duration);

^ permalink raw reply related	[flat|nested] 31+ messages in thread
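
The socket plumbing at the top of `test_sockmap()` above is the standard loopback listen/connect/accept dance. A minimal standalone sketch of the same sequence (`make_loopback_pair` is a hypothetical helper; it uses port 0 so the kernel picks a free port rather than the selftest's hardcoded ports, blocking sockets for brevity, and minimal error handling):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Build one connected TCP pair over 127.0.0.1: listen, connect,
 * accept. Returns 0 on success with *client/*server set, -1 on error. */
static int make_loopback_pair(int *client, int *server)
{
	struct sockaddr_in addr = {0};
	socklen_t len = sizeof(addr);
	int one = 1;
	int lfd = socket(AF_INET, SOCK_STREAM, 0);

	if (lfd < 0)
		return -1;
	setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = inet_addr("127.0.0.1");
	addr.sin_port = 0;	/* let the kernel choose a free port */
	if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) ||
	    listen(lfd, 32) ||
	    getsockname(lfd, (struct sockaddr *)&addr, &len))
		goto err;
	*client = socket(AF_INET, SOCK_STREAM, 0);
	if (*client < 0 ||
	    connect(*client, (struct sockaddr *)&addr, sizeof(addr)))
		goto err;
	*server = accept(lfd, NULL, NULL);
	if (*server < 0)
		goto err;
	close(lfd);
	return 0;
err:
	close(lfd);
	return -1;
}
```

The selftest needs two such listener/client pairs because it exercises redirect between distinct established sockets, not just echo on one.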

* Re: [net-next PATCH 00/10] BPF: sockmap and sk redirect support
  2017-08-16  5:30 [net-next PATCH 00/10] BPF: sockmap and sk redirect support John Fastabend
                   ` (9 preceding siblings ...)
  2017-08-16  5:34 ` [net-next PATCH 10/10] bpf: selftests add sockmap tests John Fastabend
@ 2017-08-16 15:25 ` Daniel Borkmann
  2017-08-16 18:28 ` David Miller
  11 siblings, 0 replies; 31+ messages in thread
From: Daniel Borkmann @ 2017-08-16 15:25 UTC (permalink / raw)
  To: John Fastabend, davem, ast; +Cc: tgraf, netdev, tom

On 08/16/2017 07:30 AM, John Fastabend wrote:
> This series implements a sockmap and socket redirect helper for BPF
> using a model similar to XDP netdev redirect. A sockmap is a BPF map
> type that holds references to sock structs. Then with a new sk
> redirect bpf helper BPF programs can use the map to redirect skbs
> between sockets,
>
>        bpf_sk_redirect_map(map, key, flags)
>
> Finally, we need a call site to attach our BPF logic to do socket
> redirects. We added hooks to recv_sock using the existing strparser
> infrastructure to do this. The call site is added via the BPF attach
> map call. To enable users to use this infrastructure a new BPF program
> BPF_PROG_TYPE_SK_SKB is created that allows users to reference sock
> details, such as port and ip address fields, to build useful socket
> layer program. The sockmap datapath is as follows,
>
>       recv -> strparser -> verdict/action
>
> where this series implements the drop and redirect actions.
> Additional, actions can be added as needed.
>
> A sample program is provided to illustrate how a sockmap can
> be integrated with cgroups and used to add/delete sockets in
> a sockmap. The program is simple but should show many of the
> key ideas.
>
> To test this work test_maps in selftests/bpf was leveraged.
> We added a set of tests to add sockets and do send/recv ops
> on the sockets to ensure correct behavior. Additionally, the
> selftests tests a series of negative test cases. We can expand
> on this in the future.
>
> I also have a basic test program I use with iperf/netperf
> clients that could be sent as an additional sample if folks
> want this. It needs a bit of cleanup to send to the list and
> wasn't included in this series.
>
> For people who prefer git over pulling patches out of their mail
> editor I've posted the code here,
>
> https://github.com/jrfastab/linux-kernel-xdp/tree/sockmap
>
> For some background information on the genesis of this work
> it might be helpful to review these slides from netconf 2017
> by Thomas Graf,
>
> http://vger.kernel.org/netconf2017.html
> https://docs.google.com/a/covalent.io/presentation/d/1dwSKSBGpUHD3WO5xxzZWj8awV_-xL-oYhvqQMOBhhtk/edit?usp=sharing
>
> Thanks to Daniel Borkmann for reviewing and providing initial
> feedback.

LGTM, for the set:

Acked-by: Daniel Borkmann <daniel@iogearbox.net>

* Re: [net-next PATCH 00/10] BPF: sockmap and sk redirect support
  2017-08-16  5:30 [net-next PATCH 00/10] BPF: sockmap and sk redirect support John Fastabend
                   ` (10 preceding siblings ...)
  2017-08-16 15:25 ` [net-next PATCH 00/10] BPF: sockmap and sk redirect support Daniel Borkmann
@ 2017-08-16 18:28 ` David Miller
  2017-08-16 18:35   ` David Miller
  11 siblings, 1 reply; 31+ messages in thread
From: David Miller @ 2017-08-16 18:28 UTC (permalink / raw)
  To: john.fastabend; +Cc: daniel, ast, tgraf, netdev, tom

From: John Fastabend <john.fastabend@gmail.com>
Date: Tue, 15 Aug 2017 22:30:15 -0700

> This series implements a sockmap and socket redirect helper for BPF
> using a model similar to XDP netdev redirect.

Series applied, thanks John.

* Re: [net-next PATCH 00/10] BPF: sockmap and sk redirect support
  2017-08-16 18:28 ` David Miller
@ 2017-08-16 18:35   ` David Miller
  2017-08-16 19:06     ` John Fastabend
  0 siblings, 1 reply; 31+ messages in thread
From: David Miller @ 2017-08-16 18:35 UTC (permalink / raw)
  To: john.fastabend; +Cc: daniel, ast, tgraf, netdev, tom

From: David Miller <davem@davemloft.net>
Date: Wed, 16 Aug 2017 11:28:19 -0700 (PDT)

> From: John Fastabend <john.fastabend@gmail.com>
> Date: Tue, 15 Aug 2017 22:30:15 -0700
> 
>> This series implements a sockmap and socket redirect helper for BPF
>> using a model similar to XDP netdev redirect.
> 
> Series applied, thanks John.
> 

We get a legit warning from gcc due to these changes:

kernel/bpf/sockmap.c: In function ‘smap_state_change’:
kernel/bpf/sockmap.c:156:21: warning: ‘psock’ may be used uninitialized in this function [-Wmaybe-uninitialized]
  struct smap_psock *psock;

It's the default switch case in this function; psock is definitely not
initialized at this point.

* Re: [net-next PATCH 00/10] BPF: sockmap and sk redirect support
  2017-08-16 18:35   ` David Miller
@ 2017-08-16 19:06     ` John Fastabend
  2017-08-16 19:13       ` David Miller
  0 siblings, 1 reply; 31+ messages in thread
From: John Fastabend @ 2017-08-16 19:06 UTC (permalink / raw)
  To: David Miller; +Cc: daniel, ast, tgraf, netdev, tom

On 08/16/2017 11:35 AM, David Miller wrote:
> From: David Miller <davem@davemloft.net>
> Date: Wed, 16 Aug 2017 11:28:19 -0700 (PDT)
> 
>> From: John Fastabend <john.fastabend@gmail.com>
>> Date: Tue, 15 Aug 2017 22:30:15 -0700
>>
>>> This series implements a sockmap and socket redirect helper for BPF
>>> using a model similar to XDP netdev redirect.
>>
>> Series applied, thanks John.
>>
> 
> We get a legit warning from gcc due to these changes:
> 
> kernel/bpf/sockmap.c: In function ‘smap_state_change’:
> kernel/bpf/sockmap.c:156:21: warning: ‘psock’ may be used uninitialized in this function [-Wmaybe-uninitialized]
>   struct smap_psock *psock;
> 
> It's the default switch case in this function, psock is not initialized
> for sure at this point.
> 

I missed this with older gcc4 it seems. Fixed now and compiling without errors
with gcc4/5. Below is the diff, and all verifier/sockmap tests pass. Want a
v2, I presume?

diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 792f0ad..f7e5e6c 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -188,6 +188,9 @@ static void smap_state_change(struct sock *sk)
                        smap_release_sock(sk);
                break;
        default:
+               psock = smap_psock_sk(sk);
+               if (unlikely(!psock))
+                       break;
                smap_report_sk_error(psock, EPIPE);
                break;
        }
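
The fixup above follows the usual cure for `-Wmaybe-uninitialized` in a switch: make sure the variable is assigned on every path that reads it. A tiny userspace reproduction of the shape (the names `lookup`/`handle_state` and the table are made up, not kernel APIs):

```c
#include <assert.h>
#include <stddef.h>

static int table[4] = {10, 20, 30, 40};

static int *lookup(int key)
{
	return (key >= 0 && key < 4) ? &table[key] : NULL;
}

static int handle_state(int state, int key)
{
	int *entry;

	switch (state) {
	case 1:
		entry = lookup(key);
		return entry ? *entry : -1;
	default:
		/* Before the fix, 'entry' was read on this path without
		 * ever being assigned; assign it here and bail out on a
		 * miss, mirroring the sockmap change. */
		entry = lookup(key);
		if (!entry)
			return -1;
		return -*entry;	/* "error report" path uses a valid pointer */
	}
}
```

Older gcc versions are less aggressive about this analysis, which is why the warning only showed up on some compilers.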

* Re: [net-next PATCH 00/10] BPF: sockmap and sk redirect support
  2017-08-16 19:06     ` John Fastabend
@ 2017-08-16 19:13       ` David Miller
  2017-08-16 19:17         ` Eric Dumazet
  0 siblings, 1 reply; 31+ messages in thread
From: David Miller @ 2017-08-16 19:13 UTC (permalink / raw)
  To: john.fastabend; +Cc: daniel, ast, tgraf, netdev, tom

From: John Fastabend <john.fastabend@gmail.com>
Date: Wed, 16 Aug 2017 12:06:36 -0700

> On 08/16/2017 11:35 AM, David Miller wrote:
>> From: David Miller <davem@davemloft.net>
>> Date: Wed, 16 Aug 2017 11:28:19 -0700 (PDT)
>> 
>>> From: John Fastabend <john.fastabend@gmail.com>
>>> Date: Tue, 15 Aug 2017 22:30:15 -0700
>>>
>>>> This series implements a sockmap and socket redirect helper for BPF
>>>> using a model similar to XDP netdev redirect.
>>>
>>> Series applied, thanks John.
>>>
>> 
>> We get a legit warning from gcc due to these changes:
>> 
>> kernel/bpf/sockmap.c: In function ‘smap_state_change’:
>> kernel/bpf/sockmap.c:156:21: warning: ‘psock’ may be used uninitialized in this function [-Wmaybe-uninitialized]
>>   struct smap_psock *psock;
>> 
>> It's the default switch case in this function, psock is not initialized
>> for sure at this point.
>> 
> 
> I missed this with older gcc4 it seems. Fixed now and compiling without errors
> now with gcc4/5. Below is the diff and all verifier/sockmap tests pass. Want a
> v2 I presume?

I already pushed out v1, so you'll need to send me a fixup patch.

* Re: [net-next PATCH 00/10] BPF: sockmap and sk redirect support
  2017-08-16 19:13       ` David Miller
@ 2017-08-16 19:17         ` Eric Dumazet
  2017-08-16 19:34           ` John Fastabend
  0 siblings, 1 reply; 31+ messages in thread
From: Eric Dumazet @ 2017-08-16 19:17 UTC (permalink / raw)
  To: David Miller; +Cc: john.fastabend, daniel, ast, tgraf, netdev, tom

On Wed, 2017-08-16 at 12:13 -0700, David Miller wrote:
> From: John Fastabend <john.fastabend@gmail.com>
> Date: Wed, 16 Aug 2017 12:06:36 -0700
> 
> > On 08/16/2017 11:35 AM, David Miller wrote:
> >> From: David Miller <davem@davemloft.net>
> >> Date: Wed, 16 Aug 2017 11:28:19 -0700 (PDT)
> >> 
> >>> From: John Fastabend <john.fastabend@gmail.com>
> >>> Date: Tue, 15 Aug 2017 22:30:15 -0700
> >>>
> >>>> This series implements a sockmap and socket redirect helper for BPF
> >>>> using a model similar to XDP netdev redirect.
> >>>
> >>> Series applied, thanks John.
> >>>
> >> 
> >> We get a legit warning from gcc due to these changes:
> >> 
> >> kernel/bpf/sockmap.c: In function ‘smap_state_change’:
> >> kernel/bpf/sockmap.c:156:21: warning: ‘psock’ may be used uninitialized in this function [-Wmaybe-uninitialized]
> >>   struct smap_psock *psock;
> >> 
> >> It's the default switch case in this function, psock is not initialized
> >> for sure at this point.
> >> 
> > 
> > I missed this with older gcc4 it seems. Fixed now and compiling without errors
> > now with gcc4/5. Below is the diff and all verifier/sockmap tests pass. Want a
> > v2 I presume?
> 
> I already pushed out v1, so you'll need to send me a fixup patch.

I also have a build error.

$ git grep -n __sock_map_lookup_elem
include/linux/bpf.h:316:struct sock  *__sock_map_lookup_elem(struct bpf_map *map, u32 key);
kernel/bpf/sockmap.c:558:struct sock  *__sock_map_lookup_elem(struct bpf_map *map, u32 key)
net/core/filter.c:1881:         sk = __sock_map_lookup_elem(ri->map, ri->ifindex);



$ make ...
...
net/core/filter.c: In function ‘do_sk_redirect_map’:
net/core/filter.c:1881:3: error: implicit declaration of function ‘__sock_map_lookup_elem’ [-Werror=implicit-function-declaration]
   sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
   ^
net/core/filter.c:1881:6: warning: assignment makes pointer from integer without a cast [enabled by default]
   sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
      ^
cc1: some warnings being treated as errors
make[2]: *** [net/core/filter.o] Error 1
make[2]: *** Waiting for unfinished jobs....
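
An implicit-declaration error like this usually means the declaration only exists under some configuration. The common cure is to always declare the function and supply a static-inline stub when the feature is compiled out; a sketch of that pattern, where `HAVE_SOCKMAP` is a hypothetical stand-in for the real Kconfig symbol and the function name only echoes the one in the error, not the kernel prototype:

```c
#include <assert.h>
#include <stddef.h>

/* The "header" side: callers always see some definition, so they
 * compile and link whether or not the feature is built in. */
#ifdef HAVE_SOCKMAP
void *sock_map_lookup(unsigned int key);	/* real version built elsewhere */
#else
static inline void *sock_map_lookup(unsigned int key)
{
	(void)key;
	return NULL;	/* feature compiled out: every lookup misses */
}
#endif
```

With the stub in place, the caller in filter.c would compile cleanly in both configurations instead of tripping `-Werror=implicit-function-declaration`.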

* Re: [net-next PATCH 00/10] BPF: sockmap and sk redirect support
  2017-08-16 19:17         ` Eric Dumazet
@ 2017-08-16 19:34           ` John Fastabend
  2017-08-16 21:22             ` David Miller
  2017-08-16 21:35             ` David Ahern
  0 siblings, 2 replies; 31+ messages in thread
From: John Fastabend @ 2017-08-16 19:34 UTC (permalink / raw)
  To: Eric Dumazet, David Miller; +Cc: daniel, ast, tgraf, netdev, tom

On 08/16/2017 12:17 PM, Eric Dumazet wrote:
> On Wed, 2017-08-16 at 12:13 -0700, David Miller wrote:
>> From: John Fastabend <john.fastabend@gmail.com>
>> Date: Wed, 16 Aug 2017 12:06:36 -0700
>>
>>> On 08/16/2017 11:35 AM, David Miller wrote:
>>>> From: David Miller <davem@davemloft.net>
>>>> Date: Wed, 16 Aug 2017 11:28:19 -0700 (PDT)
>>>>
>>>>> From: John Fastabend <john.fastabend@gmail.com>
>>>>> Date: Tue, 15 Aug 2017 22:30:15 -0700
>>>>>
>>>>>> This series implements a sockmap and socket redirect helper for BPF
>>>>>> using a model similar to XDP netdev redirect.
>>>>>
>>>>> Series applied, thanks John.
>>>>>
>>>>
>>>> We get a legit warning from gcc due to these changes:
>>>>
>>>> kernel/bpf/sockmap.c: In function ‘smap_state_change’:
>>>> kernel/bpf/sockmap.c:156:21: warning: ‘psock’ may be used uninitialized in this function [-Wmaybe-uninitialized]
>>>>   struct smap_psock *psock;
>>>>
>>>> It's the default switch case in this function, psock is not initialized
>>>> for sure at this point.
>>>>
>>>
>>> I missed this with older gcc4 it seems. Fixed now and compiling without errors
>>> now with gcc4/5. Below is the diff and all verifier/sockmap tests pass. Want a
>>> v2 I presume?
>>
>> I already pushed out v1, so you'll need to send me a fixup patch.
> 
> I also have a build error.
> 
> $ git grep -n __sock_map_lookup_elem
> include/linux/bpf.h:316:struct sock  *__sock_map_lookup_elem(struct bpf_map *map, u32 key);
> kernel/bpf/sockmap.c:558:struct sock  *__sock_map_lookup_elem(struct bpf_map *map, u32 key)
> net/core/filter.c:1881:         sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
> 
> 
> 
> $ make ...
> ...
> net/core/filter.c: In function ‘do_sk_redirect_map’:
> net/core/filter.c:1881:3: error: implicit declaration of function ‘__sock_map_lookup_elem’ [-Werror=implicit-function-declaration]
>    sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
>    ^
> net/core/filter.c:1881:6: warning: assignment makes pointer from integer without a cast [enabled by default]
>    sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
>       ^
> cc1: some warnings being treated as errors
> make[2]: *** [net/core/filter.o] Error 1
> make[2]: *** Waiting for unfinished jobs....
> 
> 

Thanks Eric, I'll have a fix shortly.

* Re: [net-next PATCH 00/10] BPF: sockmap and sk redirect support
  2017-08-16 19:34           ` John Fastabend
@ 2017-08-16 21:22             ` David Miller
  2017-08-16 21:35             ` David Ahern
  1 sibling, 0 replies; 31+ messages in thread
From: David Miller @ 2017-08-16 21:22 UTC (permalink / raw)
  To: john.fastabend; +Cc: eric.dumazet, daniel, ast, tgraf, netdev, tom

From: John Fastabend <john.fastabend@gmail.com>
Date: Wed, 16 Aug 2017 12:34:44 -0700

> Thanks Eric, I'll have a fix shortly.

The faster you fix these regressions the better :-)

* Re: [net-next PATCH 00/10] BPF: sockmap and sk redirect support
  2017-08-16 19:34           ` John Fastabend
  2017-08-16 21:22             ` David Miller
@ 2017-08-16 21:35             ` David Ahern
  2017-08-16 21:37               ` John Fastabend
  1 sibling, 1 reply; 31+ messages in thread
From: David Ahern @ 2017-08-16 21:35 UTC (permalink / raw)
  To: John Fastabend, Eric Dumazet, David Miller
  Cc: daniel, ast, tgraf, netdev, tom

On 8/16/17 1:34 PM, John Fastabend wrote:
>> I also have a build error.
>>
>> $ git grep -n __sock_map_lookup_elem
>> include/linux/bpf.h:316:struct sock  *__sock_map_lookup_elem(struct bpf_map *map, u32 key);
>> kernel/bpf/sockmap.c:558:struct sock  *__sock_map_lookup_elem(struct bpf_map *map, u32 key)
>> net/core/filter.c:1881:         sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
>>
>>
>>
>> $ make ...
>> ...
>> net/core/filter.c: In function ‘do_sk_redirect_map’:
>> net/core/filter.c:1881:3: error: implicit declaration of function ‘__sock_map_lookup_elem’ [-Werror=implicit-function-declaration]
>>    sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
>>    ^
>> net/core/filter.c:1881:6: warning: assignment makes pointer from integer without a cast [enabled by default]
>>    sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
>>       ^
>> cc1: some warnings being treated as errors
>> make[2]: *** [net/core/filter.o] Error 1
>> make[2]: *** Waiting for unfinished jobs....
>>
>>
> 
> Thanks Eric, I'll have a fix shortly.
> 

And I have a different build error:

$ make O=kbuild/rcu-lock-debug/ -j24 -s
scripts/Makefile.kasan:25: CONFIG_KASAN: compiler does not support all
options. Trying minimal configuration
scripts/Makefile.kasan:25: CONFIG_KASAN: compiler does not support all
options. Trying minimal configuration
kernel/bpf/sockmap.o: In function `smap_stop_sock':
/home/dsa/kernel-2.git/kernel/bpf/sockmap.c:297: undefined reference to
`strp_stop'
kernel/bpf/sockmap.o: In function `smap_gc_work':
/home/dsa/kernel-2.git/kernel/bpf/sockmap.c:419: undefined reference to
`strp_done'
kernel/bpf/sockmap.o: In function `smap_data_ready':
/home/dsa/kernel-2.git/kernel/bpf/sockmap.c:216: undefined reference to
`strp_data_ready'
kernel/bpf/sockmap.o: In function `smap_init_sock':
/home/dsa/kernel-2.git/kernel/bpf/sockmap.c:373: undefined reference to
`strp_init'
/home/dsa/kernel-2.git/Makefile:1000: recipe for target 'vmlinux' failed
make[1]: *** [vmlinux] Error 1
Makefile:145: recipe for target 'sub-make' failed
make: *** [sub-make] Error 2

I'm guessing a missing CONFIG tie-in.

* Re: [net-next PATCH 00/10] BPF: sockmap and sk redirect support
  2017-08-16 21:35             ` David Ahern
@ 2017-08-16 21:37               ` John Fastabend
  0 siblings, 0 replies; 31+ messages in thread
From: John Fastabend @ 2017-08-16 21:37 UTC (permalink / raw)
  To: David Ahern, Eric Dumazet, David Miller; +Cc: daniel, ast, tgraf, netdev, tom

On 08/16/2017 02:35 PM, David Ahern wrote:
> On 8/16/17 1:34 PM, John Fastabend wrote:
>>> I also have a build error.
>>>
>>> $ git grep -n __sock_map_lookup_elem
>>> include/linux/bpf.h:316:struct sock  *__sock_map_lookup_elem(struct bpf_map *map, u32 key);
>>> kernel/bpf/sockmap.c:558:struct sock  *__sock_map_lookup_elem(struct bpf_map *map, u32 key)
>>> net/core/filter.c:1881:         sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
>>>
>>>
>>>
>>> $ make ...
>>> ...
>>> net/core/filter.c: In function ‘do_sk_redirect_map’:
>>> net/core/filter.c:1881:3: error: implicit declaration of function ‘__sock_map_lookup_elem’ [-Werror=implicit-function-declaration]
>>>    sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
>>>    ^
>>> net/core/filter.c:1881:6: warning: assignment makes pointer from integer without a cast [enabled by default]
>>>    sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
>>>       ^
>>> cc1: some warnings being treated as errors
>>> make[2]: *** [net/core/filter.o] Error 1
>>> make[2]: *** Waiting for unfinished jobs....
>>>
>>>
>>
>> Thanks Eric, I'll have a fix shortly.
>>
> 
> And I have a different build error:
> 
> $ make O=kbuild/rcu-lock-debug/ -j24 -s
> scripts/Makefile.kasan:25: CONFIG_KASAN: compiler does not support all
> options. Trying minimal configuration
> scripts/Makefile.kasan:25: CONFIG_KASAN: compiler does not support all
> options. Trying minimal configuration
> kernel/bpf/sockmap.o: In function `smap_stop_sock':
> /home/dsa/kernel-2.git/kernel/bpf/sockmap.c:297: undefined reference to
> `strp_stop'
> kernel/bpf/sockmap.o: In function `smap_gc_work':
> /home/dsa/kernel-2.git/kernel/bpf/sockmap.c:419: undefined reference to
> `strp_done'
> kernel/bpf/sockmap.o: In function `smap_data_ready':
> /home/dsa/kernel-2.git/kernel/bpf/sockmap.c:216: undefined reference to
> `strp_data_ready'
> kernel/bpf/sockmap.o: In function `smap_init_sock':
> /home/dsa/kernel-2.git/kernel/bpf/sockmap.c:373: undefined reference to
> `strp_init'
> /home/dsa/kernel-2.git/Makefile:1000: recipe for target 'vmlinux' failed
> make[1]: *** [vmlinux] Error 1
> Makefile:145: recipe for target 'sub-make' failed
> make: *** [sub-make] Error 2
> 
> I'm guessing a missing CONFIG tie in.
> 

Yep, those two are related. We have the fix now; just running a couple extra
build tests to be sure. In the future I think we will tie into the kbuild bot earlier.

Thanks,
John

* Re: [net-next PATCH 06/10] bpf: sockmap with sk redirect support
  2017-08-16  5:32 ` [net-next PATCH 06/10] bpf: sockmap with sk redirect support John Fastabend
@ 2017-08-17  5:40   ` Alexei Starovoitov
  2017-08-17 18:58     ` John Fastabend
  0 siblings, 1 reply; 31+ messages in thread
From: Alexei Starovoitov @ 2017-08-17  5:40 UTC (permalink / raw)
  To: John Fastabend, davem, daniel; +Cc: tgraf, netdev, tom

On 8/15/17 10:32 PM, John Fastabend wrote:
> +
> +static void smap_do_verdict(struct smap_psock *psock, struct sk_buff *skb)
> +{
> +	struct sock *sock;
> +	int rc;
> +
> +	/* Because we use per cpu values to feed input from sock redirect
> +	 * in BPF program to do_sk_redirect_map() call we need to ensure we
> +	 * are not preempted. RCU read lock is not sufficient in this case
> +	 * with CONFIG_PREEMPT_RCU enabled so we must be explicit here.
> +	 */
> +	preempt_disable();
> +	rc = smap_verdict_func(psock, skb);
> +	switch (rc) {
> +	case SK_REDIRECT:
> +		sock = do_sk_redirect_map();
> +		preempt_enable();
> +		if (likely(sock)) {
> +			struct smap_psock *peer = smap_psock_sk(sock);
> +
> +			if (likely(peer &&
> +				   test_bit(SMAP_TX_RUNNING, &peer->state) &&
> +				   sk_stream_memory_free(peer->sock))) {
> +				peer->sock->sk_wmem_queued += skb->truesize;
> +				sk_mem_charge(peer->sock, skb->truesize);
> +				skb_queue_tail(&peer->rxqueue, skb);
> +				schedule_work(&peer->tx_work);
> +				break;
> +			}
> +		}
> +	/* Fall through and free skb otherwise */
> +	case SK_DROP:
> +	default:
> +		preempt_enable();
> +		kfree_skb(skb);

two preempt_enable() after single preempt_disable()?
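
The imbalance flagged here (the redirect case enables, then the failure path falls through into `SK_DROP` and enables a second time) can be modeled with a simple counter. A hedged sketch of the balanced shape only, not the actual kernel fix; `preempt_count` stands in for the kernel's preempt count and `redirect_ok` selects the path:

```c
#include <assert.h>

#define SK_DROP		0
#define SK_REDIRECT	1

static int preempt_count;

static void preempt_disable_(void) { preempt_count++; }
static void preempt_enable_(void)  { preempt_count--; }

/* Invariant: exactly one enable per disable on every exit path. */
static int do_verdict(int verdict, int redirect_ok)
{
	int delivered = 0;

	preempt_disable_();
	switch (verdict) {
	case SK_REDIRECT:
		preempt_enable_();
		if (redirect_ok) {
			delivered = 1;
			break;
		}
		goto drop;	/* already enabled: must skip the second enable */
	case SK_DROP:
	default:
		preempt_enable_();
drop:
		delivered = 0;
		break;
	}
	return delivered;
}
```

Running every path and asserting the count returns to zero is exactly the check a second `preempt_enable()` after the fall-through would fail.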

> +
> +static void smap_tx_work(struct work_struct *w)
> +{
> +	struct smap_psock *psock;
> +	struct sk_buff *skb;
> +	int rem, off, n;
> +
> +	psock = container_of(w, struct smap_psock, tx_work);
> +
> +	/* lock sock to avoid losing sk_socket at some point during loop */
> +	lock_sock(psock->sock);
> +	if (psock->save_skb) {
> +		skb = psock->save_skb;
> +		rem = psock->save_rem;
> +		off = psock->save_off;
> +		psock->save_skb = NULL;
> +		goto start;
> +	}
> +
> +	while ((skb = skb_dequeue(&psock->rxqueue))) {
> +		rem = skb->len;
> +		off = 0;
> +start:
> +		do {
> +			if (likely(psock->sock->sk_socket))
> +				n = skb_send_sock_locked(psock->sock,
> +							 skb, off, rem);

so this will be hot loop ?
Do you have perf report by any chance? Curious how it looks.

> +	/* reserve BPF programs early so can abort easily on failures */
> +	if (map_flags & BPF_SOCKMAP_STRPARSER) {

why have two 'flags' arguments and new helper just for this?
can normal update() be used and extra bits of flag there?

> -#define BPF_PROG_ATTACH_LAST_FIELD attach_flags
> +#define BPF_PROG_ATTACH_LAST_FIELD attach_bpf_fd2

> +	prog1 = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
> +	if (IS_ERR(prog1)) {
> +		fdput(f);
> +		return PTR_ERR(prog1);
> +	}
> +
> +	prog2 = bpf_prog_get_type(attr->attach_bpf_fd2, ptype);

could you add a comment to uapi on possible uses of this field
otherwise the name is not readable.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [net-next PATCH 07/10] bpf: add access to sock fields and pkt data from sk_skb programs
  2017-08-16  5:33 ` [net-next PATCH 07/10] bpf: add access to sock fields and pkt data from sk_skb programs John Fastabend
@ 2017-08-17  5:42   ` Alexei Starovoitov
  2017-08-17 12:40     ` Daniel Borkmann
  0 siblings, 1 reply; 31+ messages in thread
From: Alexei Starovoitov @ 2017-08-17  5:42 UTC (permalink / raw)
  To: John Fastabend, davem, daniel; +Cc: tgraf, netdev, tom

On 8/15/17 10:33 PM, John Fastabend wrote:
> +static int sk_skb_prologue(struct bpf_insn *insn_buf, bool direct_write,
> +			   const struct bpf_prog *prog)
> +{
> +	struct bpf_insn *insn = insn_buf;
> +
> +	if (!direct_write)
> +		return 0;
> +
> +	/* if (!skb->cloned)
> +	 *       goto start;
> +	 *
> +	 * (Fast-path, otherwise approximation that we might be
> +	 *  a clone, do the rest in helper.)
> +	 */

iirc we're doing something similar in other prologue generator?
can be consolidated?

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [net-next PATCH 07/10] bpf: add access to sock fields and pkt data from sk_skb programs
  2017-08-17  5:42   ` Alexei Starovoitov
@ 2017-08-17 12:40     ` Daniel Borkmann
  0 siblings, 0 replies; 31+ messages in thread
From: Daniel Borkmann @ 2017-08-17 12:40 UTC (permalink / raw)
  To: Alexei Starovoitov, John Fastabend, davem; +Cc: tgraf, netdev, tom

On 08/17/2017 07:42 AM, Alexei Starovoitov wrote:
> On 8/15/17 10:33 PM, John Fastabend wrote:
>> +static int sk_skb_prologue(struct bpf_insn *insn_buf, bool direct_write,
>> +               const struct bpf_prog *prog)
>> +{
>> +    struct bpf_insn *insn = insn_buf;
>> +
>> +    if (!direct_write)
>> +        return 0;
>> +
>> +    /* if (!skb->cloned)
>> +     *       goto start;
>> +     *
>> +     * (Fast-path, otherwise approximation that we might be
>> +     *  a clone, do the rest in helper.)
>> +     */
>
> iirc we're doing something similar in other prologue generator?
> can be consolidated?

Jup, with tc. I'll fix this one so the two can be consolidated.

Cheers,
Daniel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [net-next PATCH 06/10] bpf: sockmap with sk redirect support
  2017-08-17  5:40   ` Alexei Starovoitov
@ 2017-08-17 18:58     ` John Fastabend
  2017-08-17 22:28       ` Alexei Starovoitov
  0 siblings, 1 reply; 31+ messages in thread
From: John Fastabend @ 2017-08-17 18:58 UTC (permalink / raw)
  To: Alexei Starovoitov, davem, daniel; +Cc: tgraf, netdev, tom


[...]


>> +
>> +static void smap_tx_work(struct work_struct *w)
>> +{
>> +    struct smap_psock *psock;
>> +    struct sk_buff *skb;
>> +    int rem, off, n;
>> +
>> +    psock = container_of(w, struct smap_psock, tx_work);
>> +
>> +    /* lock sock to avoid losing sk_socket at some point during loop */
>> +    lock_sock(psock->sock);
>> +    if (psock->save_skb) {
>> +        skb = psock->save_skb;
>> +        rem = psock->save_rem;
>> +        off = psock->save_off;
>> +        psock->save_skb = NULL;
>> +        goto start;
>> +    }
>> +
>> +    while ((skb = skb_dequeue(&psock->rxqueue))) {
>> +        rem = skb->len;
>> +        off = 0;
>> +start:
>> +        do {
>> +            if (likely(psock->sock->sk_socket))
>> +                n = skb_send_sock_locked(psock->sock,
>> +                             skb, off, rem);
> 
> so this will be hot loop ?
> Do you have perf report by any chance? Curious how it looks.

I had some old ones but lets generate some fresh ones and we can go over
them. I'll post later today/tomorrow.

> 
>> +    /* reserve BPF programs early so can abort easily on failures */
>> +    if (map_flags & BPF_SOCKMAP_STRPARSER) {
> 
> why have two 'flags' arguments and new helper just for this?
> can normal update() be used and extra bits of flag there?
> 

The new helper is needed regardless to handle consuming the skops ctx
pointer from programs attached to cgroups. This way we can attach sockets
in cgroups when they enter specified TCP states.

The map_flags arg was because I expect we may end up with a few more flags
in sockmap and thought it was reasonable to keep separate namespaces for
the two flag types, BPF_ and BPF_SOCKMAP_*. It does however have the one
issue that when doing the update via syscall the flags are not available.

If there is no objection to consuming some bits of the normal flags a small
patch could do that. I think we will need at least two more bits going forward
for additional features. I guess the map flags field is not pressed for
bit space yet, though.

>> -#define BPF_PROG_ATTACH_LAST_FIELD attach_flags
>> +#define BPF_PROG_ATTACH_LAST_FIELD attach_bpf_fd2
> 
>> +    prog1 = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
>> +    if (IS_ERR(prog1)) {
>> +        fdput(f);
>> +        return PTR_ERR(prog1);
>> +    }
>> +
>> +    prog2 = bpf_prog_get_type(attr->attach_bpf_fd2, ptype);
> 
> could you add a comment to uapi on possible uses of this field
> otherwise the name is not readable.

Yep no problem. I have a couple patches queued up for selftests and
automatically pulling in STREAM_PARSER, along with couple minor other
things so I'll push this update with those.

Thanks,
John
 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [net-next PATCH 06/10] bpf: sockmap with sk redirect support
  2017-08-17 18:58     ` John Fastabend
@ 2017-08-17 22:28       ` Alexei Starovoitov
  2017-08-18  7:35         ` John Fastabend
  0 siblings, 1 reply; 31+ messages in thread
From: Alexei Starovoitov @ 2017-08-17 22:28 UTC (permalink / raw)
  To: John Fastabend, davem, daniel; +Cc: tgraf, netdev, tom

On 8/17/17 11:58 AM, John Fastabend wrote:
>>> +    /* reserve BPF programs early so can abort easily on failures */
>>> +    if (map_flags & BPF_SOCKMAP_STRPARSER) {
>> why have two 'flags' arguments and new helper just for this?
>> can normal update() be used and extra bits of flag there?
>>
> The new helper is needed regardless to handle consuming the skops ctx
> pointer from programs attached to cgroups. This way we can attach sockets
> in cgroups when they enter specified TCP states.
>
> The map_flags arg was because I expect we may end up with a few more flags
> in sockmap and thought it was reasonable to keep separate namespaces for
> the two flag types, BPF_ and BPF_SOCKMAP_*. It does however have the one
> issue that when doing the update via syscall the flags are not available.
>
> If there is no objection to consuming some bits of the normal flags a small
> patch could do that. I think we will need at least two more bits going forward
> for additional features. I guess the map flags field is not pressed for
> bit space yet, though.
>

hmm. looking at the patch further. I'm not excited with this new api.
Why this BPF_SOCKMAP_STRPARSER flag is needed at all?
The sample code doesn't use it. What could be the use case?
Some future proofing? but the code seems to work properly only in
strparser mode... like 'verdict' bpf prog is called only
from strparser callback... and no other way to call it...
and normal map_update_elem() implies BPF_SOCKMAP_STRPARSER too...
I don't see detach api either..

Why BPF_CGROUP_SMAP_INGRESS has 'cgroup' suffix in there?
It doesn't deal with cgroups. It attaches a pair of prog_fds to sockmap.
I guess it's ok-ish to use BPF_PROG_ATTACH syscall command for that,
but using BPF_CGROUP_SMAP_INGRESS as 'attach type' is very confusing.
Why couldn't you use multiple normal bpf_map_update() commands to
store two prog_fds into sockmap?
You could have reserved two special numerical key values 0xdead and
0xbeef or -1/-2 for these two progs?
If I read it correctly this new prog_attach sub-command doesn't attach
to any event, it only stores two hidden bpf programs inside sockmap.

Later proper prog_attach to actual cgroup is done with normal cgroup_fd
and BPF_CGROUP_SOCK_OPS prog type which looks completely independent
of sockmap, no?
It seems right now there is psock <-> one sockmap restriction
which looks implementation related. If we ever want to remove that
restriction such uapi will prevent us from doing it, since it's
doing too much stuff under the cover.

i like the sock to sock redirect concept of the patch, but uapi needs
to be improved before it goes to released kernel.

How about we break it down into smaller chunks of work and
expose what's happening underneath as explicit user driven actions?
Like:
- map_fd = create_map(BPF_MAP_TYPE_SOCKMAP) will create an empty map
- add new explicit helper to create psock in given map_fd or
   use prog_attach() cmd to attach to socket instead of sockmap ?
- two map_update_elem(map_fd, -1, strparser_prog_fd)
   map_update_elem(map_fd, -2, verdict_prog_fd)
   will store the progs in there which are currently hidden.
   map_delete_elem(-1) and (-2) would clear them.
   Alternatively pass both FDs at once as 8-byte key into single
   map_update() cmd
   or create new psock object that will keep strparser and verdict progs?
- keep prog_attach(BPF_CGROUP_SOCK_OPS) as-is for
   BPF_CGROUP_SOCK_OPS prog type which can call new helper
   bpf_foo(ctx, ...) that will convert ctx socket into strparser mode
   without making association to sockmap.

My main objection to the api is that hidden psock object makes
the api confusing to use and I'm struggling to see how it will
be extensible. Like what if we want more than two strparser+verdict
progs in sockmap? or multiple sockmaps out of the same verdict prog?
I'm also struggling to wrap my head how prog_attach cmd attaches
to a map and hidden creation of strparser in a socket.
It seems right now psock == one rx socket that works in strparser
mode while sockmap is secondary set of tx sockets to redirect to.
If so, psock and sockmap should be independent in uapi.
Like prog_attach would attach strparser and verdict programs to
a socket switching it to strparser mode.
If SK_REDIRECT is not returned it would be dropping skbs (like now),
if redirect requested, the smap_do_verdict() will pick the next
socket from sockmap and there will be no rigid connection
between sockmap and psock and we can use multiple sockmaps
from the same verdict program. smap_do_verdict() only needs
to know peer socket to redirect to.
Thoughts?

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [net-next PATCH 06/10] bpf: sockmap with sk redirect support
  2017-08-17 22:28       ` Alexei Starovoitov
@ 2017-08-18  7:35         ` John Fastabend
  2017-08-18 18:32           ` Alexei Starovoitov
  0 siblings, 1 reply; 31+ messages in thread
From: John Fastabend @ 2017-08-18  7:35 UTC (permalink / raw)
  To: Alexei Starovoitov, davem, daniel; +Cc: tgraf, netdev, tom

On 08/17/2017 03:28 PM, Alexei Starovoitov wrote:
> On 8/17/17 11:58 AM, John Fastabend wrote:
>>>> +    /* reserve BPF programs early so can abort easily on failures */
>>>> +    if (map_flags & BPF_SOCKMAP_STRPARSER) {
>>> why have two 'flags' arguments and new helper just for this?
>>> can normal update() be used and extra bits of flag there?
>>>
>> The new helper is needed regardless to handle consuming the skops ctx
>> pointer from programs attached to cgroups. This way we can attach sockets
>> in cgroups when they enter specified TCP states.
>>
>> The map_flags arg was because I expect we may end up with a few more flags
>> in sockmap and thought it was reasonable to keep separate namespaces for
>> the two flag types, BPF_ and BPF_SOCKMAP_*. It does however have the one
>> issue that when doing the update via syscall the flags are not available.
>>
>> If there is no objection to consuming some bits of the normal flags a small
>> patch could do that. I think we will need at least two more bits going forward
>> for additional features. I guess the map flags field is not pressed for
>> bit space yet, though.
>>
> 
> hmm. looking at the patch further. I'm not excited with this new api.

OK first thanks for looking it over.

> Why this BPF_SOCKMAP_STRPARSER flag is needed at all?
> The sample code doesn't use it. What could be the use case?

The use case is to allow 'redirect' to a socket but without having the
receive side use strparser/verdict programs. If the flag is not set we
won't run the strparser.

> Some future proofing? but the code seems to work properly only in
> strparser mode... like 'verdict' bpf prog is called only
> from strparser callback... and no other way to call it...
> and normal map_update_elem() implies BPF_SOCKMAP_STRPARSER too...
> I don't see detach api either..

The detach is not yet implemented, I'll add it shortly.

> Why BPF_CGROUP_SMAP_INGRESS has 'cgroup' suffix in there?
> It doesn't deal with cgroups. It attaches a pair of prog_fds to sockmap.
> I guess it's ok-ish to use BPF_PROG_ATTACH syscall command for that,
> but using BPF_CGROUP_SMAP_INGRESS as 'attach type' is very confusing.

Point taken, bad naming scheme but easy enough to fix.

> Why couldn't you use multiple normal bpf_map_update() commands to
> store two prog_fds into sockmap?
> You could have reserved two special numerical key values 0xdead and
> 0xbeef or -1/-2 for these two progs?
> If I read it correctly this new prog_attach sub-command doesn't attach
> to any event, it only stores two hidden bpf programs inside sockmap.
> 

see proposal below.

> Later proper prog_attach to actual cgroup is done with normal cgroup_fd
> and BPF_CGROUP_SOCK_OPS prog type which looks completely independent
> of sockmap, no?

Yes agreed. Although I expect sockmap will primarily be used with
BPF_CGROUP_SOCK_OPS with population of a sock map done through the
SOCK_OPS helpers. But, I agree sockmap fundamentally can stand on its
own without cgroups.

> It seems right now there is psock <-> one sockmap restriction
> which looks implementation related. If we ever want to remove that
> restriction such uapi will prevent us from doing it, since it's
> doing too much stuff under the cover.

The psock <-> sockmap binding is a result of the programs being
attached to the sockmap and not to the sock object. I think the
proposal below resolves this.

> 
> i like the sock to sock redirect concept of the patch, but uapi needs
> to be improved before it goes to released kernel.
> 
> How about we break it down into smaller chunks of work and
> expose what's happening underneath as explicit user driven actions?
> Like:
> - map_fd = create_map(BPF_MAP_TYPE_SOCKMAP) will create an empty map
> - add new explicit helper to create psock in given map_fd or
>   use prog_attach() cmd to attach to socket instead of sockmap ?
> - two map_update_elem(map_fd, -1, strparser_prog_fd)
>   map_update_elem(map_fd, -2, verdict_prog_fd)
>   will store the progs in there which are currently hidden.
>   map_delete_elem(-1) and (-2) would clear them.
>   Alternatively pass both FDs at once as 8-byte key into single
>   map_update() cmd
>   or create new psock object that will keep strparser and verdict progs?
> - keep prog_attach(BPF_CGROUP_SOCK_OPS) as-is for
>   BPF_CGROUP_SOCK_OPS prog type which can call new helper
>   bpf_foo(ctx, ...) that will convert ctx socket into strparser mode
>   without making association to sockmap.
> 
> My main objection to the api is that hidden psock object makes
> the api confusing to use and I'm struggling to see how it will
> be extensible. Like what if we want more than two strparser+verdict
> progs in sockmap? or multiple sockmaps out of the same verdict prog?
> I'm also struggling to wrap my head how prog_attach cmd attaches
> to a map and hidden creation of strparser in a socket.
> It seems right now psock == one rx socket that works in strparser
> mode while sockmap is secondary set of tx sockets to redirect to.
> If so, psock and sockmap should be independent in uapi.
> Like prog_attach would attach strparser and verdict programs to
> a socket switching it to strparser mode.
> If SK_REDIRECT is not returned it would be dropping skbs (like now),
> if redirect requested, the smap_do_verdict() will pick the next
> socket from sockmap and there will be no rigid connection
> between sockmap and psock and we can use multiple sockmaps
> from the same verdict program. smap_do_verdict() only needs
> to know peer socket to redirect to.
> Thoughts?
> 

OK I think there are a couple things we can do here.

From an API perspective having all socks in a sockmap inherit the same
BPF programs is useful when working with cgroups. It keeps things consistent
and is pretty effective for applying policy to cgroup sockets. But,
in some cases it breaks down a bit and that is where the map_flags
and BPF_SOCKMAP_STRPARSER entered the picture. After this discussion
I think we can clean this up. Here is my proposal, let me know what you
think.

First I would like to continue allowing socks to inherit BPF programs
from sock map if users set this up. To clean it up and make it extensible
though,

  - instead of doing the attach_fd2 which would break quickly if we need
    fd(3,4,...) use two separate attach types,

         BPF_SMAP_STREAM_PARSER
         BPF_SMAP_STREAM_VERDICT

    the target fd is the map fd just as before.

    This allows us to easily extend as needed by adding another type and
    the map space is a u32 so we have plenty of room for extensions.

 - implement the detach for the above to remove the programs

Next let's just remove the map_flags BPF_SOCKMAP_STRPARSER. The UAPI is
simplified this way and the inheritance rule is clear. If BPF programs
are attached to the map they are inherited. If there is no BPF program
attached the socks do not use strparser/verdict logic and are purely
for redirect actions.

A sock may be in multiple maps but can only inherit a single BPF
stream/verdict program. Otherwise we would have no way to "know"
which stream parser to run.

Future extensions could provide an API for doing per sock attach operations
and I see no reason they would not be compatible. By adding two more
attach types,

        BPF_SOCK_STREAM_PARSER
        BPF_SOCK_STREAM_VERDICT

we can provide specific sock BPF programs. With verifier work we could
even make bpf helpers

        bpf_sock_prog_attach(skops, prog, type, flags)
        bpf_sock_map_attach(sockmap, key, prog, type, flags)

I think both this and the above work together nicely; the code can also
support this with some additional work. To summarize the API then with
above changes,

 syscall:

  bpf_create_map(BPF_MAP_TYPE_SOCKMAP, .... )
  bpf_prog_attach(verdict_prog, map_fd, BPF_SMAP_STREAM_VERDICT, 0);
  bpf_prog_attach(parse_prog, map_fd, BPF_SMAP_STREAM_PARSER, 0);
  bpf_map_update_elem(map_fd, key, sock_fd, BPF_ANY)
  bpf_map_delete_elem(map_fd, key)

 helpers:
  to insert sock from sock ops program
      bpf_sock_map_update(skops, map, key, flags);
  to redirect skb to a sock in a sockmap
      bpf_sk_redirect_map(map, key, flags)

 future work:
  bpf_prog_attach(verdict_prog, map_fd, BPF_SOCK_STREAM_VERDICT, 0)
  bpf_prog_attach(parse_prog, map_fd, BPF_SOCK_STREAM_PARSER, 0)

How does this look? I think it will be both extensible and very usable
now.

Thanks,
John

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [net-next PATCH 06/10] bpf: sockmap with sk redirect support
  2017-08-18  7:35         ` John Fastabend
@ 2017-08-18 18:32           ` Alexei Starovoitov
  2017-08-19  3:30             ` John Fastabend
  0 siblings, 1 reply; 31+ messages in thread
From: Alexei Starovoitov @ 2017-08-18 18:32 UTC (permalink / raw)
  To: John Fastabend, davem, daniel; +Cc: tgraf, netdev, tom

On 8/18/17 12:35 AM, John Fastabend wrote:
> From an API perspective having all socks in a sockmap inherit the same
> BPF programs is useful when working with cgroups. It keeps things consistent
> and is pretty effective for applying policy to cgroup sockets.

agree. it's all clear, see modified proposal below.

> But,
> in some cases it breaks down a bit and that is where the map_flags
> and BPF_SOCKMAP_STRPARSER entered the picture. After this discussion
> I think we can clean this up. Here is my proposal, let me know what you
> think.
>
> First I would like to continue allowing socks to inherit BPF programs
> from sock map if users set this up. To clean it up and make it extensible
> though,
>
>   - instead of doing the attach_fd2 which would break quickly if we need
>     fd(3,4,...) use two separate attach types,
>
>          BPF_SMAP_STREAM_PARSER
>          BPF_SMAP_STREAM_VERDICT
>
>     the target fd is the map fd just as before.
>
>     This allows us to easily extend as needed by adding another type and
>     the map space is a u32 so we have plenty of room for extensions.
>
>  - implement the detach for the above to remove the programs
>
> Next let's just remove the map_flags BPF_SOCKMAP_STRPARSER. The UAPI is
> simplified this way and the inheritance rule is clear. If BPF programs
> are attached to the map they are inherited. If there is no BPF program
> attached the socks do not use strparser/verdict logic and are purely
> for redirect actions.
>
> A sock may be in multiple maps but can only inherit a single BPF
> stream/verdict program. Otherwise we would have no way to "know"
> which stream parser to run.
>
> Future extensions could provide an API for doing per sock attach operations
> and I see no reason they would not be compatible. By adding two more
> attach types,
>
>         BPF_SOCK_STREAM_PARSER
>         BPF_SOCK_STREAM_VERDICT
>
> we can provide specific sock BPF programs. With verifier work we could
> even make bpf helpers
>
>         bpf_sock_prog_attach(skops, prog, type, flags)
>         bpf_sock_map_attach(sockmap, key, prog, type, flags)
>
> I think both this and the above work together nicely; the code can also
> support this with some additional work. To summarize the API then with
> above changes,
>
>  syscall:
>
>   bpf_create_map(BPF_MAP_TYPE_SOCKMAP, .... )
>   bpf_prog_attach(verdict_prog, map_fd, BPF_SMAP_STREAM_VERDICT, 0);
>   bpf_prog_attach(parse_prog, map_fd, BPF_SMAP_STREAM_PARSER, 0);
>   bpf_map_update_elem(map_fd, key, sock_fd, BPF_ANY)
>   bpf_map_delete_elem(map_fd, key)
>
>  helpers:
>   to insert sock from sock ops program
>       bpf_sock_map_update(skops, map, key, flags);
>   to redirect skb to a sock in a sockmap
>       bpf_sk_redirect_map(map, key, flags)
>
>  future work:
>   bpf_prog_attach(verdict_prog, map_fd, BPF_SOCK_STREAM_VERDICT, 0)
>   bpf_prog_attach(parse_prog, map_fd, BPF_SOCK_STREAM_PARSER, 0)
>
> How does this look? I think it will be both extensible and very usable
> now.

Above sounds much better than the present situation.
Can we take it even further and split psock from sockmap?
My understanding that psock->key is there only because you tied
psock with the map and using map as a storage for the rx socket.
imo separating rx and tx sockets will make it cleaner.
Like we can have new syscall cmd that creates psock that holds
strparser, verdict and potentially other programs.
Later sock ops program will use a helper:
bpf_psock_update(skops, psock_obj_handle, flags);
to assign single skops socket into this psock object.
The programs (strparser, verdict) will be applied to this skops socket,
so your inheritance requirement is satisfied.
And use sockmap only for TX sockets. Either user space via syscall
will store them in there or sockops program will store them into the map
via bpf_sock_map_update(skops, sockmap, key, flags); helper.
Later the verdict program will use
bpf_sk_redirect_map(sockmap, key, flags);
and for the program author no need to worry about 'type' of socket
in the sockmap. All sockets in there are TX sockets to redirect to.
And the same verdict program can use multiple sockmaps.
Similarly user space can create multiple psock objects with
same strparser+verdict programs or different and sockops prog
can pick and choose which psock to use to assign RX socket into.

Another alternative:
Instead of new psock object to store single socket (like current
implementation does), we can do two types of sockmap.
One for a set of RX sockets. All of them will have the same
strparser+verdict progs and psock with skbuff queue will be part
of this sockmap type.
And another sockmap type for TX sockets that don't have skbuff queues
at all and can only be used to redirect the RX socket into.
So bpf_rx_sock_map_update() helper will be used only on RX_SOCKMAP map
and bpf_tx_sock_map_update() helper will be used only on TX_SOCKMAP,
while bpf_sk_redirect_map() can only be used on TX_SOCKMAP.

Or you have cases when two RX sockets need to redirect into each
other and in both cases strparser+verdict need to run?
In such case we need to allow bpf_sk_redirect_map() to use on
RX_SOCKMAP map as well,
but looking at current implementation you only allow one psock per map,
so two sockets forwarding to each other cannot work due to only one queue.
Am I missing anything from what you want to achieve?
Thoughts?

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [net-next PATCH 06/10] bpf: sockmap with sk redirect support
  2017-08-18 18:32           ` Alexei Starovoitov
@ 2017-08-19  3:30             ` John Fastabend
  2017-08-19  4:50               ` Alexei Starovoitov
  0 siblings, 1 reply; 31+ messages in thread
From: John Fastabend @ 2017-08-19  3:30 UTC (permalink / raw)
  To: Alexei Starovoitov, davem, daniel; +Cc: tgraf, netdev, tom

[...] (trimmed email leaving proposal - 1 summary)

>>
>>  syscall:
>>
>>   bpf_create_map(BPF_MAP_TYPE_SOCKMAP, .... )
>>   bpf_prog_attach(verdict_prog, map_fd, BPF_SMAP_STREAM_VERDICT, 0);
>>   bpf_prog_attach(parse_prog, map_fd, BPF_SMAP_STREAM_PARSER, 0);
>>   bpf_map_update_elem(map_fd, key, sock_fd, BPF_ANY)
>>   bpf_map_delete_elem(map_fd, key)
>>
>>  helpers:
>>   to insert sock from sock ops program
>>       bpf_sock_map_update(skops, map, key, flags);
>>   to redirect skb to a sock in a sockmap
>>       bpf_sk_redirect_map(map, key, flags)
>>
>>  future work:
>>   bpf_prog_attach(verdict_prog, map_fd, BPF_SOCK_STREAM_VERDICT, 0)
>>   bpf_prog_attach(parse_prog, map_fd, BPF_SOCK_STREAM_PARSER, 0)
>>
>> How does this look? I think it will be both extensible and very usable
>> now.
> 
> Above sounds much better than the present situation.
> Can we take it even further and split psock from sockmap?

So psock data structure itself is almost entirely split at this
point anyways. The remaining two pieces are back pointers to the
map and a key so we can remove the entry when needed. The reason
is we need to remove the map entry when the socket is closed.

We use the sock insertion into the map as the trigger to create
the psock.

> My understanding that psock->key is there only because you tied
> psock with the map and using map as a storage for the rx socket.

Almost. To clarify the psock only ever uses the key/map entries
to remove itself on a TCP state change that would close the sock.
> imo separating rx and tx sockets will make it cleaner.
> Like we can have new syscall cmd that creates psock that holds
> strparser, verdict and potentially other programs.
> Later sock ops program will use a helper:
> bpf_psock_update(skops, psock_obj_handle, flags);
> to assign single skops socket into this psock object.
> The programs (strparser, verdict) will be applied to this skops socket,
> so your inheritance requirement is satisfied.
> And use sockmap only for TX sockets. Either user space via syscall
> will store them in there or sockops program will store them into the map
> via bpf_sock_map_update(skops, sockmap, key, flags); helper.
> Later the verdict program will use
> bpf_sk_redirect_map(sockmap, key, flags);
> and for the program author no need to worry about 'type' of socket
> in the sockmap. All sockets in there are TX sockets to redirect to.
> And the same verdict program can use multiple sockmaps.
> Similarly user space can create multiple psock objects with
> same strparser+verdict programs or different and sockops prog
> can pick and choose which psock to use to assign RX socket into.
> 

I prefer the alternative below; it seems a bit cleaner to me. I think
a very similar programming flow to the above can be achieved using
the primitives below.

> Another alternative:
> Instead of new psock object to store single socket (like current
> implementation does), we can do two types of sockmap.
> One for a set of RX sockets. All of them will have the same
> strparser+verdict progs and psock with skbuff queue will be part
> of this sockmap type.
> And another sockmap type for TX sockets that don't have skbuff queues

Clarification the skbuff queue in the psock data structure,

	struct sk_buff_head rxqueue;

is actually attached to the TX socket and drained through the workqueue.
The naming is perhaps unfortunate; it made sense to me because the
sending sock is receiving skbs from multiple socks.

> at all and can only be used to redirect the RX socket into.
> So bpf_rx_sock_map_update() helper will be used only on RX_SOCKMAP map
> and bpf_tx_sock_map_update() helper will be used only on TX_SOCKMAP,
> while bpf_sk_redirect_map() can only be used on TX_SOCKMAP.
> 

So this is really close to what I proposed above. For a TX_SOCKMAP
simply do not attach any programs,

   bpf_create_map(BPF_MAP_TYPE_SOCKMAP, .... )
   [...]

For an RX_SOCKMAP,

   bpf_create_map(BPF_MAP_TYPE_SOCKMAP, .... )
   bpf_prog_attach(verdict_prog, map_fd, BPF_SMAP_STREAM_VERDICT, 0);
   bpf_prog_attach(parse_prog, map_fd, BPF_SMAP_STREAM_PARSER, 0);   

With the new attach type (compared to the fd2 thing before) we can easily
extend maps to contain other program types as needed. So in the future
we might have TX_SOCKMAP, RX_SOCKMAP, FOO_SOCKMAP, ...

I don't see the need to have the API enforce the map type via update
specifiers bpf_{rx|tx}_sock_map_update. The programmer should "know"
the type by virtue of the programs attached. This is more flexible
as well because it allows a map to be TX only, RX only or TX/RX.

With this proposal we can relax the restriction where a sock can only
be in a single map and even allow a sock to be in the same map multiple
times. The limitation we do have to enforce is disallowing a sock in
a map with different BPF_SMAP_STREAM_* programs. But I think this
should be clear to the programmer (with good tracing functions and
error codes).

Slight aside: but by creating map size of 1 we have an object that
contains programs and later we can attach a sock to it, looks like
the following,

      create_map(BPF_MAP_TYPE_SOCKMAP,...)
      bpf_prog_attach(...) 
      [...]
      bpf_map_update_elem(map_fd, key, sock_fd, flags)

I think this is very close to your first approach where you suggested
a program container object.

> Or you have cases when two RX sockets need to redirect into each
> other and in both cases strparser+verdict need to run?

If we don't do rx/tx restrictions and use my suggestion here we
don't have this limitation. Or, because we allow socks in multiple
maps now, the user can simply put the sockets in different maps.

> In such case we need to allow bpf_sk_redirect_map() to use on
> RX_SOCKMAP map as well,
> but looking at current implementation you only allow one psock per map,
> so two sockets forwarding to each other cannot work due to only one queue.
> Am I missing anything from what you want to achieve?

I don't think so. But let's get rid of the one-psock-per-map limit; I took
a shot at relaxing it today and got it working with a refcount on the
psock.

Also, reorganizing the psock structure into clear tx_psock, rx_psock, and
general_psock sections will probably help readers.

> Thoughts?
> 

What do you think of my counter-proposal? I started coding it up and,
other than pushing code snippets around, it seems to work out nicely
with the existing code base. I think it is a real improvement.

Thanks,
John

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [net-next PATCH 06/10] bpf: sockmap with sk redirect support
  2017-08-19  3:30             ` John Fastabend
@ 2017-08-19  4:50               ` Alexei Starovoitov
  2017-08-19 20:52                 ` John Fastabend
  0 siblings, 1 reply; 31+ messages in thread
From: Alexei Starovoitov @ 2017-08-19  4:50 UTC (permalink / raw)
  To: John Fastabend, davem, daniel; +Cc: tgraf, netdev, tom

On 8/18/17 8:30 PM, John Fastabend wrote:
> So this is really close to what I proposed above. For a TX_SOCKMAP
> simply do not attach any programs,
>
>    bpf_create_map(BPF_MAP_TYPE_SOCKMAP, .... )
>    [...]
>
> For an RX_SOCKMAP,
>
>    bpf_create_map(BPF_MAP_TYPE_SOCKMAP, .... )
>    bpf_prog_attach(verdict_prog, map_fd, BPF_SMAP_STREAM_VERDICT, 0);
>    bpf_prog_attach(parse_prog, map_fd, BPF_SMAP_STREAM_PARSER, 0);
>
> With the new attach type (compared to the fd2 thing before) we can easily
> extend maps to contain other program types as needed. So in the future
> we might have TX_SOCKMAP, RX_SOCKMAP, FOO_SOCKMAP, ...

agree. that sounds like a good generalization.

> I don't see the need to have the API enforce the map type via update
> specifiers bpf_{rx|tx}_sock_map_update. The programmer should "know"
> the type by virtue of the programs attached. This is more flexible
> as well because it allows a map to be TX only, RX only or TX/RX.

makes sense. good point.

> With this proposal we can relax the restriction where a sock can only
> be in a single map and even allow a sock to be in the same map multiple
> times. The limitation we do have to enforce is disallowing a sock in
> maps with different BPF_SMAP_STREAM_* programs. But I think this
> should be clear to the programmer (with good tracing functions and
> error codes).
>
> Slight aside: but by creating map size of 1 we have an object that
> contains programs and later we can attach a sock to it, looks like
> the following,
>
>       create_map(BPF_MAP_TYPE_SOCKMAP,...)
>       bpf_prog_attach(...)
>       [...]
>       bpf_map_update_elem(map_fd, &key, &sock_fd, flags)
>
> I think this is very close to your first approach where you suggested
> a program container object.

yep.

>> Or you have cases when two RX sockets need to redirect into each
>> other and in both cases strparser+verdict need to run?
> If we don't do rx, tx restrictions and use my suggestion here we
> don't have this limitation. OR because we allow socks in multiple
> maps now the user can simply put the sockets in different maps.

agree. good point as well.

>> In such case we need to allow bpf_sk_redirect_map() to use on
>> RX_SOCKMAP map as well,
>> but looking at current implementation you only allow one psock per map,
>> so two sockets forwarding to each other cannot work due to only one queue.
>> Am I missing anything from what you want to achieve?
> I don't think so. But lets get rid of the one psock per map, I took a shot
> at relaxing that today and was able to get it with a refcount on the psock
> which seems to work OK.

+1

> Also reorganizing the psock structure into clear sections tx_psock, rx_psock,
> general_psock will probably help readers.

nice. thanks!

>> Thoughts?
>>
> What do you think of my counter proposal I started coding it up and it
> actually (other than pushing code snippets around) seems to work out
> nicely with the existing code base. I think it is really a nice improvement.

ok. I think we're mostly on the same page and patches will
either bring us to full agreement or show where we disagree :)
To clarify, I think the current code base is pretty good.
I'm only arguing to fix up the rough spots of the uapi to make
sure we don't corner ourselves with future extensions that I feel
will inevitably follow.
The feature itself is quite important and I feel a bit sad that it
landed without enough due diligence. The RFC patches didn't get
much attention and I didn't have time until now to look into them
in depth.


* Re: [net-next PATCH 06/10] bpf: sockmap with sk redirect support
  2017-08-19  4:50               ` Alexei Starovoitov
@ 2017-08-19 20:52                 ` John Fastabend
  0 siblings, 0 replies; 31+ messages in thread
From: John Fastabend @ 2017-08-19 20:52 UTC (permalink / raw)
  To: Alexei Starovoitov, davem, daniel; +Cc: tgraf, netdev, tom

On 08/18/2017 09:50 PM, Alexei Starovoitov wrote:
> On 8/18/17 8:30 PM, John Fastabend wrote:
>> So this is really close to what I proposed above. For a TX_SOCKMAP
>> simply do not attach any programs,
>>
>>    bpf_create_map(BPF_MAP_TYPE_SOCKMAP, .... )
>>    [...]
>>
>> For an RX_SOCKMAP,
>>
>>    bpf_create_map(BPF_MAP_TYPE_SOCKMAP, .... )
>>    bpf_prog_attach(verdict_prog, map_fd, BPF_SMAP_STREAM_VERDICT, 0);
>>    bpf_prog_attach(parse_prog, map_fd, BPF_SMAP_STREAM_PARSER, 0);
>>
>> With the new attach type (compared to the fd2 thing before) we can easily
>> extend maps to contain other program types as needed. So in the future
>> we might have TX_SOCKMAP, RX_SOCKMAP, FOO_SOCKMAP, ...
> 
> agree. that sounds as good generalization.
> 
>> I don't see the need to have the API enforce the map type via update
>> specifiers bpf_{rx|tx}_sock_map_update. The programmer should "know"
>> the type by virtue of the programs attached. This is more flexible
>> as well because it allows a map to be TX only, RX only or TX/RX.
> 
> makes sense. good point.
> 
>> With this proposal we can relax the restriction where a sock can only
>> be in a single map and even allow a sock to be in the same map multiple
>> times. The limitation we do have to enforce is disallowing a sock in
>> maps with different BPF_SMAP_STREAM_* programs. But I think this
>> should be clear to the programmer (with good tracing functions and
>> error codes).
>>
>> Slight aside: but by creating map size of 1 we have an object that
>> contains programs and later we can attach a sock to it, looks like
>> the following,
>>
>>       create_map(BPF_MAP_TYPE_SOCKMAP,...)
>>       bpf_prog_attach(...)
>>       [...]
>>       bpf_map_update_elem(map_fd, &key, &sock_fd, flags)
>>
>> I think this is very close to your first approach where you suggested
>> a program container object.
> 
> yep.
> 
>>> Or you have cases when two RX sockets need to redirect into each
>>> other and in both cases strparser+verdict need to run?
>> If we don't do rx, tx restrictions and use my suggestion here we
>> don't have this limitation. OR because we allow socks in multiple
>> maps now the user can simply put the sockets in different maps.
> 
> agree. good point as well.
> 
>>> In such case we need to allow bpf_sk_redirect_map() to use on
>>> RX_SOCKMAP map as well,
>>> but looking at current implementation you only allow one psock per map,
>>> so two sockets forwarding to each other cannot work due to only one queue.
>>> Am I missing anything from what you want to achieve?
>> I don't think so. But lets get rid of the one psock per map, I took a shot
>> at relaxing that today and was able to get it with a refcount on the psock
>> which seems to work OK.
> 
> +1
> 
>> Also reorganizing the psock structure into clear sections tx_psock, rx_psock,
>> general_psock will probably help readers.
> 
> nice. thanks!
> 
>>> Thoughts?
>>>
>> What do you think of my counter proposal I started coding it up and it
>> actually (other than pushing code snippets around) seems to work out
>> nicely with the existing code base. I think it is really a nice improvement.
> 
> ok. I think we're mostly on the same page and patches will
> either bring us to the full agreement or show where we disagree :)

I'll work up the patches Monday/Tuesday and we should have plenty of time to
work out any kinks. The bit I did Friday makes me think the changes to support
this should be straightforward.

Thanks,
John


end of thread, other threads:[~2017-08-19 20:52 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-16  5:30 [net-next PATCH 00/10] BPF: sockmap and sk redirect support John Fastabend
2017-08-16  5:30 ` [net-next PATCH 01/10] net: early init support for strparser John Fastabend
2017-08-16  5:31 ` [net-next PATCH 02/10] net: add sendmsg_locked and sendpage_locked to af_inet6 John Fastabend
2017-08-16  5:31 ` [net-next PATCH 03/10] net: fixes for skb_send_sock John Fastabend
2017-08-16  5:31 ` [net-next PATCH 04/10] bpf: introduce new program type for skbs on sockets John Fastabend
2017-08-16  5:32 ` [net-next PATCH 05/10] bpf: export bpf_prog_inc_not_zero John Fastabend
2017-08-16  5:32 ` [net-next PATCH 06/10] bpf: sockmap with sk redirect support John Fastabend
2017-08-17  5:40   ` Alexei Starovoitov
2017-08-17 18:58     ` John Fastabend
2017-08-17 22:28       ` Alexei Starovoitov
2017-08-18  7:35         ` John Fastabend
2017-08-18 18:32           ` Alexei Starovoitov
2017-08-19  3:30             ` John Fastabend
2017-08-19  4:50               ` Alexei Starovoitov
2017-08-19 20:52                 ` John Fastabend
2017-08-16  5:33 ` [net-next PATCH 07/10] bpf: add access to sock fields and pkt data from sk_skb programs John Fastabend
2017-08-17  5:42   ` Alexei Starovoitov
2017-08-17 12:40     ` Daniel Borkmann
2017-08-16  5:33 ` [net-next PATCH 08/10] bpf: sockmap sample program John Fastabend
2017-08-16  5:33 ` [net-next PATCH 09/10] bpf: selftests: add tests for new __sk_buff members John Fastabend
2017-08-16  5:34 ` [net-next PATCH 10/10] bpf: selftests add sockmap tests John Fastabend
2017-08-16 15:25 ` [net-next PATCH 00/10] BPF: sockmap and sk redirect support Daniel Borkmann
2017-08-16 18:28 ` David Miller
2017-08-16 18:35   ` David Miller
2017-08-16 19:06     ` John Fastabend
2017-08-16 19:13       ` David Miller
2017-08-16 19:17         ` Eric Dumazet
2017-08-16 19:34           ` John Fastabend
2017-08-16 21:22             ` David Miller
2017-08-16 21:35             ` David Ahern
2017-08-16 21:37               ` John Fastabend
