* [RFC PATCH net-next 0/7] eBPF support for cls_bpf
@ 2015-02-11  0:15 Daniel Borkmann
  2015-02-11  0:15 ` [PATCH net-next 1/7] ebpf: remove kernel test stubs Daniel Borkmann
                   ` (6 more replies)
  0 siblings, 7 replies; 13+ messages in thread
From: Daniel Borkmann @ 2015-02-11  0:15 UTC (permalink / raw)
  To: jiri; +Cc: ast, netdev, Daniel Borkmann

I'm sending this out only as an RFC, as the merge window is open
anyway and Dave's pull request is pending.

My plan is to get this work fully refined and to remove some
rough edges before net-next opens up again, and then to submit
it as non-RFC.

So for the time being, similarly to the recent OVS eBPF action
patchset, this may serve as a discussion ground, also with regard
to the BPF tutorial at netdev01 [1].

I presume that on top of this, e.g. Jiri might want to follow up
with act_bpf; I'm also open/happy to contribute to that.

The series starts with a couple of cleanups and with making the
prog type infrastructure pluggable in eBPF. Most interesting is
probably the last patch, which adds the actual support, together
with the iproute2 bits from the link below.

With regard to accessing fields from the context (here: skb), I
leave that for future work, just as we currently do in socket
filters, also in view of the currently ongoing ABI discussion on
the tracing side. Nevertheless, the state with these patches
still allows interesting functionality to be implemented as
complex classifiers, e.g. a fully fledged C-like flow dissector
as shown in the bpf samples directory and the like.

iproute2 part:

   http://git.breakpoint.cc/cgit/dborkman/iproute2.git/log/?h=ebpf

I have configured and built LLVM with: --enable-experimental-targets=BPF

  [1] https://www.netdev01.org/sessions/15

Thanks!

Daniel Borkmann (7):
  ebpf: remove kernel test stubs
  ebpf: constify various function pointer structs
  ebpf: check first for MAXINSNS in bpf_prog_load
  ebpf: extend program type/subsystem registration
  ebpf: export BPF_PSEUDO_MAP_FD to uapi
  ebpf: remove CONFIG_BPF_SYSCALL ifdefs in socket filter code
  cls_bpf: add initial eBPF support for programmable classifiers

 include/linux/bpf.h          |  46 +++++++---
 include/linux/filter.h       |   2 -
 include/uapi/linux/bpf.h     |   3 +
 include/uapi/linux/pkt_cls.h |   1 +
 kernel/bpf/Makefile          |   3 -
 kernel/bpf/arraymap.c        |   6 +-
 kernel/bpf/hashtab.c         |   6 +-
 kernel/bpf/helpers.c         |   9 +-
 kernel/bpf/syscall.c         |  96 +++++++++++++++------
 kernel/bpf/test_stub.c       |  78 -----------------
 kernel/bpf/verifier.c        |  28 ++++--
 net/core/filter.c            | 169 +++++++++++++++++-------------------
 net/sched/cls_bpf.c          | 200 ++++++++++++++++++++++++++++++++-----------
 samples/bpf/libbpf.h         |   4 +-
 samples/bpf/test_verifier.c  |   5 +-
 15 files changed, 374 insertions(+), 282 deletions(-)
 delete mode 100644 kernel/bpf/test_stub.c

-- 
1.9.3


* [PATCH net-next 1/7] ebpf: remove kernel test stubs
  2015-02-11  0:15 [RFC PATCH net-next 0/7] eBPF support for cls_bpf Daniel Borkmann
@ 2015-02-11  0:15 ` Daniel Borkmann
  2015-02-11  0:42   ` Alexei Starovoitov
  2015-02-11  0:15 ` [PATCH net-next 2/7] ebpf: constify various function pointer structs Daniel Borkmann
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 13+ messages in thread
From: Daniel Borkmann @ 2015-02-11  0:15 UTC (permalink / raw)
  To: jiri; +Cc: ast, netdev, Daniel Borkmann

Now that we have BPF_PROG_TYPE_SOCKET_FILTER up and running,
we can remove the test stubs which were added to get the
verifier suite up. We can just let the test cases probe under
the socket filter type instead. In the fill/spill test case, we
cannot (yet) access fields from the context (skb), but we may
adapt that test case in the future.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 kernel/bpf/Makefile         |  3 --
 kernel/bpf/test_stub.c      | 78 ---------------------------------------------
 samples/bpf/test_verifier.c |  5 +--
 3 files changed, 3 insertions(+), 83 deletions(-)
 delete mode 100644 kernel/bpf/test_stub.c

diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index a5ae60f..e6983be 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -1,5 +1,2 @@
 obj-y := core.o
 obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o hashtab.o arraymap.o helpers.o
-ifdef CONFIG_TEST_BPF
-obj-$(CONFIG_BPF_SYSCALL) += test_stub.o
-endif
diff --git a/kernel/bpf/test_stub.c b/kernel/bpf/test_stub.c
deleted file mode 100644
index 0ceae1e..0000000
--- a/kernel/bpf/test_stub.c
+++ /dev/null
@@ -1,78 +0,0 @@
-/* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of version 2 of the GNU General Public
- * License as published by the Free Software Foundation.
- */
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/slab.h>
-#include <linux/err.h>
-#include <linux/bpf.h>
-
-/* test stubs for BPF_MAP_TYPE_UNSPEC and for BPF_PROG_TYPE_UNSPEC
- * to be used by user space verifier testsuite
- */
-struct bpf_context {
-	u64 arg1;
-	u64 arg2;
-};
-
-static const struct bpf_func_proto *test_func_proto(enum bpf_func_id func_id)
-{
-	switch (func_id) {
-	case BPF_FUNC_map_lookup_elem:
-		return &bpf_map_lookup_elem_proto;
-	case BPF_FUNC_map_update_elem:
-		return &bpf_map_update_elem_proto;
-	case BPF_FUNC_map_delete_elem:
-		return &bpf_map_delete_elem_proto;
-	default:
-		return NULL;
-	}
-}
-
-static const struct bpf_context_access {
-	int size;
-	enum bpf_access_type type;
-} test_ctx_access[] = {
-	[offsetof(struct bpf_context, arg1)] = {
-		FIELD_SIZEOF(struct bpf_context, arg1),
-		BPF_READ
-	},
-	[offsetof(struct bpf_context, arg2)] = {
-		FIELD_SIZEOF(struct bpf_context, arg2),
-		BPF_READ
-	},
-};
-
-static bool test_is_valid_access(int off, int size, enum bpf_access_type type)
-{
-	const struct bpf_context_access *access;
-
-	if (off < 0 || off >= ARRAY_SIZE(test_ctx_access))
-		return false;
-
-	access = &test_ctx_access[off];
-	if (access->size == size && (access->type & type))
-		return true;
-
-	return false;
-}
-
-static struct bpf_verifier_ops test_ops = {
-	.get_func_proto = test_func_proto,
-	.is_valid_access = test_is_valid_access,
-};
-
-static struct bpf_prog_type_list tl_prog = {
-	.ops = &test_ops,
-	.type = BPF_PROG_TYPE_UNSPEC,
-};
-
-static int __init register_test_ops(void)
-{
-	bpf_register_prog_type(&tl_prog);
-	return 0;
-}
-late_initcall(register_test_ops);
diff --git a/samples/bpf/test_verifier.c b/samples/bpf/test_verifier.c
index b96175e..7b56b59 100644
--- a/samples/bpf/test_verifier.c
+++ b/samples/bpf/test_verifier.c
@@ -288,7 +288,8 @@ static struct bpf_test tests[] = {
 			BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8),
 
 			/* should be able to access R0 = *(R2 + 8) */
-			BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 8),
+			/* BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 8), */
+			BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 			BPF_EXIT_INSN(),
 		},
 		.result = ACCEPT,
@@ -687,7 +688,7 @@ static int test(void)
 		}
 		printf("#%d %s ", i, tests[i].descr);
 
-		prog_fd = bpf_prog_load(BPF_PROG_TYPE_UNSPEC, prog,
+		prog_fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, prog,
 					prog_len * sizeof(struct bpf_insn),
 					"GPL");
 
-- 
1.9.3


* [PATCH net-next 2/7] ebpf: constify various function pointer structs
  2015-02-11  0:15 [RFC PATCH net-next 0/7] eBPF support for cls_bpf Daniel Borkmann
  2015-02-11  0:15 ` [PATCH net-next 1/7] ebpf: remove kernel test stubs Daniel Borkmann
@ 2015-02-11  0:15 ` Daniel Borkmann
  2015-02-11  0:43   ` Alexei Starovoitov
  2015-02-11  0:15 ` [PATCH net-next 3/7] ebpf: check first for MAXINSNS in bpf_prog_load Daniel Borkmann
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 13+ messages in thread
From: Daniel Borkmann @ 2015-02-11  0:15 UTC (permalink / raw)
  To: jiri; +Cc: ast, netdev, Daniel Borkmann

We can move bpf_map_ops, bpf_verifier_ops and the other function
pointer structs into the read-only section, and bpf_map_type_list
and bpf_prog_type_list into the read-mostly section.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 include/linux/bpf.h   | 14 +++++++-------
 kernel/bpf/arraymap.c |  6 +++---
 kernel/bpf/hashtab.c  |  6 +++---
 kernel/bpf/helpers.c  |  6 +++---
 net/core/filter.c     |  6 +++---
 5 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index bbfceb7..7844686 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -32,13 +32,13 @@ struct bpf_map {
 	u32 key_size;
 	u32 value_size;
 	u32 max_entries;
-	struct bpf_map_ops *ops;
+	const struct bpf_map_ops *ops;
 	struct work_struct work;
 };
 
 struct bpf_map_type_list {
 	struct list_head list_node;
-	struct bpf_map_ops *ops;
+	const struct bpf_map_ops *ops;
 	enum bpf_map_type type;
 };
 
@@ -109,7 +109,7 @@ struct bpf_verifier_ops {
 
 struct bpf_prog_type_list {
 	struct list_head list_node;
-	struct bpf_verifier_ops *ops;
+	const struct bpf_verifier_ops *ops;
 	enum bpf_prog_type type;
 };
 
@@ -121,7 +121,7 @@ struct bpf_prog_aux {
 	atomic_t refcnt;
 	bool is_gpl_compatible;
 	enum bpf_prog_type prog_type;
-	struct bpf_verifier_ops *ops;
+	const struct bpf_verifier_ops *ops;
 	struct bpf_map **used_maps;
 	u32 used_map_cnt;
 	struct bpf_prog *prog;
@@ -138,8 +138,8 @@ struct bpf_prog *bpf_prog_get(u32 ufd);
 int bpf_check(struct bpf_prog *fp, union bpf_attr *attr);
 
 /* verifier prototypes for helper functions called from eBPF programs */
-extern struct bpf_func_proto bpf_map_lookup_elem_proto;
-extern struct bpf_func_proto bpf_map_update_elem_proto;
-extern struct bpf_func_proto bpf_map_delete_elem_proto;
+extern const struct bpf_func_proto bpf_map_lookup_elem_proto;
+extern const struct bpf_func_proto bpf_map_update_elem_proto;
+extern const struct bpf_func_proto bpf_map_delete_elem_proto;
 
 #endif /* _LINUX_BPF_H */
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 9eb4d8a..8a66165 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -134,7 +134,7 @@ static void array_map_free(struct bpf_map *map)
 	kvfree(array);
 }
 
-static struct bpf_map_ops array_ops = {
+static const struct bpf_map_ops array_ops = {
 	.map_alloc = array_map_alloc,
 	.map_free = array_map_free,
 	.map_get_next_key = array_map_get_next_key,
@@ -143,14 +143,14 @@ static struct bpf_map_ops array_ops = {
 	.map_delete_elem = array_map_delete_elem,
 };
 
-static struct bpf_map_type_list tl = {
+static struct bpf_map_type_list array_type __read_mostly = {
 	.ops = &array_ops,
 	.type = BPF_MAP_TYPE_ARRAY,
 };
 
 static int __init register_array_map(void)
 {
-	bpf_register_map_type(&tl);
+	bpf_register_map_type(&array_type);
 	return 0;
 }
 late_initcall(register_array_map);
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index b3ba436..83c209d 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -345,7 +345,7 @@ static void htab_map_free(struct bpf_map *map)
 	kfree(htab);
 }
 
-static struct bpf_map_ops htab_ops = {
+static const struct bpf_map_ops htab_ops = {
 	.map_alloc = htab_map_alloc,
 	.map_free = htab_map_free,
 	.map_get_next_key = htab_map_get_next_key,
@@ -354,14 +354,14 @@ static struct bpf_map_ops htab_ops = {
 	.map_delete_elem = htab_map_delete_elem,
 };
 
-static struct bpf_map_type_list tl = {
+static struct bpf_map_type_list htab_type __read_mostly = {
 	.ops = &htab_ops,
 	.type = BPF_MAP_TYPE_HASH,
 };
 
 static int __init register_htab_map(void)
 {
-	bpf_register_map_type(&tl);
+	bpf_register_map_type(&htab_type);
 	return 0;
 }
 late_initcall(register_htab_map);
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 9e3414d..a3c7701 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -41,7 +41,7 @@ static u64 bpf_map_lookup_elem(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
 	return (unsigned long) value;
 }
 
-struct bpf_func_proto bpf_map_lookup_elem_proto = {
+const struct bpf_func_proto bpf_map_lookup_elem_proto = {
 	.func = bpf_map_lookup_elem,
 	.gpl_only = false,
 	.ret_type = RET_PTR_TO_MAP_VALUE_OR_NULL,
@@ -60,7 +60,7 @@ static u64 bpf_map_update_elem(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
 	return map->ops->map_update_elem(map, key, value, r4);
 }
 
-struct bpf_func_proto bpf_map_update_elem_proto = {
+const struct bpf_func_proto bpf_map_update_elem_proto = {
 	.func = bpf_map_update_elem,
 	.gpl_only = false,
 	.ret_type = RET_INTEGER,
@@ -80,7 +80,7 @@ static u64 bpf_map_delete_elem(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
 	return map->ops->map_delete_elem(map, key);
 }
 
-struct bpf_func_proto bpf_map_delete_elem_proto = {
+const struct bpf_func_proto bpf_map_delete_elem_proto = {
 	.func = bpf_map_delete_elem,
 	.gpl_only = false,
 	.ret_type = RET_INTEGER,
diff --git a/net/core/filter.c b/net/core/filter.c
index ec9baea..e823da5 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -1159,19 +1159,19 @@ static bool sock_filter_is_valid_access(int off, int size, enum bpf_access_type
 	return false;
 }
 
-static struct bpf_verifier_ops sock_filter_ops = {
+static const struct bpf_verifier_ops sock_filter_ops = {
 	.get_func_proto = sock_filter_func_proto,
 	.is_valid_access = sock_filter_is_valid_access,
 };
 
-static struct bpf_prog_type_list tl = {
+static struct bpf_prog_type_list sock_filter_type __read_mostly = {
 	.ops = &sock_filter_ops,
 	.type = BPF_PROG_TYPE_SOCKET_FILTER,
 };
 
 static int __init register_sock_filter_ops(void)
 {
-	bpf_register_prog_type(&tl);
+	bpf_register_prog_type(&sock_filter_type);
 	return 0;
 }
 late_initcall(register_sock_filter_ops);
-- 
1.9.3


* [PATCH net-next 3/7] ebpf: check first for MAXINSNS in bpf_prog_load
  2015-02-11  0:15 [RFC PATCH net-next 0/7] eBPF support for cls_bpf Daniel Borkmann
  2015-02-11  0:15 ` [PATCH net-next 1/7] ebpf: remove kernel test stubs Daniel Borkmann
  2015-02-11  0:15 ` [PATCH net-next 2/7] ebpf: constify various function pointer structs Daniel Borkmann
@ 2015-02-11  0:15 ` Daniel Borkmann
  2015-02-11  1:21   ` Alexei Starovoitov
  2015-02-11  0:15 ` [PATCH net-next 4/7] ebpf: extend program type/subsystem registration Daniel Borkmann
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 13+ messages in thread
From: Daniel Borkmann @ 2015-02-11  0:15 UTC (permalink / raw)
  To: jiri; +Cc: ast, netdev, Daniel Borkmann

Just minor ... before doing all the copying work, we may want
to check the instruction count earlier. Also, we may want to
warn the user in case we would otherwise need to truncate the
license information.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 kernel/bpf/syscall.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 536edc2..73b105c 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -473,25 +473,26 @@ static int bpf_prog_load(union bpf_attr *attr)
 {
 	enum bpf_prog_type type = attr->prog_type;
 	struct bpf_prog *prog;
-	int err;
 	char license[128];
 	bool is_gpl;
+	int err;
 
 	if (CHECK_ATTR(BPF_PROG_LOAD))
 		return -EINVAL;
+	if (attr->insn_cnt >= BPF_MAXINSNS)
+		return -EINVAL;
 
 	/* copy eBPF program license from user space */
-	if (strncpy_from_user(license, u64_to_ptr(attr->license),
-			      sizeof(license) - 1) < 0)
-		return -EFAULT;
-	license[sizeof(license) - 1] = 0;
+	err = strncpy_from_user(license, u64_to_ptr(attr->license),
+				sizeof(license));
+	if (err == sizeof(license))
+		err = -ERANGE;
+	if (err < 0)
+		return err;
 
 	/* eBPF programs must be GPL compatible to use GPL-ed functions */
 	is_gpl = license_is_gpl_compatible(license);
 
-	if (attr->insn_cnt >= BPF_MAXINSNS)
-		return -EINVAL;
-
 	/* plain bpf_prog allocation */
 	prog = bpf_prog_alloc(bpf_prog_size(attr->insn_cnt), GFP_USER);
 	if (!prog)
-- 
1.9.3


* [PATCH net-next 4/7] ebpf: extend program type/subsystem registration
  2015-02-11  0:15 [RFC PATCH net-next 0/7] eBPF support for cls_bpf Daniel Borkmann
                   ` (2 preceding siblings ...)
  2015-02-11  0:15 ` [PATCH net-next 3/7] ebpf: check first for MAXINSNS in bpf_prog_load Daniel Borkmann
@ 2015-02-11  0:15 ` Daniel Borkmann
  2015-02-11  0:15 ` [PATCH net-next 5/7] ebpf: export BPF_PSEUDO_MAP_FD to uapi Daniel Borkmann
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Daniel Borkmann @ 2015-02-11  0:15 UTC (permalink / raw)
  To: jiri; +Cc: ast, netdev, Daniel Borkmann

When various subsystems/modules start to make use of eBPF,
e.g. cls_bpf, act_bpf, ovs, ..., we need to make sure they can
register their program types only once.

Moreover, we also need to serialize registrations; currently,
program type registration is done without any locks. (We should
make sure not to race in the future when we allow registration
from modules.)

Last but not least, we need to be able to register subsystems
from module context, as it's not sufficient to have them
available only as built-ins at all times. Built-in subsystems
don't need to provide an owner, though.
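
A minimal usage sketch of the registration API added here, for a
hypothetical module (foo_verifier_ops and the chosen program type
are placeholders, not part of this patch; cls_bpf in the last patch
of this series is the real user):

  static struct bpf_prog_type_list foo_type = {
    .ops   = &foo_verifier_ops,
    .type  = BPF_PROG_TYPE_UNSPEC,
    .owner = THIS_MODULE,
  };

  static int __init foo_init(void)
  {
    /* fails with -EBUSY if the type is already registered */
    return bpf_register_prog_type(&foo_type);
  }

  static void __exit foo_exit(void)
  {
    /* waits for outstanding RCU readers before the module text goes */
    bpf_unregister_prog_type(&foo_type);
  }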

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 include/linux/bpf.h   | 11 +++----
 kernel/bpf/helpers.c  |  3 ++
 kernel/bpf/syscall.c  | 79 +++++++++++++++++++++++++++++++++++++++------------
 kernel/bpf/verifier.c | 15 +++++-----
 net/core/filter.c     |  9 +++---
 5 files changed, 83 insertions(+), 34 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 7844686..4fe1bd3 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -110,21 +110,22 @@ struct bpf_verifier_ops {
 struct bpf_prog_type_list {
 	struct list_head list_node;
 	const struct bpf_verifier_ops *ops;
+	struct module *owner;
 	enum bpf_prog_type type;
 };
 
-void bpf_register_prog_type(struct bpf_prog_type_list *tl);
+int bpf_register_prog_type(struct bpf_prog_type_list *tl);
+void bpf_unregister_prog_type(struct bpf_prog_type_list *tl);
 
 struct bpf_prog;
 
 struct bpf_prog_aux {
 	atomic_t refcnt;
-	bool is_gpl_compatible;
-	enum bpf_prog_type prog_type;
-	const struct bpf_verifier_ops *ops;
+	bool gpl_compatible;
+	const struct bpf_prog_type_list *tl;
+	struct bpf_prog *prog;
 	struct bpf_map **used_maps;
 	u32 used_map_cnt;
-	struct bpf_prog *prog;
 	struct work_struct work;
 };
 
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index a3c7701..58efb27 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -48,6 +48,7 @@ const struct bpf_func_proto bpf_map_lookup_elem_proto = {
 	.arg1_type = ARG_CONST_MAP_PTR,
 	.arg2_type = ARG_PTR_TO_MAP_KEY,
 };
+EXPORT_SYMBOL_GPL(bpf_map_lookup_elem_proto);
 
 static u64 bpf_map_update_elem(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
 {
@@ -69,6 +70,7 @@ const struct bpf_func_proto bpf_map_update_elem_proto = {
 	.arg3_type = ARG_PTR_TO_MAP_VALUE,
 	.arg4_type = ARG_ANYTHING,
 };
+EXPORT_SYMBOL_GPL(bpf_map_update_elem_proto);
 
 static u64 bpf_map_delete_elem(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
 {
@@ -87,3 +89,4 @@ const struct bpf_func_proto bpf_map_delete_elem_proto = {
 	.arg1_type = ARG_CONST_MAP_PTR,
 	.arg2_type = ARG_PTR_TO_MAP_KEY,
 };
+EXPORT_SYMBOL_GPL(bpf_map_delete_elem_proto);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 73b105c..bacec89 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -15,6 +15,7 @@
 #include <linux/anon_inodes.h>
 #include <linux/file.h>
 #include <linux/license.h>
+#include <linux/module.h>
 #include <linux/filter.h>
 
 static LIST_HEAD(bpf_map_types);
@@ -102,7 +103,6 @@ static int map_create(union bpf_attr *attr)
 	atomic_set(&map->refcnt, 1);
 
 	err = anon_inode_getfd("bpf-map", &bpf_map_fops, map, O_RDWR | O_CLOEXEC);
-
 	if (err < 0)
 		/* failed to allocate fd */
 		goto free_map;
@@ -345,26 +345,61 @@ err_put:
 	return err;
 }
 
+static DEFINE_SPINLOCK(bpf_prog_types_lock);
 static LIST_HEAD(bpf_prog_types);
 
-static int find_prog_type(enum bpf_prog_type type, struct bpf_prog *prog)
+static void bpf_put_prog_type(const struct bpf_prog_type_list *tl)
+{
+	module_put(tl->owner);
+}
+
+static const struct bpf_prog_type_list *find_prog_type(enum bpf_prog_type type,
+						       bool get_ref)
 {
-	struct bpf_prog_type_list *tl;
+	struct bpf_prog_type_list *tl, *ret = NULL;
 
-	list_for_each_entry(tl, &bpf_prog_types, list_node) {
+	rcu_read_lock();
+	list_for_each_entry_rcu(tl, &bpf_prog_types, list_node) {
 		if (tl->type == type) {
-			prog->aux->ops = tl->ops;
-			prog->aux->prog_type = type;
-			return 0;
+			if (!get_ref || try_module_get(tl->owner))
+				ret = tl;
+			break;
 		}
 	}
-	return -EINVAL;
+	rcu_read_unlock();
+
+	return ret;
+}
+
+int bpf_register_prog_type(struct bpf_prog_type_list *tl)
+{
+	if (find_prog_type(tl->type, false))
+		return -EBUSY;
+
+	spin_lock(&bpf_prog_types_lock);
+	list_add_tail_rcu(&tl->list_node, &bpf_prog_types);
+	spin_unlock(&bpf_prog_types_lock);
+
+	return 0;
 }
+EXPORT_SYMBOL_GPL(bpf_register_prog_type);
 
-void bpf_register_prog_type(struct bpf_prog_type_list *tl)
+void bpf_unregister_prog_type(struct bpf_prog_type_list *tl)
 {
-	list_add(&tl->list_node, &bpf_prog_types);
+	spin_lock(&bpf_prog_types_lock);
+	list_del_rcu(&tl->list_node);
+	spin_unlock(&bpf_prog_types_lock);
+
+	/* Wait for outstanding readers to complete before a prog
+	 * type from a module gets removed entirely.
+	 *
+	 * A try_module_get() should fail by now as our module is
+	 * in "going" state since no refs are held anymore and
+	 * module_exit() handler being called.
+	 */
+	synchronize_rcu();
 }
+EXPORT_SYMBOL_GPL(bpf_unregister_prog_type);
 
 /* fixup insn->imm field of bpf_call instructions:
  * if (insn->imm == BPF_FUNC_map_lookup_elem)
@@ -384,13 +419,15 @@ static void fixup_bpf_calls(struct bpf_prog *prog)
 		struct bpf_insn *insn = &prog->insnsi[i];
 
 		if (insn->code == (BPF_JMP | BPF_CALL)) {
+			const struct bpf_verifier_ops *ops = prog->aux->tl->ops;
+
 			/* we reach here when program has bpf_call instructions
 			 * and it passed bpf_check(), means that
 			 * ops->get_func_proto must have been supplied, check it
 			 */
-			BUG_ON(!prog->aux->ops->get_func_proto);
+			BUG_ON(!ops->get_func_proto);
 
-			fn = prog->aux->ops->get_func_proto(insn->imm);
+			fn = ops->get_func_proto(insn->imm);
 			/* all functions that have prototype and verifier allowed
 			 * programs to call them, must be real in-kernel functions
 			 */
@@ -414,10 +451,12 @@ static void free_used_maps(struct bpf_prog_aux *aux)
 void bpf_prog_put(struct bpf_prog *prog)
 {
 	if (atomic_dec_and_test(&prog->aux->refcnt)) {
+		bpf_put_prog_type(prog->aux->tl);
 		free_used_maps(prog->aux);
 		bpf_prog_free(prog);
 	}
 }
+EXPORT_SYMBOL_GPL(bpf_prog_put);
 
 static int bpf_prog_release(struct inode *inode, struct file *filp)
 {
@@ -457,7 +496,6 @@ struct bpf_prog *bpf_prog_get(u32 ufd)
 	struct bpf_prog *prog;
 
 	prog = get_prog(f);
-
 	if (IS_ERR(prog))
 		return prog;
 
@@ -465,6 +503,7 @@ struct bpf_prog *bpf_prog_get(u32 ufd)
 	fdput(f);
 	return prog;
 }
+EXPORT_SYMBOL_GPL(bpf_prog_get);
 
 /* last field in 'union bpf_attr' used by this command */
 #define	BPF_PROG_LOAD_LAST_FIELD log_buf
@@ -472,6 +511,7 @@ struct bpf_prog *bpf_prog_get(u32 ufd)
 static int bpf_prog_load(union bpf_attr *attr)
 {
 	enum bpf_prog_type type = attr->prog_type;
+	const struct bpf_prog_type_list *tl;
 	struct bpf_prog *prog;
 	char license[128];
 	bool is_gpl;
@@ -509,16 +549,19 @@ static int bpf_prog_load(union bpf_attr *attr)
 	prog->jited = false;
 
 	atomic_set(&prog->aux->refcnt, 1);
-	prog->aux->is_gpl_compatible = is_gpl;
+	prog->aux->gpl_compatible = is_gpl;
 
 	/* find program type: socket_filter vs tracing_filter */
-	err = find_prog_type(type, prog);
-	if (err < 0)
+	tl = find_prog_type(type, true);
+	if (!tl) {
+		err = -EINVAL;
 		goto free_prog;
+	}
+
+	prog->aux->tl = tl;
 
 	/* run eBPF verifier */
 	err = bpf_check(prog, attr);
-
 	if (err < 0)
 		goto free_used_maps;
 
@@ -529,7 +572,6 @@ static int bpf_prog_load(union bpf_attr *attr)
 	bpf_prog_select_runtime(prog);
 
 	err = anon_inode_getfd("bpf-prog", &bpf_prog_fops, prog, O_RDWR | O_CLOEXEC);
-
 	if (err < 0)
 		/* failed to allocate fd */
 		goto free_used_maps;
@@ -537,6 +579,7 @@ static int bpf_prog_load(union bpf_attr *attr)
 	return err;
 
 free_used_maps:
+	bpf_put_prog_type(prog->aux->tl);
 	free_used_maps(prog->aux);
 free_prog:
 	bpf_prog_free(prog);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a28e09c..857e2fc 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -627,8 +627,9 @@ static int check_map_access(struct verifier_env *env, u32 regno, int off,
 static int check_ctx_access(struct verifier_env *env, int off, int size,
 			    enum bpf_access_type t)
 {
-	if (env->prog->aux->ops->is_valid_access &&
-	    env->prog->aux->ops->is_valid_access(off, size, t))
+	const struct bpf_verifier_ops *ops = env->prog->aux->tl->ops;
+
+	if (ops->is_valid_access && ops->is_valid_access(off, size, t))
 		return 0;
 
 	verbose("invalid bpf_context access off=%d size=%d\n", off, size);
@@ -831,6 +832,7 @@ static int check_func_arg(struct verifier_env *env, u32 regno,
 static int check_call(struct verifier_env *env, int func_id)
 {
 	struct verifier_state *state = &env->cur_state;
+	const struct bpf_verifier_ops *ops = env->prog->aux->tl->ops;
 	const struct bpf_func_proto *fn = NULL;
 	struct reg_state *regs = state->regs;
 	struct bpf_map *map = NULL;
@@ -843,16 +845,15 @@ static int check_call(struct verifier_env *env, int func_id)
 		return -EINVAL;
 	}
 
-	if (env->prog->aux->ops->get_func_proto)
-		fn = env->prog->aux->ops->get_func_proto(func_id);
-
+	if (ops->get_func_proto)
+		fn = ops->get_func_proto(func_id);
 	if (!fn) {
 		verbose("unknown func %d\n", func_id);
 		return -EINVAL;
 	}
 
 	/* eBPF programs must be GPL compatible to use GPL-ed functions */
-	if (!env->prog->aux->is_gpl_compatible && fn->gpl_only) {
+	if (!env->prog->aux->gpl_compatible && fn->gpl_only) {
 		verbose("cannot call GPL only function from proprietary program\n");
 		return -EINVAL;
 	}
@@ -1194,7 +1195,7 @@ static int check_ld_abs(struct verifier_env *env, struct bpf_insn *insn)
 	struct reg_state *reg;
 	int i, err;
 
-	if (env->prog->aux->prog_type != BPF_PROG_TYPE_SOCKET_FILTER) {
+	if (env->prog->aux->tl->type != BPF_PROG_TYPE_SOCKET_FILTER) {
 		verbose("BPF_LD_ABS|IND instructions are only allowed in socket filters\n");
 		return -EINVAL;
 	}
diff --git a/net/core/filter.c b/net/core/filter.c
index e823da5..d76560f 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -814,7 +814,8 @@ static void bpf_release_orig_filter(struct bpf_prog *fp)
 
 static void __bpf_prog_release(struct bpf_prog *prog)
 {
-	if (prog->aux->prog_type == BPF_PROG_TYPE_SOCKET_FILTER) {
+	if (prog->aux->tl &&
+	    prog->aux->tl->type == BPF_PROG_TYPE_SOCKET_FILTER) {
 		bpf_prog_put(prog);
 	} else {
 		bpf_release_orig_filter(prog);
@@ -1106,7 +1107,7 @@ int sk_attach_bpf(u32 ufd, struct sock *sk)
 	if (IS_ERR(prog))
 		return PTR_ERR(prog);
 
-	if (prog->aux->prog_type != BPF_PROG_TYPE_SOCKET_FILTER) {
+	if (prog->aux->tl->type != BPF_PROG_TYPE_SOCKET_FILTER) {
 		/* valid fd, but invalid program type */
 		bpf_prog_put(prog);
 		return -EINVAL;
@@ -1171,8 +1172,7 @@ static struct bpf_prog_type_list sock_filter_type __read_mostly = {
 
 static int __init register_sock_filter_ops(void)
 {
-	bpf_register_prog_type(&sock_filter_type);
-	return 0;
+	return bpf_register_prog_type(&sock_filter_type);
 }
 late_initcall(register_sock_filter_ops);
 #else
@@ -1181,6 +1181,7 @@ int sk_attach_bpf(u32 ufd, struct sock *sk)
 	return -EOPNOTSUPP;
 }
 #endif
+
 int sk_detach_filter(struct sock *sk)
 {
 	int ret = -ENOENT;
-- 
1.9.3


* [PATCH net-next 5/7] ebpf: export BPF_PSEUDO_MAP_FD to uapi
  2015-02-11  0:15 [RFC PATCH net-next 0/7] eBPF support for cls_bpf Daniel Borkmann
                   ` (3 preceding siblings ...)
  2015-02-11  0:15 ` [PATCH net-next 4/7] ebpf: extend program type/subsystem registration Daniel Borkmann
@ 2015-02-11  0:15 ` Daniel Borkmann
  2015-02-11  1:39   ` Alexei Starovoitov
  2015-02-11  0:15 ` [PATCH net-next 6/7] ebpf: remove CONFIG_BPF_SYSCALL ifdefs in socket filter code Daniel Borkmann
  2015-02-11  0:15 ` [PATCH net-next 7/7] cls_bpf: add initial eBPF support for programmable classifiers Daniel Borkmann
  6 siblings, 1 reply; 13+ messages in thread
From: Daniel Borkmann @ 2015-02-11  0:15 UTC (permalink / raw)
  To: jiri; +Cc: ast, netdev, Daniel Borkmann

We need to export BPF_PSEUDO_MAP_FD to user space, as it's used in
the ELF BPF loader where instructions are being loaded that need
map fixups (relocations). An initial stage loads all maps into the
kernel, and later on replaces related instructions in the eBPF blob
with BPF_PSEUDO_MAP_FD as the source register and the actual map fd
as the immediate value. The kernel verifier recognizes this marker
and internally replaces the map fd with a real map pointer.
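
For illustration, a rough sketch of what the loader side does with
this (field names are those of the uapi struct bpf_insn; insns[i]
and map_fd are assumed to come from the ELF relocation pass and from
a prior BPF_MAP_CREATE, respectively):

  /* patch the ld_imm64 insn that referenced the map symbol */
  insns[i].src_reg = BPF_PSEUDO_MAP_FD;
  insns[i].imm     = map_fd;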

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 include/linux/filter.h   | 2 --
 include/uapi/linux/bpf.h | 2 ++
 samples/bpf/libbpf.h     | 4 +++-
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index caac208..5e3863d 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -145,8 +145,6 @@ struct bpf_prog_aux;
 		.off   = 0,					\
 		.imm   = ((__u64) (IMM)) >> 32 })
 
-#define BPF_PSEUDO_MAP_FD	1
-
 /* pseudo BPF_LD_IMM64 insn used to refer to process-local map_fd */
 #define BPF_LD_MAP_FD(DST, MAP_FD)				\
 	BPF_LD_IMM64_RAW(DST, BPF_PSEUDO_MAP_FD, MAP_FD)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 45da7ec..0248180 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -120,6 +120,8 @@ enum bpf_prog_type {
 	BPF_PROG_TYPE_SOCKET_FILTER,
 };
 
+#define BPF_PSEUDO_MAP_FD	1
+
 /* flags for BPF_MAP_UPDATE_ELEM command */
 #define BPF_ANY		0 /* create new element or update existing */
 #define BPF_NOEXIST	1 /* create new element if it didn't exist */
diff --git a/samples/bpf/libbpf.h b/samples/bpf/libbpf.h
index 58c5fe1..a6bb7e9 100644
--- a/samples/bpf/libbpf.h
+++ b/samples/bpf/libbpf.h
@@ -92,7 +92,9 @@ extern char bpf_log_buf[LOG_BUF_SIZE];
 		.off   = 0,					\
 		.imm   = ((__u64) (IMM)) >> 32 })
 
-#define BPF_PSEUDO_MAP_FD	1
+#ifndef BPF_PSEUDO_MAP_FD
+# define BPF_PSEUDO_MAP_FD	1
+#endif
 
 /* pseudo BPF_LD_IMM64 insn used to refer to process-local map_fd */
 #define BPF_LD_MAP_FD(DST, MAP_FD)				\
-- 
1.9.3


* [PATCH net-next 6/7] ebpf: remove CONFIG_BPF_SYSCALL ifdefs in socket filter code
  2015-02-11  0:15 [RFC PATCH net-next 0/7] eBPF support for cls_bpf Daniel Borkmann
                   ` (4 preceding siblings ...)
  2015-02-11  0:15 ` [PATCH net-next 5/7] ebpf: export BPF_PSEUDO_MAP_FD to uapi Daniel Borkmann
@ 2015-02-11  0:15 ` Daniel Borkmann
  2015-02-11  0:15 ` [PATCH net-next 7/7] cls_bpf: add initial eBPF support for programmable classifiers Daniel Borkmann
  6 siblings, 0 replies; 13+ messages in thread
From: Daniel Borkmann @ 2015-02-11  0:15 UTC (permalink / raw)
  To: jiri; +Cc: ast, netdev, Daniel Borkmann

Socket filter code and other subsystems with upcoming eBPF support
should not need to deal with whether CONFIG_BPF_SYSCALL is defined
or not. Having the bpf syscall as a config option is a nice thing,
and I'd expect it to stay that way for expert users (I presume one
day its default setting might change, though), but code making use
of it should not care whether it's actually enabled. Instead, hide
this via header files and let the rest of the code deal with it.
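
A small sketch of the resulting caller pattern in a hypothetical
subsystem (fragment only; with CONFIG_BPF_SYSCALL=n the new inline
stubs make bpf_prog_get() return ERR_PTR(-EOPNOTSUPP) and turn
bpf_prog_put() into a no-op):

  struct bpf_prog *prog;

  prog = bpf_prog_get(ufd);
  if (IS_ERR(prog))
    return PTR_ERR(prog);

  /* ... use the program ... */

  bpf_prog_put(prog);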

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 include/linux/bpf.h |  27 +++++++--
 net/core/filter.c   | 166 ++++++++++++++++++++++++----------------------------
 2 files changed, 100 insertions(+), 93 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4fe1bd3..def0103 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -114,9 +114,6 @@ struct bpf_prog_type_list {
 	enum bpf_prog_type type;
 };
 
-int bpf_register_prog_type(struct bpf_prog_type_list *tl);
-void bpf_unregister_prog_type(struct bpf_prog_type_list *tl);
-
 struct bpf_prog;
 
 struct bpf_prog_aux {
@@ -130,11 +127,31 @@ struct bpf_prog_aux {
 };
 
 #ifdef CONFIG_BPF_SYSCALL
+int bpf_register_prog_type(struct bpf_prog_type_list *tl);
+void bpf_unregister_prog_type(struct bpf_prog_type_list *tl);
+
 void bpf_prog_put(struct bpf_prog *prog);
+struct bpf_prog *bpf_prog_get(u32 ufd);
 #else
-static inline void bpf_prog_put(struct bpf_prog *prog) {}
+static inline int bpf_register_prog_type(struct bpf_prog_type_list *tl)
+{
+	return 0;
+}
+
+static inline void bpf_unregister_prog_type(struct bpf_prog_type_list *tl)
+{
+}
+
+static inline struct bpf_prog *bpf_prog_get(u32 ufd)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline void bpf_prog_put(struct bpf_prog *prog)
+{
+}
 #endif
-struct bpf_prog *bpf_prog_get(u32 ufd);
+
 /* verify correctness of eBPF program */
 int bpf_check(struct bpf_prog *fp, union bpf_attr *attr);
 
diff --git a/net/core/filter.c b/net/core/filter.c
index d76560f..306b860 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -1020,6 +1020,46 @@ void bpf_prog_destroy(struct bpf_prog *fp)
 }
 EXPORT_SYMBOL_GPL(bpf_prog_destroy);
 
+int sk_attach_bpf(u32 ufd, struct sock *sk)
+{
+	struct sk_filter *fp, *old_fp;
+	struct bpf_prog *prog;
+
+	if (sock_flag(sk, SOCK_FILTER_LOCKED))
+		return -EPERM;
+
+	prog = bpf_prog_get(ufd);
+	if (IS_ERR(prog))
+		return PTR_ERR(prog);
+
+	if (prog->aux->tl->type != BPF_PROG_TYPE_SOCKET_FILTER) {
+		bpf_prog_put(prog);
+		return -EINVAL;
+	}
+
+	fp = kmalloc(sizeof(*fp), GFP_KERNEL);
+	if (!fp) {
+		bpf_prog_put(prog);
+		return -ENOMEM;
+	}
+
+	fp->prog = prog;
+	atomic_set(&fp->refcnt, 0);
+
+	if (!sk_filter_charge(sk, fp)) {
+		__sk_filter_release(fp);
+		return -ENOMEM;
+	}
+
+	old_fp = rcu_dereference_protected(sk->sk_filter,
+					   sock_owned_by_user(sk));
+	rcu_assign_pointer(sk->sk_filter, fp);
+	if (old_fp)
+		sk_filter_uncharge(sk, old_fp);
+
+	return 0;
+}
+
 /**
  *	sk_attach_filter - attach a socket filter
  *	@fprog: the filter program
@@ -1094,94 +1134,6 @@ int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk)
 }
 EXPORT_SYMBOL_GPL(sk_attach_filter);
 
-#ifdef CONFIG_BPF_SYSCALL
-int sk_attach_bpf(u32 ufd, struct sock *sk)
-{
-	struct sk_filter *fp, *old_fp;
-	struct bpf_prog *prog;
-
-	if (sock_flag(sk, SOCK_FILTER_LOCKED))
-		return -EPERM;
-
-	prog = bpf_prog_get(ufd);
-	if (IS_ERR(prog))
-		return PTR_ERR(prog);
-
-	if (prog->aux->tl->type != BPF_PROG_TYPE_SOCKET_FILTER) {
-		/* valid fd, but invalid program type */
-		bpf_prog_put(prog);
-		return -EINVAL;
-	}
-
-	fp = kmalloc(sizeof(*fp), GFP_KERNEL);
-	if (!fp) {
-		bpf_prog_put(prog);
-		return -ENOMEM;
-	}
-	fp->prog = prog;
-
-	atomic_set(&fp->refcnt, 0);
-
-	if (!sk_filter_charge(sk, fp)) {
-		__sk_filter_release(fp);
-		return -ENOMEM;
-	}
-
-	old_fp = rcu_dereference_protected(sk->sk_filter,
-					   sock_owned_by_user(sk));
-	rcu_assign_pointer(sk->sk_filter, fp);
-
-	if (old_fp)
-		sk_filter_uncharge(sk, old_fp);
-
-	return 0;
-}
-
-/* allow socket filters to call
- * bpf_map_lookup_elem(), bpf_map_update_elem(), bpf_map_delete_elem()
- */
-static const struct bpf_func_proto *sock_filter_func_proto(enum bpf_func_id func_id)
-{
-	switch (func_id) {
-	case BPF_FUNC_map_lookup_elem:
-		return &bpf_map_lookup_elem_proto;
-	case BPF_FUNC_map_update_elem:
-		return &bpf_map_update_elem_proto;
-	case BPF_FUNC_map_delete_elem:
-		return &bpf_map_delete_elem_proto;
-	default:
-		return NULL;
-	}
-}
-
-static bool sock_filter_is_valid_access(int off, int size, enum bpf_access_type type)
-{
-	/* skb fields cannot be accessed yet */
-	return false;
-}
-
-static const struct bpf_verifier_ops sock_filter_ops = {
-	.get_func_proto = sock_filter_func_proto,
-	.is_valid_access = sock_filter_is_valid_access,
-};
-
-static struct bpf_prog_type_list sock_filter_type __read_mostly = {
-	.ops = &sock_filter_ops,
-	.type = BPF_PROG_TYPE_SOCKET_FILTER,
-};
-
-static int __init register_sock_filter_ops(void)
-{
-	return bpf_register_prog_type(&sock_filter_type);
-}
-late_initcall(register_sock_filter_ops);
-#else
-int sk_attach_bpf(u32 ufd, struct sock *sk)
-{
-	return -EOPNOTSUPP;
-}
-#endif
-
 int sk_detach_filter(struct sock *sk)
 {
 	int ret = -ENOENT;
@@ -1241,3 +1193,41 @@ out:
 	release_sock(sk);
 	return ret;
 }
+
+static const struct bpf_func_proto *
+sock_filter_func_proto(enum bpf_func_id func_id)
+{
+	switch (func_id) {
+	case BPF_FUNC_map_lookup_elem:
+		return &bpf_map_lookup_elem_proto;
+	case BPF_FUNC_map_update_elem:
+		return &bpf_map_update_elem_proto;
+	case BPF_FUNC_map_delete_elem:
+		return &bpf_map_delete_elem_proto;
+	default:
+		return NULL;
+	}
+}
+
+static bool sock_filter_is_valid_access(int off, int size,
+					enum bpf_access_type type)
+{
+	/* skb fields cannot be accessed yet */
+	return false;
+}
+
+static const struct bpf_verifier_ops sock_filter_ops = {
+	.get_func_proto = sock_filter_func_proto,
+	.is_valid_access = sock_filter_is_valid_access,
+};
+
+static struct bpf_prog_type_list sock_filter_type __read_mostly = {
+	.ops = &sock_filter_ops,
+	.type = BPF_PROG_TYPE_SOCKET_FILTER,
+};
+
+static int __init register_sock_filter_ops(void)
+{
+	return bpf_register_prog_type(&sock_filter_type);
+}
+late_initcall(register_sock_filter_ops);
-- 
1.9.3


* [PATCH net-next 7/7] cls_bpf: add initial eBPF support for programmable classifiers
  2015-02-11  0:15 [RFC PATCH net-next 0/7] eBPF support for cls_bpf Daniel Borkmann
                   ` (5 preceding siblings ...)
  2015-02-11  0:15 ` [PATCH net-next 6/7] ebpf: remove CONFIG_BPF_SYSCALL ifdefs in socket filter code Daniel Borkmann
@ 2015-02-11  0:15 ` Daniel Borkmann
  6 siblings, 0 replies; 13+ messages in thread
From: Daniel Borkmann @ 2015-02-11  0:15 UTC (permalink / raw)
  To: jiri; +Cc: ast, netdev, Daniel Borkmann

This work extends the classic BPF programmable classifier by
widening its scope to native eBPF code as well. This allows for
implementing custom C-like classifiers, compiling them with the
LLVM eBPF backend and loading the resulting object file via tc
into the kernel.

Simple, minimal toy example:

  #include <linux/ip.h>
  #include <linux/if_ether.h>
  #include <linux/bpf.h>

  #include "tc_bpf_api.h"

  __section("classify")
  int cls_main(struct sk_buff *skb)
  {
    return (0x800 << 16) | load_byte(skb, ETH_HLEN + __builtin_offsetof(struct iphdr, tos));
  }

  char __license[] __section("license") = "GPL";

The classifier can then be compiled into eBPF opcodes and loaded via
tc, f.e.:

  clang -O2 -emit-llvm -c cls.c -o - | llc -march=bpf -filetype=obj -o cls.o
  tc filter add dev em1 parent 1: bpf run object-file cls.o [...]

As has been demonstrated, the scope can even reach up to a fully
fledged flow dissector (similar to samples/bpf/sockex2_kern.c).
For tc, maps are allowed to be used, but from kernel context only;
in other words, eBPF code can keep state across filter invocations.
Similarly as in socket filters, we may extend the functionality
available to eBPF classifiers over time depending on the use cases.
For that purpose, I have added the BPF_PROG_TYPE_SCHED_CLS program
type for the cls_bpf classifier module, so we can allow additional
functions/accessors.
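
For illustration, a rough sketch of such a stateful classifier that
keeps a packet counter in an array map across invocations (the map
definition layout and helper names below loosely follow the
samples/bpf conventions and are assumptions here; the exact
tc_bpf_api.h interface may differ):

  #include <linux/ip.h>
  #include <linux/if_ether.h>
  #include <linux/bpf.h>

  #include "tc_bpf_api.h"

  struct bpf_map_def __section("maps") count_map = {
    .type        = BPF_MAP_TYPE_ARRAY,
    .key_size    = sizeof(__u32),
    .value_size  = sizeof(__u64),
    .max_entries = 1,
  };

  __section("classify")
  int cls_main(struct sk_buff *skb)
  {
    __u32 key = 0;
    __u64 *count = map_lookup_elem(&count_map, &key);

    if (count)
      __sync_fetch_and_add(count, 1);

    return (0x800 << 16) | load_byte(skb, ETH_HLEN +
             __builtin_offsetof(struct iphdr, tos));
  }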

I was wondering whether cls_bpf and act_bpf may share C programs;
I can imagine that at some point we may introduce i) some common
handlers for both (or even beyond their scope), and/or ii) some
restricted function space for each of them. Both can be abstracted
through struct bpf_verifier_ops in the future. The context of
cls_bpf versus act_bpf is slightly different, though: a cls_bpf
program will return a specific classid, whereas act_bpf returns a
drop/non-drop code. That said, we can surely have a "classify" and
an "action" section in a single object file, or, given the
mentioned constraint, add the possibility of a shared section.

The workflow for getting native eBPF running from tc [1] is as
follows: for f_bpf, I've added slightly modified ELF parser code
from Alexei's kernel sample, which reads out the LLVM-compiled
object, sets up maps (and dynamically fixes up their map fds) if
any, and loads all the eBPF instructions centrally through the
bpf syscall. The resulting fd of the loaded program itself is then
passed down to cls_bpf, which looks up the struct bpf_prog from
the fd store and holds a reference on it, so that it stays
available beyond the lifetime of the tc invocation. On tc filter
destruction, it will then drop its reference.

  [1] http://git.breakpoint.cc/cgit/dborkman/iproute2.git/log/?h=ebpf
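
A rough sketch of that loader flow on the tc side (fragment only;
the bpf_prog_load() call matches the samples/bpf/libbpf.h usage seen
in patch 1/7, while the bpf_create_map() signature, the map
parameters and the parse_elf_section() helper are assumptions for
illustration):

  struct bpf_insn *insns;
  int map_fd, prog_fd, insn_cnt;

  /* 1) create the maps referenced by the object */
  map_fd = bpf_create_map(BPF_MAP_TYPE_ARRAY, sizeof(__u32),
                          sizeof(__u64), 1);

  /* 2) parse the ELF object and patch map references with
   *    BPF_PSEUDO_MAP_FD / map_fd (see patch 5/7)
   */
  insn_cnt = parse_elf_section("cls.o", "classify", &insns, map_fd);

  /* 3) load the program; the returned fd is what gets handed to
   *    cls_bpf via the new TCA_BPF_EFD attribute
   */
  prog_fd = bpf_prog_load(BPF_PROG_TYPE_SCHED_CLS, insns,
                          insn_cnt * sizeof(struct bpf_insn), "GPL");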

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 include/uapi/linux/bpf.h     |   1 +
 include/uapi/linux/pkt_cls.h |   1 +
 kernel/bpf/verifier.c        |  15 +++-
 net/sched/cls_bpf.c          | 200 ++++++++++++++++++++++++++++++++-----------
 4 files changed, 165 insertions(+), 52 deletions(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 0248180..3fa1af8 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -118,6 +118,7 @@ enum bpf_map_type {
 enum bpf_prog_type {
 	BPF_PROG_TYPE_UNSPEC,
 	BPF_PROG_TYPE_SOCKET_FILTER,
+	BPF_PROG_TYPE_SCHED_CLS,
 };
 
 #define BPF_PSEUDO_MAP_FD	1
diff --git a/include/uapi/linux/pkt_cls.h b/include/uapi/linux/pkt_cls.h
index 25731df..1f192cb 100644
--- a/include/uapi/linux/pkt_cls.h
+++ b/include/uapi/linux/pkt_cls.h
@@ -397,6 +397,7 @@ enum {
 	TCA_BPF_CLASSID,
 	TCA_BPF_OPS_LEN,
 	TCA_BPF_OPS,
+	TCA_BPF_EFD,
 	__TCA_BPF_MAX,
 };
 
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 857e2fc..9aa4747 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1173,6 +1173,17 @@ static int check_ld_imm(struct verifier_env *env, struct bpf_insn *insn)
 	return 0;
 }
 
+static bool may_access_skb(enum bpf_prog_type type)
+{
+	switch (type) {
+	case BPF_PROG_TYPE_SOCKET_FILTER:
+	case BPF_PROG_TYPE_SCHED_CLS:
+		return true;
+	default:
+		return false;
+	}
+}
+
 /* verify safety of LD_ABS|LD_IND instructions:
  * - they can only appear in the programs where ctx == skb
  * - since they are wrappers of function calls, they scratch R1-R5 registers,
@@ -1195,8 +1206,8 @@ static int check_ld_abs(struct verifier_env *env, struct bpf_insn *insn)
 	struct reg_state *reg;
 	int i, err;
 
-	if (env->prog->aux->tl->type != BPF_PROG_TYPE_SOCKET_FILTER) {
-		verbose("BPF_LD_ABS|IND instructions are only allowed in socket filters\n");
+	if (!may_access_skb(env->prog->aux->tl->type)) {
+		verbose("BPF_LD_ABS|IND instructions not allowed for this program type\n");
 		return -EINVAL;
 	}
 
diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
index 5f3ee9e..c6e1328 100644
--- a/net/sched/cls_bpf.c
+++ b/net/sched/cls_bpf.c
@@ -16,6 +16,8 @@
 #include <linux/types.h>
 #include <linux/skbuff.h>
 #include <linux/filter.h>
+#include <linux/bpf.h>
+
 #include <net/rtnetlink.h>
 #include <net/pkt_cls.h>
 #include <net/sock.h>
@@ -37,18 +39,27 @@ struct cls_bpf_prog {
 	struct tcf_result res;
 	struct list_head link;
 	u32 handle;
-	u16 bpf_num_ops;
+	union {
+		u32 bpf_fd;
+		u16 bpf_num_ops;
+	};
 	struct tcf_proto *tp;
 	struct rcu_head rcu;
 };
 
 static const struct nla_policy bpf_policy[TCA_BPF_MAX + 1] = {
 	[TCA_BPF_CLASSID]	= { .type = NLA_U32 },
+	[TCA_BPF_EFD]		= { .type = NLA_U32 },
 	[TCA_BPF_OPS_LEN]	= { .type = NLA_U16 },
 	[TCA_BPF_OPS]		= { .type = NLA_BINARY,
 				    .len = sizeof(struct sock_filter) * BPF_MAXINSNS },
 };
 
+static bool cls_bpf_is_ebpf(const struct cls_bpf_prog *prog)
+{
+	return prog->bpf_ops == NULL;
+}
+
 static int cls_bpf_classify(struct sk_buff *skb, const struct tcf_proto *tp,
 			    struct tcf_result *res)
 {
@@ -94,7 +105,10 @@ static void cls_bpf_delete_prog(struct tcf_proto *tp, struct cls_bpf_prog *prog)
 {
 	tcf_exts_destroy(&prog->exts);
 
-	bpf_prog_destroy(prog->filter);
+	if (cls_bpf_is_ebpf(prog))
+		bpf_prog_put(prog->filter);
+	else
+		bpf_prog_destroy(prog->filter);
 
 	kfree(prog->bpf_ops);
 	kfree(prog);
@@ -114,6 +128,7 @@ static int cls_bpf_delete(struct tcf_proto *tp, unsigned long arg)
 	list_del_rcu(&prog->link);
 	tcf_unbind_filter(tp, &prog->res);
 	call_rcu(&prog->rcu, __cls_bpf_delete_prog);
+
 	return 0;
 }
 
@@ -151,69 +166,104 @@ static unsigned long cls_bpf_get(struct tcf_proto *tp, u32 handle)
 	return ret;
 }
 
-static int cls_bpf_modify_existing(struct net *net, struct tcf_proto *tp,
-				   struct cls_bpf_prog *prog,
-				   unsigned long base, struct nlattr **tb,
-				   struct nlattr *est, bool ovr)
+static int cls_bpf_prog_from_ops(struct nlattr **tb,
+				 struct cls_bpf_prog *prog, u32 classid)
 {
 	struct sock_filter *bpf_ops;
-	struct tcf_exts exts;
-	struct sock_fprog_kern tmp;
+	struct sock_fprog_kern fprog_tmp;
 	struct bpf_prog *fp;
 	u16 bpf_size, bpf_num_ops;
-	u32 classid;
 	int ret;
 
-	if (!tb[TCA_BPF_OPS_LEN] || !tb[TCA_BPF_OPS] || !tb[TCA_BPF_CLASSID])
-		return -EINVAL;
-
-	tcf_exts_init(&exts, TCA_BPF_ACT, TCA_BPF_POLICE);
-	ret = tcf_exts_validate(net, tp, tb, est, &exts, ovr);
-	if (ret < 0)
-		return ret;
-
-	classid = nla_get_u32(tb[TCA_BPF_CLASSID]);
 	bpf_num_ops = nla_get_u16(tb[TCA_BPF_OPS_LEN]);
-	if (bpf_num_ops > BPF_MAXINSNS || bpf_num_ops == 0) {
-		ret = -EINVAL;
-		goto errout;
-	}
+	if (bpf_num_ops > BPF_MAXINSNS || bpf_num_ops == 0)
+		return -EINVAL;
 
 	bpf_size = bpf_num_ops * sizeof(*bpf_ops);
-	if (bpf_size != nla_len(tb[TCA_BPF_OPS])) {
-		ret = -EINVAL;
-		goto errout;
-	}
+	if (bpf_size != nla_len(tb[TCA_BPF_OPS]))
+		return -EINVAL;
 
 	bpf_ops = kzalloc(bpf_size, GFP_KERNEL);
-	if (bpf_ops == NULL) {
-		ret = -ENOMEM;
-		goto errout;
-	}
+	if (bpf_ops == NULL)
+		return -ENOMEM;
 
 	memcpy(bpf_ops, nla_data(tb[TCA_BPF_OPS]), bpf_size);
 
-	tmp.len = bpf_num_ops;
-	tmp.filter = bpf_ops;
+	fprog_tmp.len = bpf_num_ops;
+	fprog_tmp.filter = bpf_ops;
 
-	ret = bpf_prog_create(&fp, &tmp);
-	if (ret)
-		goto errout_free;
+	ret = bpf_prog_create(&fp, &fprog_tmp);
+	if (ret < 0) {
+		kfree(bpf_ops);
+		return ret;
+	}
 
 	prog->bpf_num_ops = bpf_num_ops;
 	prog->bpf_ops = bpf_ops;
 	prog->filter = fp;
 	prog->res.classid = classid;
 
+	return 0;
+}
+
+static int cls_bpf_prog_from_efd(struct nlattr **tb,
+				 struct cls_bpf_prog *prog, u32 classid)
+{
+	struct bpf_prog *fp;
+	u32 bpf_fd;
+
+	bpf_fd = nla_get_u32(tb[TCA_BPF_EFD]);
+
+	fp = bpf_prog_get(bpf_fd);
+	if (IS_ERR(fp))
+		return PTR_ERR(fp);
+
+	if (fp->aux->tl->type != BPF_PROG_TYPE_SCHED_CLS) {
+		bpf_prog_put(fp);
+		return -EINVAL;
+	}
+
+	prog->bpf_ops = NULL;
+	prog->bpf_fd = bpf_fd;
+	prog->filter = fp;
+	prog->res.classid = classid;
+
+	return 0;
+}
+
+static int cls_bpf_modify_existing(struct net *net, struct tcf_proto *tp,
+				   struct cls_bpf_prog *prog,
+				   unsigned long base, struct nlattr **tb,
+				   struct nlattr *est, bool ovr)
+{
+	struct tcf_exts exts;
+	bool is_bpf, is_ebpf;
+	u32 classid;
+	int ret;
+
+	is_bpf = tb[TCA_BPF_OPS_LEN] && tb[TCA_BPF_OPS];
+	is_ebpf = tb[TCA_BPF_EFD];
+	if ((!is_bpf && !is_ebpf) || !tb[TCA_BPF_CLASSID])
+		return -EINVAL;
+
+	tcf_exts_init(&exts, TCA_BPF_ACT, TCA_BPF_POLICE);
+	ret = tcf_exts_validate(net, tp, tb, est, &exts, ovr);
+	if (ret < 0)
+		return ret;
+
+	classid = nla_get_u32(tb[TCA_BPF_CLASSID]);
+
+	ret = is_bpf ? cls_bpf_prog_from_ops(tb, prog, classid) :
+		       cls_bpf_prog_from_efd(tb, prog, classid);
+	if (ret < 0) {
+		tcf_exts_destroy(&exts);
+		return ret;
+	}
+
 	tcf_bind_filter(tp, &prog->res, base);
 	tcf_exts_change(tp, &prog->exts, &exts);
 
 	return 0;
-errout_free:
-	kfree(bpf_ops);
-errout:
-	tcf_exts_destroy(&exts);
-	return ret;
 }
 
 static u32 cls_bpf_grab_new_handle(struct tcf_proto *tp,
@@ -290,10 +340,10 @@ static int cls_bpf_change(struct net *net, struct sk_buff *in_skb,
 	}
 
 	*arg = (unsigned long) prog;
+
 	return 0;
 errout:
 	kfree(prog);
-
 	return ret;
 }
 
@@ -301,7 +351,7 @@ static int cls_bpf_dump(struct net *net, struct tcf_proto *tp, unsigned long fh,
 			struct sk_buff *skb, struct tcmsg *tm)
 {
 	struct cls_bpf_prog *prog = (struct cls_bpf_prog *) fh;
-	struct nlattr *nest, *nla;
+	struct nlattr *nest;
 
 	if (prog == NULL)
 		return skb->len;
@@ -314,15 +364,23 @@ static int cls_bpf_dump(struct net *net, struct tcf_proto *tp, unsigned long fh,
 
 	if (nla_put_u32(skb, TCA_BPF_CLASSID, prog->res.classid))
 		goto nla_put_failure;
-	if (nla_put_u16(skb, TCA_BPF_OPS_LEN, prog->bpf_num_ops))
-		goto nla_put_failure;
 
-	nla = nla_reserve(skb, TCA_BPF_OPS, prog->bpf_num_ops *
-			  sizeof(struct sock_filter));
-	if (nla == NULL)
-		goto nla_put_failure;
+	if (cls_bpf_is_ebpf(prog)) {
+		if (nla_put_u32(skb, TCA_BPF_EFD, prog->bpf_fd))
+			goto nla_put_failure;
+	} else {
+		struct nlattr *nla;
+
+		if (nla_put_u16(skb, TCA_BPF_OPS_LEN, prog->bpf_num_ops))
+			goto nla_put_failure;
 
-	memcpy(nla_data(nla), prog->bpf_ops, nla_len(nla));
+		nla = nla_reserve(skb, TCA_BPF_OPS, prog->bpf_num_ops *
+				  sizeof(struct sock_filter));
+		if (nla == NULL)
+			goto nla_put_failure;
+
+		memcpy(nla_data(nla), prog->bpf_ops, nla_len(nla));
+	}
 
 	if (tcf_exts_dump(skb, &prog->exts) < 0)
 		goto nla_put_failure;
@@ -356,6 +414,37 @@ skip:
 	}
 }
 
+static const struct bpf_func_proto *bpf_cls_func_proto(enum bpf_func_id func_id)
+{
+	switch (func_id) {
+	default:
+		return NULL;
+	case BPF_FUNC_map_lookup_elem:
+		return &bpf_map_lookup_elem_proto;
+	case BPF_FUNC_map_update_elem:
+		return &bpf_map_update_elem_proto;
+	case BPF_FUNC_map_delete_elem:
+		return &bpf_map_delete_elem_proto;
+	}
+}
+
+static bool bpf_cls_valid_access(int off, int size, enum bpf_access_type type)
+{
+	/* TODO: skb fields cannot be accessed yet */
+	return false;
+}
+
+static const struct bpf_verifier_ops bpf_cls_vops = {
+	.get_func_proto		= bpf_cls_func_proto,
+	.is_valid_access	= bpf_cls_valid_access,
+};
+
+static struct bpf_prog_type_list bpf_cls_type = {
+	.ops = &bpf_cls_vops,
+	.type = BPF_PROG_TYPE_SCHED_CLS,
+	.owner = THIS_MODULE,
+};
+
 static struct tcf_proto_ops cls_bpf_ops __read_mostly = {
 	.kind		=	"bpf",
 	.owner		=	THIS_MODULE,
@@ -371,12 +460,23 @@ static struct tcf_proto_ops cls_bpf_ops __read_mostly = {
 
 static int __init cls_bpf_init_mod(void)
 {
-	return register_tcf_proto_ops(&cls_bpf_ops);
+	int ret;
+
+	ret = bpf_register_prog_type(&bpf_cls_type);
+	if (ret)
+		return ret;
+
+	ret = register_tcf_proto_ops(&cls_bpf_ops);
+	if (ret)
+		bpf_unregister_prog_type(&bpf_cls_type);
+
+	return ret;
 }
 
 static void __exit cls_bpf_exit_mod(void)
 {
 	unregister_tcf_proto_ops(&cls_bpf_ops);
+	bpf_unregister_prog_type(&bpf_cls_type);
 }
 
 module_init(cls_bpf_init_mod);
-- 
1.9.3


* Re: [PATCH net-next 1/7] ebpf: remove kernel test stubs
  2015-02-11  0:15 ` [PATCH net-next 1/7] ebpf: remove kernel test stubs Daniel Borkmann
@ 2015-02-11  0:42   ` Alexei Starovoitov
  0 siblings, 0 replies; 13+ messages in thread
From: Alexei Starovoitov @ 2015-02-11  0:42 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: Jiří Pírko, Network Development

On Tue, Feb 10, 2015 at 4:15 PM, Daniel Borkmann <daniel@iogearbox.net> wrote:
> Now that we have BPF_PROG_TYPE_SOCKET_FILTER up and running,
> we can remove the test stubs which were added to get the
> verifier suite up. We can just let the test cases probe under
> socket filter type instead. In the fill/spill test case, we
> cannot (yet) access fields from the context (skb), but we may
> adapt that test case in future.
>
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

Acked-by: Alexei Starovoitov <ast@plumgrid.com>

has been on my todo list for a while. Thanks a bunch!


* Re: [PATCH net-next 2/7] ebpf: constify various function pointer structs
  2015-02-11  0:15 ` [PATCH net-next 2/7] ebpf: constify various function pointer structs Daniel Borkmann
@ 2015-02-11  0:43   ` Alexei Starovoitov
  0 siblings, 0 replies; 13+ messages in thread
From: Alexei Starovoitov @ 2015-02-11  0:43 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: Jiří Pírko, Network Development

On Tue, Feb 10, 2015 at 4:15 PM, Daniel Borkmann <daniel@iogearbox.net> wrote:
> We can move bpf_map_ops and bpf_verifier_ops and other structs
> into RO section, bpf_map_type_list and bpf_prog_type_list into
> read mostly.
>
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

sure. makes sense.
Acked-by: Alexei Starovoitov <ast@plumgrid.com>


* Re: [PATCH net-next 3/7] ebpf: check first for MAXINSNS in bpf_prog_load
  2015-02-11  0:15 ` [PATCH net-next 3/7] ebpf: check first for MAXINSNS in bpf_prog_load Daniel Borkmann
@ 2015-02-11  1:21   ` Alexei Starovoitov
  2015-02-12 20:43     ` Daniel Borkmann
  0 siblings, 1 reply; 13+ messages in thread
From: Alexei Starovoitov @ 2015-02-11  1:21 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: Jiří Pírko, Network Development

On Tue, Feb 10, 2015 at 4:15 PM, Daniel Borkmann <daniel@iogearbox.net> wrote:
> Just minor ... before doing all the copying work, we may want
> to check for instruction count earlier. Also, we may want to
> warn the user in case we would otherwise need to truncate the
> license information.
>
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
> ---
>  kernel/bpf/syscall.c | 17 +++++++++--------
>  1 file changed, 9 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 536edc2..73b105c 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -473,25 +473,26 @@ static int bpf_prog_load(union bpf_attr *attr)
>  {
>         enum bpf_prog_type type = attr->prog_type;
>         struct bpf_prog *prog;
> -       int err;
>         char license[128];
>         bool is_gpl;
> +       int err;
>
>         if (CHECK_ATTR(BPF_PROG_LOAD))
>                 return -EINVAL;
> +       if (attr->insn_cnt >= BPF_MAXINSNS)
> +               return -EINVAL;
>
>         /* copy eBPF program license from user space */
> -       if (strncpy_from_user(license, u64_to_ptr(attr->license),
> -                             sizeof(license) - 1) < 0)
> -               return -EFAULT;
> -       license[sizeof(license) - 1] = 0;
> +       err = strncpy_from_user(license, u64_to_ptr(attr->license),
> +                               sizeof(license));
> +       if (err == sizeof(license))
> +               err = -ERANGE;

I think this error is misleading.
In the case of the license, we only care whether it's GPL or not.
We remember this boolean indicator for the life of the program;
we don't keep the license string itself.
So if the user specified 'my_ultra_long_proprietary_license',
that should be fine. The program should still be accepted
and marked as non-GPL.

> +       if (err < 0)
> +               return err;
>
>         /* eBPF programs must be GPL compatible to use GPL-ed functions */
>         is_gpl = license_is_gpl_compatible(license);
>
> -       if (attr->insn_cnt >= BPF_MAXINSNS)
> -               return -EINVAL;

Moving this check, I guess, is fine. No one should depend
on the order of errors.


* Re: [PATCH net-next 5/7] ebpf: export BPF_PSEUDO_MAP_FD to uapi
  2015-02-11  0:15 ` [PATCH net-next 5/7] ebpf: export BPF_PSEUDO_MAP_FD to uapi Daniel Borkmann
@ 2015-02-11  1:39   ` Alexei Starovoitov
  0 siblings, 0 replies; 13+ messages in thread
From: Alexei Starovoitov @ 2015-02-11  1:39 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: Jiří Pírko, Network Development

On Tue, Feb 10, 2015 at 4:15 PM, Daniel Borkmann <daniel@iogearbox.net> wrote:
> We need to export BPF_PSEUDO_MAP_FD to user space, as it's used in
> the ELF BPF loader where instructions are being loaded that need
> map fixups (relocations). An initial stage loads all maps into the
> kernel, and later on replaces related instructions in the eBPF blob
> with BPF_PSEUDO_MAP_FD as source register and the actual fd as
> immediate value. The kernel verifier recognizes this keyword and
> replaces the map fd with a real pointer internally.
>
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

oops. thanks. didn't realize that I forgot to add it to uapi.
Acked-by: Alexei Starovoitov <ast@plumgrid.com>


* Re: [PATCH net-next 3/7] ebpf: check first for MAXINSNS in bpf_prog_load
  2015-02-11  1:21   ` Alexei Starovoitov
@ 2015-02-12 20:43     ` Daniel Borkmann
  0 siblings, 0 replies; 13+ messages in thread
From: Daniel Borkmann @ 2015-02-12 20:43 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: Jiří Pírko, Network Development

On 02/11/2015 02:21 AM, Alexei Starovoitov wrote:
...
> So if user specified 'my_ultra_long_proprietery_license'
> that should be fine. The program should still be accepted
> and marked as non-gpl.

Yep, true. Most likely I'll drop this one for the non-RFC anyway,
as it's not of much value. ;)

