* [PATCH v10 net-next 0/3] filter: add Extended BPF interpreter and converter, seccomp
@ 2014-03-12 21:43 Alexei Starovoitov
  2014-03-12 21:43 ` [PATCH v10 net-next 1/3] filter: add Extended BPF interpreter and converter Alexei Starovoitov
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Alexei Starovoitov @ 2014-03-12 21:43 UTC (permalink / raw)
  To: David S. Miller
  Cc: Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt,
	Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross,
	Thomas Gleixner, Eric Dumazet, Linus Torvalds, Andrew Morton,
	Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg,
	Arjan van de Ven, Christoph Hellwig, Pavel Emelyanov,
	linux-kernel, netdev

Hi All,

V1 patches:
http://thread.gmane.org/gmane.linux.kernel/1605783
V2 patches:
http://thread.gmane.org/gmane.linux.kernel/1642325
V3 patches:
http://thread.gmane.org/gmane.linux.kernel/1656538

V4 summary:
- addressed Daniel's comments
- RFC for seccomp with extended BPF
- added extended BPF design doc

V5 summary:
- fixed commit one-liner, removed empty line
- added Hagen's ack

V6 summary:
- unrolled loop in populate_seccomp_data() to help gcc on arm
- removed empty line at the end of the file
- removed redundant (u32) cast in JSET
- fixed BPF_RVAL instead of BPF_SRC for BPF_RET in sk_convert_filter()
- updated commit log
- added Daniel's Reviewed-by
- added Kees's Reviewed-by

V7 summary:
1/3:
- addressed Dave's feedback regarding typecasting:
  added 'jited' flag to sk_filter and union for bpf_func/bpf_func_ext
- added a comment to sk_run_filter_ext() about ctx<->skb relation
- removed CPU-specific code from sk_run_filter() and sk_run_filter_ext();
  because of that the revised arm32 cache-hit bpf micro-bench numbers are
  slightly slower, but the seccomp and cache-miss arm32 numbers stayed the same
2/3 and 3/3: no changes

V8 summary:
1/3:
- fixed sk_get_filter() issue caught by Daniel:
  need to save the old filter, so it can be returned via sk_get_filter();
  its memory is counted against the socket optmem budget
- addressed Eric's feedback:
  removed 'notrace'
  replaced integer register constants and the stack size with #defines
- retested with my own bpf/ebpf testsuite, seccomp and Pavel's
  so_get_filter test from crtools/test/zdtm/live/static/
- trimmed cc list, since it looks too big
2/3 and 3/3: no changes

V9 summary:
1/3:
- addressed David's feedback:
- changed priority, so that bpf_jit_enable takes precedence over bpf_ext_enable
- made sk_run_filter_ext() static and private to filter.c
  and added 'ctx == seccomp' and 'ctx == skb' wrappers, so that the
  compiler can do 'ctx' type verification at the call site.
  The offending union in struct sk_filter now looks like:
  union {
    unsigned int (*bpf_func)(const struct sk_buff *skb,
                             const struct sock_filter *fp);
    unsigned int (*bpf_func_ext)(const struct sk_buff *skb, <<< was void* before
                                 const struct sock_filter_ext *fp);
  }
- kept 'unsigned jited:1', since that's my reading of 'bool vs bitfield' thread
2/3: call sk_run_filter_ext_seccomp(const struct seccomp_data*,...) instead of
     sk_run_filter_ext(void*,...) which is now private
3/3: no change

V10 summary:
1/3:
- addressed David's feedback:
  added conditional #define for bpf_jit_enable
  removed 64-bit requirement from XADD_DW ebpf insn
- silenced gcc warning in arch/arm/net/bpf_jit due to missing seccomp_data
- replaced stack[64] with stack[ARRAY_SIZE(stack)]
2/3 and 3/3: no changes

x86_64, i386 and arm32 look clean.

Thanks!

Alexei Starovoitov (3):
  filter: add Extended BPF interpreter and converter
  seccomp: convert seccomp to use extended BPF
  doc: filter: add Extended BPF documentation

 Documentation/networking/filter.txt |  181 ++++++++
 arch/arm/net/bpf_jit_32.c           |    3 +-
 arch/powerpc/net/bpf_jit_comp.c     |    3 +-
 arch/s390/net/bpf_jit_comp.c        |    3 +-
 arch/sparc/net/bpf_jit_comp.c       |    3 +-
 arch/x86/net/bpf_jit_comp.c         |    3 +-
 include/linux/filter.h              |   47 +-
 include/linux/netdevice.h           |    5 +
 include/linux/seccomp.h             |    1 -
 include/net/sock.h                  |    4 +-
 include/uapi/linux/filter.h         |   33 +-
 kernel/seccomp.c                    |  118 +++--
 net/core/filter.c                   |  857 ++++++++++++++++++++++++++++++++++-
 net/core/sysctl_net_core.c          |    7 +
 14 files changed, 1158 insertions(+), 110 deletions(-)

-- 
1.7.9.5


* [PATCH v10 net-next 1/3] filter: add Extended BPF interpreter and converter
  2014-03-12 21:43 [PATCH v10 net-next 0/3] filter: add Extended BPF interpreter and converter, seccomp Alexei Starovoitov
@ 2014-03-12 21:43 ` Alexei Starovoitov
  2014-03-14 12:58   ` Pablo Neira Ayuso
  2014-03-12 21:43 ` [PATCH v10 net-next 2/3] seccomp: convert seccomp to use extended BPF Alexei Starovoitov
  2014-03-12 21:43 ` [PATCH v10 net-next 3/3] doc: filter: add Extended BPF documentation Alexei Starovoitov
  2 siblings, 1 reply; 10+ messages in thread
From: Alexei Starovoitov @ 2014-03-12 21:43 UTC (permalink / raw)
  To: David S. Miller
  Cc: Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt,
	Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross,
	Thomas Gleixner, Eric Dumazet, Linus Torvalds, Andrew Morton,
	Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg,
	Arjan van de Ven, Christoph Hellwig, Pavel Emelyanov,
	linux-kernel, netdev

Extended BPF extends old BPF in the following ways:
- from 2 to 10 registers
  Original BPF has two registers (A and X) and hidden frame pointer.
  Extended BPF has ten registers and read-only frame pointer.
- from 32-bit registers to 64-bit registers
  semantics of old 32-bit ALU operations are preserved via 32-bit
  subregisters
- if (cond) jump_true; else jump_false;
  old BPF insns are replaced with:
  if (cond) jump_true; /* else fallthrough */
- adds signed > and >= insns
- 16 4-byte stack slots for register spill-fill replaced with
  up to 512 bytes of multi-use stack space
- introduces bpf_call insn and register passing convention for zero
  overhead calls from/to other kernel functions (not part of this patch)
- adds arithmetic right shift insn
- adds swab32/swab64 insns
- adds atomic_add insn
- old tax/txa insns are replaced with 'mov dst,src' insn

Extended BPF is designed to be JITed with a one-to-one mapping, which
allows GCC/LLVM backends to generate optimized BPF code that performs
almost as fast as natively compiled code.
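
For reference, each insn in the new format is a fixed 8-byte
'struct sock_filter_ext'; e.g. the old 'tax' insn (X = A) is converted
into a 64-bit register move, roughly encoded as below (a sketch using
the struct fields and opcode macros added by this patch):

  struct sock_filter_ext insn = {
          .code  = BPF_ALU64 | BPF_MOV | BPF_X, /* 64-bit mov dst,src */
          .a_reg = 7,   /* dest: r7, where the old X register lives */
          .x_reg = 6,   /* src:  r6, where the old A register lives */
          .off   = 0,
          .imm   = 0,
  };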

sk_convert_filter() remaps old-style insns into extended ones:
'sock_filter' instructions are remapped on the fly to
'sock_filter_ext' extended instructions when the sysctl
net.core.bpf_ext_enable is set to 1.
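
The converter is meant to be called in two passes, roughly like this
(a usage sketch matching the sk_convert_filter() comments further below;
error handling omitted):

  int new_len, err;
  struct sock_filter_ext *new_prog;

  /* 1st pass: only compute the length of the converted program */
  err = sk_convert_filter(old_prog, old_len, NULL, &new_len);

  /* allocate room for the extended insns */
  new_prog = kmalloc(new_len * sizeof(struct sock_filter_ext), GFP_KERNEL);

  /* 2nd pass: remap insns and fix up jump offsets */
  err = sk_convert_filter(old_prog, old_len, new_prog, &new_len);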

An old filter comes in through sk_attach_filter() or sk_unattached_filter_create():
 if (bpf_ext_enable && !bpf_jit_enable) {
    convert to new
    sk_chk_filter() - check old bpf
    use sk_run_filter_ext_skb() - new interpreter
 } else {
    sk_chk_filter() - check old bpf
    if (bpf_jit_enable)
        use old jit
    else
        use sk_run_filter() - old interpreter
 }

sk_run_filter_ext_skb() interpreter is noticeably faster
than sk_run_filter() for two reasons:

1. fall-through jumps
  Old BPF jump instructions are forced to take either the 'true' or the
  'false' branch, which causes a branch-miss penalty.
  Extended BPF jump instructions have one branch and fall through otherwise,
  which fits CPU branch-predictor logic better.
  'perf stat' shows a drastic difference in branch-misses.

2. jump-threaded implementation of the interpreter vs a switch statement
  Instead of a single tablejump at the top of the 'switch' statement, GCC
  generates multiple tablejump instructions, which helps the CPU branch
  predictor.
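
The difference in dispatch can be sketched like this (illustration only,
using GCC's computed goto; 'struct insn', OP_ADD and OP_RET are made-up
placeholders, the real jumptable lives in sk_run_filter_ext() below):

  static u32 run(const struct insn *insn)
  {
          u64 A = 0, X = 0;
          static const void *jumptable[] = {
                  [OP_ADD] = &&do_add,
                  [OP_RET] = &&do_ret,
          };
  #define CONT ({ insn++; goto *jumptable[insn->code]; })

          goto *jumptable[insn->code];    /* dispatch the 1st insn */
  do_add:
          A += X;
          CONT;   /* each handler ends with its own indirect jump */
  do_ret:
          return A;
  #undef CONT
  }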

Performance of two BPF filters generated by libpcap was measured
on x86_64, i386 and arm32.

fprog #1 is taken from Documentation/networking/filter.txt:
tcpdump -i eth0 port 22 -dd

fprog #2 is taken from 'man tcpdump':
tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) -
   ((tcp[12]&0xf0)>>2)) != 0)' -dd

Other libpcap programs have similar performance differences.
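
The '-dd' output of the commands above is an initializer for an array of
'struct sock_filter' that userspace attaches via SO_ATTACH_FILTER, e.g.
(a trivial accept-all filter shown as a placeholder for the real dumps;
'sock' is assumed to be an open socket fd):

  /* { code, jt, jf, k }; 0x06 is BPF_RET|BPF_K: accept up to 0xffff bytes */
  struct sock_filter code[] = {
          { 0x06, 0, 0, 0x0000ffff },
  };
  struct sock_fprog prog = {
          .len    = sizeof(code) / sizeof(code[0]),
          .filter = code,
  };

  setsockopt(sock, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog));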

Raw performance data from the BPF micro-benchmark:
SK_RUN_FILTER on the same SKB (cache-hit) or on 10k SKBs (cache-miss),
time in nsec per call; smaller is better
--x86_64--
            fprog #1  fprog #1   fprog #2  fprog #2
            cache-hit cache-miss cache-hit cache-miss
old BPF        90        101       192       202
ext BPF        31         71        47        97
old BPF jit    12         34        17        44
ext BPF jit   TBD

--i386--
            fprog #1  fprog #1   fprog #2  fprog #2
            cache-hit cache-miss cache-hit cache-miss
old BPF       107        136       227       252
ext BPF        40        119        69       172

--arm32--
            fprog #1  fprog #1   fprog #2  fprog #2
            cache-hit cache-miss cache-hit cache-miss
old BPF       202        300       475       540
ext BPF       180        270       330       470
old BPF jit    26        182        37       202
new BPF jit   TBD

Tested with the trinity BPF fuzzer

Future work:

0. add bpf/ebpf testsuite to tools/testing/selftests/net/bpf

1. add extended BPF JIT for x86_64

2. add inband old/new demux and extended BPF verifier, so that new programs
   can be loaded through old sk_attach_filter() and sk_unattached_filter_create()
   interfaces

3. systemtap-like tracing filters with extended BPF

4. OVS with extended BPF

5. nftables with extended BPF

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>
Reviewed-by: Daniel Borkmann <dborkman@redhat.com>
---
 arch/arm/net/bpf_jit_32.c       |    3 +-
 arch/powerpc/net/bpf_jit_comp.c |    3 +-
 arch/s390/net/bpf_jit_comp.c    |    3 +-
 arch/sparc/net/bpf_jit_comp.c   |    3 +-
 arch/x86/net/bpf_jit_comp.c     |    3 +-
 include/linux/filter.h          |   47 ++-
 include/linux/netdevice.h       |    5 +
 include/net/sock.h              |    4 +-
 include/uapi/linux/filter.h     |   33 +-
 net/core/filter.c               |  852 ++++++++++++++++++++++++++++++++++++++-
 net/core/sysctl_net_core.c      |    7 +
 11 files changed, 921 insertions(+), 42 deletions(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 271b5e971568..e72ff51f4561 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -925,6 +925,7 @@ void bpf_jit_compile(struct sk_filter *fp)
 		bpf_jit_dump(fp->len, alloc_size, 2, ctx.target);
 
 	fp->bpf_func = (void *)ctx.target;
+	fp->jited = 1;
 out:
 	kfree(ctx.offsets);
 	return;
@@ -932,7 +933,7 @@ out:
 
 void bpf_jit_free(struct sk_filter *fp)
 {
-	if (fp->bpf_func != sk_run_filter)
+	if (fp->jited)
 		module_free(NULL, fp->bpf_func);
 	kfree(fp);
 }
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 555034f8505e..c0c5fcb0736a 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -689,6 +689,7 @@ void bpf_jit_compile(struct sk_filter *fp)
 		((u64 *)image)[0] = (u64)code_base;
 		((u64 *)image)[1] = local_paca->kernel_toc;
 		fp->bpf_func = (void *)image;
+		fp->jited = 1;
 	}
 out:
 	kfree(addrs);
@@ -697,7 +698,7 @@ out:
 
 void bpf_jit_free(struct sk_filter *fp)
 {
-	if (fp->bpf_func != sk_run_filter)
+	if (fp->jited)
 		module_free(NULL, fp->bpf_func);
 	kfree(fp);
 }
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 708d60e40066..bf56fe51b5c1 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -877,6 +877,7 @@ void bpf_jit_compile(struct sk_filter *fp)
 	if (jit.start) {
 		set_memory_ro((unsigned long)header, header->pages);
 		fp->bpf_func = (void *) jit.start;
+		fp->jited = 1;
 	}
 out:
 	kfree(addrs);
@@ -887,7 +888,7 @@ void bpf_jit_free(struct sk_filter *fp)
 	unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;
 	struct bpf_binary_header *header = (void *)addr;
 
-	if (fp->bpf_func == sk_run_filter)
+	if (!fp->jited)
 		goto free_filter;
 	set_memory_rw(addr, header->pages);
 	module_free(NULL, header);
diff --git a/arch/sparc/net/bpf_jit_comp.c b/arch/sparc/net/bpf_jit_comp.c
index 01fe9946d388..8c01be66f67d 100644
--- a/arch/sparc/net/bpf_jit_comp.c
+++ b/arch/sparc/net/bpf_jit_comp.c
@@ -809,6 +809,7 @@ cond_branch:			f_offset = addrs[i + filter[i].jf];
 	if (image) {
 		bpf_flush_icache(image, image + proglen);
 		fp->bpf_func = (void *)image;
+		fp->jited = 1;
 	}
 out:
 	kfree(addrs);
@@ -817,7 +818,7 @@ out:
 
 void bpf_jit_free(struct sk_filter *fp)
 {
-	if (fp->bpf_func != sk_run_filter)
+	if (fp->jited)
 		module_free(NULL, fp->bpf_func);
 	kfree(fp);
 }
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 4ed75dd81d05..7fa182cd3973 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -772,6 +772,7 @@ cond_branch:			f_offset = addrs[i + filter[i].jf] - addrs[i];
 		bpf_flush_icache(header, image + proglen);
 		set_memory_ro((unsigned long)header, header->pages);
 		fp->bpf_func = (void *)image;
+		fp->jited = 1;
 	}
 out:
 	kfree(addrs);
@@ -791,7 +792,7 @@ static void bpf_jit_free_deferred(struct work_struct *work)
 
 void bpf_jit_free(struct sk_filter *fp)
 {
-	if (fp->bpf_func != sk_run_filter) {
+	if (fp->jited) {
 		INIT_WORK(&fp->work, bpf_jit_free_deferred);
 		schedule_work(&fp->work);
 	} else {
diff --git a/include/linux/filter.h b/include/linux/filter.h
index e568c8ef896b..6e6aab5e062b 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -25,20 +25,45 @@ struct sock;
 struct sk_filter
 {
 	atomic_t		refcnt;
-	unsigned int         	len;	/* Number of filter blocks */
+	/* len - number of insns in sock_filter program
+	 * len_ext - number of insns in socket_filter_ext program
+	 * jited - true if either original or extended program was JITed
+	 * orig_prog - original sock_filter program if not NULL
+	 */
+	unsigned int		len;
+	unsigned int		len_ext;
+	unsigned int		jited:1;
+	struct sock_filter	*orig_prog;
 	struct rcu_head		rcu;
-	unsigned int		(*bpf_func)(const struct sk_buff *skb,
-					    const struct sock_filter *filter);
+	union {
+		unsigned int (*bpf_func)(const struct sk_buff *skb,
+					 const struct sock_filter *fp);
+		unsigned int (*bpf_func_ext)(const struct sk_buff *skb,
+					     const struct sock_filter_ext *fp);
+	};
 	union {
 		struct sock_filter     	insns[0];
+		struct sock_filter_ext	insns_ext[0];
 		struct work_struct	work;
 	};
 };
 
-static inline unsigned int sk_filter_size(unsigned int proglen)
+/* Extended BPF has 10 general purpose 64-bit registers and stack frame */
+#define MAX_EBPF_REG 11
+
+/* Extended BPF program can access up to 512 bytes of stack space */
+#define MAX_EBPF_STACK 512
+
+static inline unsigned int sk_filter_size(unsigned int len,
+					  unsigned int len_ext)
 {
-	return max(sizeof(struct sk_filter),
-		   offsetof(struct sk_filter, insns[proglen]));
+	if (len_ext)
+		return max(sizeof(struct sk_filter),
+			   offsetof(struct sk_filter, insns_ext[len_ext])) +
+			len * sizeof(struct sock_filter);
+	else
+		return max(sizeof(struct sk_filter),
+			   offsetof(struct sk_filter, insns[len]));
 }
 
 extern int sk_filter(struct sock *sk, struct sk_buff *skb);
@@ -52,7 +77,15 @@ extern int sk_detach_filter(struct sock *sk);
 extern int sk_chk_filter(struct sock_filter *filter, unsigned int flen);
 extern int sk_get_filter(struct sock *sk, struct sock_filter __user *filter, unsigned len);
 extern void sk_decode_filter(struct sock_filter *filt, struct sock_filter *to);
+int sk_convert_filter(struct sock_filter *old_prog, int len,
+		      struct sock_filter_ext *new_prog,	int *p_new_len);
+unsigned int sk_run_filter_ext_skb(const struct sk_buff *ctx,
+				   const struct sock_filter_ext *insn);
+struct seccomp_data;
+unsigned int sk_run_filter_ext_seccomp(const struct seccomp_data *ctx,
+				       const struct sock_filter_ext *insn);
 
+#define SK_RUN_FILTER(FILTER, SKB) (*FILTER->bpf_func)(SKB, FILTER->insns)
 #ifdef CONFIG_BPF_JIT
 #include <stdarg.h>
 #include <linux/linkage.h>
@@ -70,7 +103,6 @@ static inline void bpf_jit_dump(unsigned int flen, unsigned int proglen,
 		print_hex_dump(KERN_ERR, "JIT code: ", DUMP_PREFIX_OFFSET,
 			       16, 1, image, proglen, false);
 }
-#define SK_RUN_FILTER(FILTER, SKB) (*FILTER->bpf_func)(SKB, FILTER->insns)
 #else
 #include <linux/slab.h>
 static inline void bpf_jit_compile(struct sk_filter *fp)
@@ -80,7 +112,6 @@ static inline void bpf_jit_free(struct sk_filter *fp)
 {
 	kfree(fp);
 }
-#define SK_RUN_FILTER(FILTER, SKB) sk_run_filter(SKB, FILTER->insns)
 #endif
 
 static inline int bpf_tell_extensions(void)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index b8d8c805fd75..55341329eefc 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3053,7 +3053,12 @@ void netdev_stats_to_stats64(struct rtnl_link_stats64 *stats64,
 extern int		netdev_max_backlog;
 extern int		netdev_tstamp_prequeue;
 extern int		weight_p;
+#ifdef CONFIG_BPF_JIT
 extern int		bpf_jit_enable;
+#else
+#define bpf_jit_enable	0
+#endif
+extern int		bpf_ext_enable;
 
 bool netdev_has_upper_dev(struct net_device *dev, struct net_device *upper_dev);
 struct net_device *netdev_all_upper_get_next_dev_rcu(struct net_device *dev,
diff --git a/include/net/sock.h b/include/net/sock.h
index 967856970a51..9262ad02f83b 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1633,14 +1633,14 @@ static inline void sk_filter_release(struct sk_filter *fp)
 
 static inline void sk_filter_uncharge(struct sock *sk, struct sk_filter *fp)
 {
-	atomic_sub(sk_filter_size(fp->len), &sk->sk_omem_alloc);
+	atomic_sub(sk_filter_size(fp->len, fp->len_ext), &sk->sk_omem_alloc);
 	sk_filter_release(fp);
 }
 
 static inline void sk_filter_charge(struct sock *sk, struct sk_filter *fp)
 {
 	atomic_inc(&fp->refcnt);
-	atomic_add(sk_filter_size(fp->len), &sk->sk_omem_alloc);
+	atomic_add(sk_filter_size(fp->len, fp->len_ext), &sk->sk_omem_alloc);
 }
 
 /*
diff --git a/include/uapi/linux/filter.h b/include/uapi/linux/filter.h
index 8eb9ccaa5b48..4e98fe16ba88 100644
--- a/include/uapi/linux/filter.h
+++ b/include/uapi/linux/filter.h
@@ -1,5 +1,6 @@
 /*
  * Linux Socket Filter Data Structures
+ * Extended BPF is Copyright (c) 2011-2014, PLUMgrid, http://plumgrid.com
  */
 
 #ifndef _UAPI__LINUX_FILTER_H__
@@ -19,7 +20,7 @@
  *	Try and keep these values and structures similar to BSD, especially
  *	the BPF code definitions which need to match so you can share filters
  */
- 
+
 struct sock_filter {	/* Filter block */
 	__u16	code;   /* Actual filter code */
 	__u8	jt;	/* Jump true */
@@ -27,6 +28,14 @@ struct sock_filter {	/* Filter block */
 	__u32	k;      /* Generic multiuse field */
 };
 
+struct sock_filter_ext {
+	__u8	code;    /* opcode */
+	__u8    a_reg:4; /* dest register */
+	__u8    x_reg:4; /* source register */
+	__s16	off;     /* signed offset */
+	__s32	imm;     /* signed immediate constant */
+};
+
 struct sock_fprog {	/* Required for SO_ATTACH_FILTER. */
 	unsigned short		len;	/* Number of filter blocks */
 	struct sock_filter __user *filter;
@@ -45,12 +54,14 @@ struct sock_fprog {	/* Required for SO_ATTACH_FILTER. */
 #define         BPF_JMP         0x05
 #define         BPF_RET         0x06
 #define         BPF_MISC        0x07
+#define         BPF_ALU64       0x07
 
 /* ld/ldx fields */
 #define BPF_SIZE(code)  ((code) & 0x18)
 #define         BPF_W           0x00
 #define         BPF_H           0x08
 #define         BPF_B           0x10
+#define         BPF_DW          0x18
 #define BPF_MODE(code)  ((code) & 0xe0)
 #define         BPF_IMM         0x00
 #define         BPF_ABS         0x20
@@ -58,6 +69,7 @@ struct sock_fprog {	/* Required for SO_ATTACH_FILTER. */
 #define         BPF_MEM         0x60
 #define         BPF_LEN         0x80
 #define         BPF_MSH         0xa0
+#define         BPF_XADD        0xc0 /* exclusive add */
 
 /* alu/jmp fields */
 #define BPF_OP(code)    ((code) & 0xf0)
@@ -68,16 +80,24 @@ struct sock_fprog {	/* Required for SO_ATTACH_FILTER. */
 #define         BPF_OR          0x40
 #define         BPF_AND         0x50
 #define         BPF_LSH         0x60
-#define         BPF_RSH         0x70
+#define         BPF_RSH         0x70 /* logical shift right */
 #define         BPF_NEG         0x80
 #define		BPF_MOD		0x90
 #define		BPF_XOR		0xa0
+#define		BPF_MOV		0xb0 /* mov reg to reg */
+#define		BPF_ARSH	0xc0 /* sign extending arithmetic shift right */
+#define		BPF_BSWAP32	0xd0 /* swap lower 4 bytes of 64-bit register */
+#define		BPF_BSWAP64	0xe0 /* swap all 8 bytes of 64-bit register */
 
 #define         BPF_JA          0x00
-#define         BPF_JEQ         0x10
-#define         BPF_JGT         0x20
-#define         BPF_JGE         0x30
-#define         BPF_JSET        0x40
+#define         BPF_JEQ         0x10 /* jump == */
+#define         BPF_JGT         0x20 /* GT is unsigned '>', JA in x86 */
+#define         BPF_JGE         0x30 /* GE is unsigned '>=', JAE in x86 */
+#define         BPF_JSET        0x40 /* if (A & X) */
+#define         BPF_JNE         0x50 /* jump != */
+#define         BPF_JSGT        0x60 /* SGT is signed '>', GT in x86 */
+#define         BPF_JSGE        0x70 /* SGE is signed '>=', GE in x86 */
+#define         BPF_CALL        0x80 /* function call */
 #define BPF_SRC(code)   ((code) & 0x08)
 #define         BPF_K           0x00
 #define         BPF_X           0x08
@@ -134,5 +154,4 @@ struct sock_fprog {	/* Required for SO_ATTACH_FILTER. */
 #define SKF_NET_OFF   (-0x100000)
 #define SKF_LL_OFF    (-0x200000)
 
-
 #endif /* _UAPI__LINUX_FILTER_H__ */
diff --git a/net/core/filter.c b/net/core/filter.c
index ad30d626a5bd..41775acbd69c 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -1,5 +1,6 @@
 /*
  * Linux Socket Filter - Kernel level socket filtering
+ * Extended BPF is Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
  *
  * Author:
  *     Jay Schulist <jschlst@samba.org>
@@ -40,6 +41,8 @@
 #include <linux/seccomp.h>
 #include <linux/if_vlan.h>
 
+int bpf_ext_enable __read_mostly;
+
 /* No hurry in this branch
  *
  * Exported for the bpf jit load helper.
@@ -134,11 +137,7 @@ unsigned int sk_run_filter(const struct sk_buff *skb,
 	 * Process array of filter instructions.
 	 */
 	for (;; fentry++) {
-#if defined(CONFIG_X86_32)
-#define	K (fentry->k)
-#else
 		const u32 K = fentry->k;
-#endif
 
 		switch (fentry->code) {
 		case BPF_S_ALU_ADD_X:
@@ -637,6 +636,9 @@ void sk_filter_release_rcu(struct rcu_head *rcu)
 {
 	struct sk_filter *fp = container_of(rcu, struct sk_filter, rcu);
 
+	if (fp->orig_prog)
+		/* original bpf program was stored by sk_convert_filter() */
+		kfree(fp->orig_prog);
 	bpf_jit_free(fp);
 }
 EXPORT_SYMBOL(sk_filter_release_rcu);
@@ -646,6 +648,9 @@ static int __sk_prepare_filter(struct sk_filter *fp)
 	int err;
 
 	fp->bpf_func = sk_run_filter;
+	fp->len_ext = 0;
+	fp->jited = 0;
+	fp->orig_prog = NULL;
 
 	err = sk_chk_filter(fp->insns, fp->len);
 	if (err)
@@ -655,6 +660,99 @@ static int __sk_prepare_filter(struct sk_filter *fp)
 	return 0;
 }
 
+static int sk_prepare_filter_ext(struct sk_filter **pfp,
+				 struct sock_fprog *fprog, struct sock *sk)
+{
+	unsigned int fsize = sizeof(struct sock_filter) * fprog->len;
+	struct sock_filter *old_prog;
+	unsigned int sk_fsize;
+	struct sk_filter *fp;
+	int new_len;
+	int err;
+
+	BUILD_BUG_ON(sizeof(struct sock_filter) !=
+		     sizeof(struct sock_filter_ext));
+
+	/* store old program into buffer:
+	 * sk_chk_filter() will remap opcodes
+	 * sk_get_filter() will return it back to the user
+	 */
+	if (sk)
+		old_prog = sock_kmalloc(sk, fsize, GFP_KERNEL);
+	else
+		old_prog = kmalloc(fsize, GFP_KERNEL);
+	if (!old_prog)
+		return -ENOMEM;
+
+	if (sk) {
+		if (copy_from_user(old_prog, fprog->filter, fsize)) {
+			err = -EFAULT;
+			goto free_prog;
+		}
+	} else {
+		memcpy(old_prog, fprog->filter, fsize);
+	}
+
+	/* calculate bpf_ext program length */
+	err = sk_convert_filter(fprog->filter, fprog->len, NULL, &new_len);
+	if (err)
+		goto free_prog;
+
+	sk_fsize = sk_filter_size(0, new_len);
+	/* allocate sk_filter to store bpf_ext program */
+	if (sk)
+		fp = sock_kmalloc(sk, sk_fsize, GFP_KERNEL);
+	else
+		fp = kmalloc(sk_fsize, GFP_KERNEL);
+	if (!fp) {
+		err = -ENOMEM;
+		goto free_prog;
+	}
+
+	/* remap sock_filter insns into sock_filter_ext insns */
+	err = sk_convert_filter(old_prog, fprog->len, fp->insns_ext, &new_len);
+	if (err)
+		/* 2nd sk_convert_filter() can fail only if it fails
+		 * to allocate memory, remapping must succeed
+		 */
+		goto free_fp;
+
+	/* now chk_filter can overwrite old_prog while checking */
+	err = sk_chk_filter(old_prog, fprog->len);
+	if (err)
+		goto free_fp;
+
+	if (!sk) {
+		/* discard old prog for unattached filters */
+		kfree(old_prog);
+		fp->orig_prog = NULL;
+	} else {
+		fp->orig_prog = old_prog;
+	}
+
+	atomic_set(&fp->refcnt, 1);
+	fp->len = fprog->len;
+	fp->len_ext = new_len;
+	fp->jited = 0;
+
+	/* sock_filter_ext insns must be executed by sk_run_filter_ext_skb */
+	fp->bpf_func_ext = sk_run_filter_ext_skb;
+
+	*pfp = fp;
+	return 0;
+free_fp:
+	if (sk)
+		sock_kfree_s(sk, fp, sk_fsize);
+	else
+		kfree(fp);
+free_prog:
+	if (sk)
+		sock_kfree_s(sk, old_prog, fsize);
+	else
+		kfree(old_prog);
+	return err;
+}
+
 /**
  *	sk_unattached_filter_create - create an unattached filter
  *	@fprog: the filter program
@@ -676,7 +774,10 @@ int sk_unattached_filter_create(struct sk_filter **pfp,
 	if (fprog->filter == NULL)
 		return -EINVAL;
 
-	fp = kmalloc(sk_filter_size(fprog->len), GFP_KERNEL);
+	if (bpf_ext_enable && !bpf_jit_enable)
+		return sk_prepare_filter_ext(pfp, fprog, NULL);
+
+	fp = kmalloc(sk_filter_size(fprog->len, 0), GFP_KERNEL);
 	if (!fp)
 		return -ENOMEM;
 	memcpy(fp->insns, fprog->filter, fsize);
@@ -716,7 +817,7 @@ int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk)
 {
 	struct sk_filter *fp, *old_fp;
 	unsigned int fsize = sizeof(struct sock_filter) * fprog->len;
-	unsigned int sk_fsize = sk_filter_size(fprog->len);
+	unsigned int sk_fsize = sk_filter_size(fprog->len, 0);
 	int err;
 
 	if (sock_flag(sk, SOCK_FILTER_LOCKED))
@@ -726,21 +827,27 @@ int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk)
 	if (fprog->filter == NULL)
 		return -EINVAL;
 
-	fp = sock_kmalloc(sk, sk_fsize, GFP_KERNEL);
-	if (!fp)
-		return -ENOMEM;
-	if (copy_from_user(fp->insns, fprog->filter, fsize)) {
-		sock_kfree_s(sk, fp, sk_fsize);
-		return -EFAULT;
-	}
+	if (bpf_ext_enable && !bpf_jit_enable) {
+		err = sk_prepare_filter_ext(&fp, fprog, sk);
+		if (err)
+			return err;
+	} else {
+		fp = sock_kmalloc(sk, sk_fsize, GFP_KERNEL);
+		if (!fp)
+			return -ENOMEM;
+		if (copy_from_user(fp->insns, fprog->filter, fsize)) {
+			sock_kfree_s(sk, fp, sk_fsize);
+			return -EFAULT;
+		}
 
-	atomic_set(&fp->refcnt, 1);
-	fp->len = fprog->len;
+		atomic_set(&fp->refcnt, 1);
+		fp->len = fprog->len;
 
-	err = __sk_prepare_filter(fp);
-	if (err) {
-		sk_filter_uncharge(sk, fp);
-		return err;
+		err = __sk_prepare_filter(fp);
+		if (err) {
+			sk_filter_uncharge(sk, fp);
+			return err;
+		}
 	}
 
 	old_fp = rcu_dereference_protected(sk->sk_filter,
@@ -853,6 +960,7 @@ void sk_decode_filter(struct sock_filter *filt, struct sock_filter *to)
 int sk_get_filter(struct sock *sk, struct sock_filter __user *ubuf, unsigned int len)
 {
 	struct sk_filter *filter;
+	struct sock_filter *fp;
 	int i, ret;
 
 	lock_sock(sk);
@@ -869,10 +977,14 @@ int sk_get_filter(struct sock *sk, struct sock_filter __user *ubuf, unsigned int
 		goto out;
 
 	ret = -EFAULT;
+	if (filter->orig_prog)
+		fp = filter->orig_prog;
+	else
+		fp = filter->insns;
 	for (i = 0; i < filter->len; i++) {
 		struct sock_filter fb;
 
-		sk_decode_filter(&filter->insns[i], &fb);
+		sk_decode_filter(&fp[i], &fb);
 		if (copy_to_user(&ubuf[i], &fb, sizeof(fb)))
 			goto out;
 	}
@@ -882,3 +994,703 @@ out:
 	release_sock(sk);
 	return ret;
 }
+
+/**
+ *	sk_convert_filter - convert filter program
+ *	@old_prog: the filter program
+ *	@len: the length of filter program
+ *	@new_prog: buffer where converted program will be stored
+ *	@p_new_len: pointer to store length of converted program
+ *
+ * remap 'sock_filter' style BPF instruction set to 'sock_filter_ext' style
+ *
+ * first, call sk_convert_filter(old_prog, len, NULL, &new_len) to calculate new
+ * program length in one pass
+ *
+ * then new_prog = kmalloc(sizeof(struct sock_filter_ext) * new_len);
+ *
+ * and call it again: sk_convert_filter(old_prog, len, new_prog, &new_len);
+ * to remap in two passes: 1st pass finds new jump offsets, 2nd pass remaps
+ *
+ * old BPF register A is mapped to EBPF register 6
+ * old BPF register X is mapped to EBPF register 7
+ * frame pointer is always register 10
+ * 'void *ctx' is stored in register 1
+ * for socket filters: ctx == 'struct sk_buff *'
+ * for seccomp: ctx == 'struct seccomp_data *'
+ */
+#define A_REG 6
+#define X_REG 7
+#define TMP_REG 2
+#define CTX_REG 1
+#define FP_REG 10
+int sk_convert_filter(struct sock_filter *old_prog, int len,
+		      struct sock_filter_ext *new_prog, int *p_new_len)
+{
+	struct sock_filter_ext *new_insn;
+	struct sock_filter *fp;
+	int *addrs = NULL;
+	int new_len = 0;
+	int pass = 0;
+	int tgt, i;
+	u8 bpf_src;
+
+	BUILD_BUG_ON(BPF_MEMWORDS * sizeof(u32) > MAX_EBPF_STACK);
+	BUILD_BUG_ON(FP_REG + 1 != MAX_EBPF_REG);
+
+	if (len <= 0 || len >= BPF_MAXINSNS)
+		return -EINVAL;
+
+	if (new_prog) {
+		addrs = kzalloc(len * sizeof(*addrs), GFP_KERNEL);
+		if (!addrs)
+			return -ENOMEM;
+	}
+
+do_pass:
+	new_insn = new_prog;
+	fp = old_prog;
+	for (i = 0; i < len; fp++, i++) {
+		struct sock_filter_ext tmp_insns[3] = {};
+		struct sock_filter_ext *insn = tmp_insns;
+
+		if (addrs)
+			addrs[i] = new_insn - new_prog;
+
+		switch (fp->code) {
+		/* all arithmetic insns and skb loads map as-is */
+		case BPF_ALU | BPF_ADD | BPF_X:
+		case BPF_ALU | BPF_ADD | BPF_K:
+		case BPF_ALU | BPF_SUB | BPF_X:
+		case BPF_ALU | BPF_SUB | BPF_K:
+		case BPF_ALU | BPF_AND | BPF_X:
+		case BPF_ALU | BPF_AND | BPF_K:
+		case BPF_ALU | BPF_OR | BPF_X:
+		case BPF_ALU | BPF_OR | BPF_K:
+		case BPF_ALU | BPF_LSH | BPF_X:
+		case BPF_ALU | BPF_LSH | BPF_K:
+		case BPF_ALU | BPF_RSH | BPF_X:
+		case BPF_ALU | BPF_RSH | BPF_K:
+		case BPF_ALU | BPF_XOR | BPF_X:
+		case BPF_ALU | BPF_XOR | BPF_K:
+		case BPF_ALU | BPF_MUL | BPF_X:
+		case BPF_ALU | BPF_MUL | BPF_K:
+		case BPF_ALU | BPF_DIV | BPF_X:
+		case BPF_ALU | BPF_DIV | BPF_K:
+		case BPF_ALU | BPF_MOD | BPF_X:
+		case BPF_ALU | BPF_MOD | BPF_K:
+		case BPF_ALU | BPF_NEG:
+		case BPF_LD | BPF_ABS | BPF_W:
+		case BPF_LD | BPF_ABS | BPF_H:
+		case BPF_LD | BPF_ABS | BPF_B:
+		case BPF_LD | BPF_IND | BPF_W:
+		case BPF_LD | BPF_IND | BPF_H:
+		case BPF_LD | BPF_IND | BPF_B:
+			insn->code = fp->code;
+			insn->a_reg = A_REG;
+			insn->x_reg = X_REG;
+			insn->imm = fp->k;
+			break;
+
+		/* jump opcodes map as-is, but offsets need adjustment */
+		case BPF_JMP | BPF_JA:
+			tgt = i + fp->k + 1;
+			insn->code = fp->code;
+#define EMIT_JMP \
+	do { \
+		if (tgt >= len || tgt < 0) \
+			goto err; \
+		insn->off = addrs ? addrs[tgt] - addrs[i] - 1 : 0; \
+		/* adjust pc relative offset for 2nd or 3rd insn */ \
+		insn->off -= insn - tmp_insns; \
+	} while (0)
+
+			EMIT_JMP;
+			break;
+
+		case BPF_JMP | BPF_JEQ | BPF_K:
+		case BPF_JMP | BPF_JEQ | BPF_X:
+		case BPF_JMP | BPF_JSET | BPF_K:
+		case BPF_JMP | BPF_JSET | BPF_X:
+		case BPF_JMP | BPF_JGT | BPF_K:
+		case BPF_JMP | BPF_JGT | BPF_X:
+		case BPF_JMP | BPF_JGE | BPF_K:
+		case BPF_JMP | BPF_JGE | BPF_X:
+			if (BPF_SRC(fp->code) == BPF_K &&
+			    (int)fp->k < 0) {
+				/* extended BPF immediates are signed,
+				 * zero extend immediate into tmp register
+				 * and use it in compare insn
+				 */
+				insn->code = BPF_ALU | BPF_MOV | BPF_K;
+				insn->a_reg = TMP_REG;
+				insn->imm = fp->k;
+				insn++;
+
+				insn->a_reg = A_REG;
+				insn->x_reg = TMP_REG;
+				bpf_src = BPF_X;
+			} else {
+				insn->a_reg = A_REG;
+				insn->x_reg = X_REG;
+				insn->imm = fp->k;
+				bpf_src = BPF_SRC(fp->code);
+			}
+			/* common case where 'jump_false' is next insn */
+			if (fp->jf == 0) {
+				insn->code = BPF_JMP | BPF_OP(fp->code) |
+					bpf_src;
+				tgt = i + fp->jt + 1;
+				EMIT_JMP;
+				break;
+			}
+			/* convert JEQ into JNE when 'jump_true' is next insn */
+			if (fp->jt == 0 && BPF_OP(fp->code) == BPF_JEQ) {
+				insn->code = BPF_JMP | BPF_JNE | bpf_src;
+				tgt = i + fp->jf + 1;
+				EMIT_JMP;
+				break;
+			}
+			/* other jumps are mapped into two insns: Jxx and JA */
+			tgt = i + fp->jt + 1;
+			insn->code = BPF_JMP | BPF_OP(fp->code) | bpf_src;
+			EMIT_JMP;
+
+			insn++;
+			insn->code = BPF_JMP | BPF_JA;
+			tgt = i + fp->jf + 1;
+			EMIT_JMP;
+			break;
+
+		/* ldxb 4*([14]&0xf) is remapped into 3 insns */
+		case BPF_LDX | BPF_MSH | BPF_B:
+			insn->code = BPF_LD | BPF_ABS | BPF_B;
+			insn->a_reg = X_REG;
+			insn->imm = fp->k;
+
+			insn++;
+			insn->code = BPF_ALU | BPF_AND | BPF_K;
+			insn->a_reg = X_REG;
+			insn->imm = 0xf;
+
+			insn++;
+			insn->code = BPF_ALU | BPF_LSH | BPF_K;
+			insn->a_reg = X_REG;
+			insn->imm = 2;
+			break;
+
+		/* RET_K, RET_A are remapped into 2 insns */
+		case BPF_RET | BPF_A:
+		case BPF_RET | BPF_K:
+			insn->code = BPF_ALU | BPF_MOV |
+				(BPF_RVAL(fp->code) == BPF_K ? BPF_K : BPF_X);
+			insn->a_reg = 0;
+			insn->x_reg = A_REG;
+			insn->imm = fp->k;
+
+			insn++;
+			insn->code = BPF_RET | BPF_K;
+			break;
+
+		/* store to stack */
+		case BPF_ST:
+		case BPF_STX:
+			insn->code = BPF_STX | BPF_MEM | BPF_W;
+			insn->a_reg = FP_REG;
+			insn->x_reg = fp->code == BPF_ST ? A_REG : X_REG;
+			insn->off = -(BPF_MEMWORDS - fp->k) * 4;
+			break;
+
+		/* load from stack */
+		case BPF_LD | BPF_MEM:
+		case BPF_LDX | BPF_MEM:
+			insn->code = BPF_LDX | BPF_MEM | BPF_W;
+			insn->a_reg =
+				BPF_CLASS(fp->code) == BPF_LD ? A_REG : X_REG;
+			insn->x_reg = FP_REG;
+			insn->off = -(BPF_MEMWORDS - fp->k) * 4;
+			break;
+
+		/* A = K or X = K */
+		case BPF_LD | BPF_IMM:
+		case BPF_LDX | BPF_IMM:
+			insn->code = BPF_ALU | BPF_MOV | BPF_K;
+			insn->a_reg =
+				BPF_CLASS(fp->code) == BPF_LD ? A_REG : X_REG;
+			insn->imm = fp->k;
+			break;
+
+		/* X = A */
+		case BPF_MISC | BPF_TAX:
+			insn->code = BPF_ALU64 | BPF_MOV | BPF_X;
+			insn->a_reg = X_REG;
+			insn->x_reg = A_REG;
+			break;
+
+		/* A = X */
+		case BPF_MISC | BPF_TXA:
+			insn->code = BPF_ALU64 | BPF_MOV | BPF_X;
+			insn->a_reg = A_REG;
+			insn->x_reg = X_REG;
+			break;
+
+		/* A = skb->len or X = skb->len */
+		case BPF_LD | BPF_W | BPF_LEN:
+		case BPF_LDX | BPF_W | BPF_LEN:
+			insn->code = BPF_LDX | BPF_MEM | BPF_W;
+			insn->a_reg =
+				BPF_CLASS(fp->code) == BPF_LD ? A_REG : X_REG;
+			insn->x_reg = CTX_REG;
+			insn->off = offsetof(struct sk_buff, len);
+			break;
+
+		/* access seccomp_data fields */
+		case BPF_LDX | BPF_ABS | BPF_W:
+			insn->code = BPF_LDX | BPF_MEM | BPF_W;
+			insn->a_reg = A_REG;
+			insn->x_reg = CTX_REG;
+			insn->off = fp->k;
+			break;
+
+		default:
+			/* pr_err("unknown opcode %02x\n", fp->code); */
+			goto err;
+		}
+
+		insn++;
+		if (new_prog) {
+			memcpy(new_insn, tmp_insns,
+			       sizeof(*insn) * (insn - tmp_insns));
+		}
+		new_insn += insn - tmp_insns;
+	}
+
+	if (!new_prog) {
+		/* only calculating new length */
+		*p_new_len = new_insn - new_prog;
+		return 0;
+	}
+
+	pass++;
+	if (new_len != new_insn - new_prog) {
+		new_len = new_insn - new_prog;
+		if (pass > 2)
+			goto err;
+		goto do_pass;
+	}
+	kfree(addrs);
+	if (*p_new_len != new_len)
+		/* inconsistent new program length */
+		pr_err("sk_convert_filter() usage error\n");
+	return 0;
+err:
+	kfree(addrs);
+	return -EINVAL;
+}
+
+/**
+ *	sk_run_filter_ext - run an extended filter
+ *	@ctx: buffer to run the filter on
+ *	@insn: filter to apply
+ *
+ * Decode and execute extended BPF instructions.
+ * @ctx is the data we are operating on.
+ * @filter is the array of filter instructions.
+ */
+static u32 sk_run_filter_ext(void *ctx, const struct sock_filter_ext *insn)
+{
+	u64 stack[MAX_EBPF_STACK / sizeof(u64)];
+	u64 regs[MAX_EBPF_REG];
+	void *ptr;
+	u64 tmp;
+	int off;
+
+#define K insn->imm
+#define A regs[insn->a_reg]
+#define X regs[insn->x_reg]
+
+#define CONT ({insn++; goto select_insn; })
+#define CONT_JMP ({insn++; goto select_insn; })
+/* some compilers may need help:
+ * #define CONT_JMP ({insn++; goto *jumptable[insn->code]; })
+ */
+
+	static const void *jumptable[256] = {
+		[0 ... 255] = &&default_label,
+#define DL(A, B, C) [A|B|C] = &&A##_##B##_##C,
+		DL(BPF_ALU, BPF_ADD, BPF_X)
+		DL(BPF_ALU, BPF_ADD, BPF_K)
+		DL(BPF_ALU, BPF_SUB, BPF_X)
+		DL(BPF_ALU, BPF_SUB, BPF_K)
+		DL(BPF_ALU, BPF_AND, BPF_X)
+		DL(BPF_ALU, BPF_AND, BPF_K)
+		DL(BPF_ALU, BPF_OR, BPF_X)
+		DL(BPF_ALU, BPF_OR, BPF_K)
+		DL(BPF_ALU, BPF_LSH, BPF_X)
+		DL(BPF_ALU, BPF_LSH, BPF_K)
+		DL(BPF_ALU, BPF_RSH, BPF_X)
+		DL(BPF_ALU, BPF_RSH, BPF_K)
+		DL(BPF_ALU, BPF_XOR, BPF_X)
+		DL(BPF_ALU, BPF_XOR, BPF_K)
+		DL(BPF_ALU, BPF_MUL, BPF_X)
+		DL(BPF_ALU, BPF_MUL, BPF_K)
+		DL(BPF_ALU, BPF_MOV, BPF_X)
+		DL(BPF_ALU, BPF_MOV, BPF_K)
+		DL(BPF_ALU, BPF_DIV, BPF_X)
+		DL(BPF_ALU, BPF_DIV, BPF_K)
+		DL(BPF_ALU, BPF_MOD, BPF_X)
+		DL(BPF_ALU, BPF_MOD, BPF_K)
+		DL(BPF_ALU64, BPF_ADD, BPF_X)
+		DL(BPF_ALU64, BPF_ADD, BPF_K)
+		DL(BPF_ALU64, BPF_SUB, BPF_X)
+		DL(BPF_ALU64, BPF_SUB, BPF_K)
+		DL(BPF_ALU64, BPF_AND, BPF_X)
+		DL(BPF_ALU64, BPF_AND, BPF_K)
+		DL(BPF_ALU64, BPF_OR, BPF_X)
+		DL(BPF_ALU64, BPF_OR, BPF_K)
+		DL(BPF_ALU64, BPF_LSH, BPF_X)
+		DL(BPF_ALU64, BPF_LSH, BPF_K)
+		DL(BPF_ALU64, BPF_RSH, BPF_X)
+		DL(BPF_ALU64, BPF_RSH, BPF_K)
+		DL(BPF_ALU64, BPF_XOR, BPF_X)
+		DL(BPF_ALU64, BPF_XOR, BPF_K)
+		DL(BPF_ALU64, BPF_MUL, BPF_X)
+		DL(BPF_ALU64, BPF_MUL, BPF_K)
+		DL(BPF_ALU64, BPF_MOV, BPF_X)
+		DL(BPF_ALU64, BPF_MOV, BPF_K)
+		DL(BPF_ALU64, BPF_ARSH, BPF_X)
+		DL(BPF_ALU64, BPF_ARSH, BPF_K)
+		DL(BPF_ALU64, BPF_DIV, BPF_X)
+		DL(BPF_ALU64, BPF_DIV, BPF_K)
+		DL(BPF_ALU64, BPF_MOD, BPF_X)
+		DL(BPF_ALU64, BPF_MOD, BPF_K)
+		DL(BPF_ALU64, BPF_BSWAP32, BPF_X)
+		DL(BPF_ALU64, BPF_BSWAP64, BPF_X)
+		DL(BPF_ALU, BPF_NEG, 0)
+		DL(BPF_JMP, BPF_CALL, 0)
+		DL(BPF_JMP, BPF_JA, 0)
+		DL(BPF_JMP, BPF_JEQ, BPF_X)
+		DL(BPF_JMP, BPF_JEQ, BPF_K)
+		DL(BPF_JMP, BPF_JNE, BPF_X)
+		DL(BPF_JMP, BPF_JNE, BPF_K)
+		DL(BPF_JMP, BPF_JGT, BPF_X)
+		DL(BPF_JMP, BPF_JGT, BPF_K)
+		DL(BPF_JMP, BPF_JGE, BPF_X)
+		DL(BPF_JMP, BPF_JGE, BPF_K)
+		DL(BPF_JMP, BPF_JSGT, BPF_X)
+		DL(BPF_JMP, BPF_JSGT, BPF_K)
+		DL(BPF_JMP, BPF_JSGE, BPF_X)
+		DL(BPF_JMP, BPF_JSGE, BPF_K)
+		DL(BPF_JMP, BPF_JSET, BPF_X)
+		DL(BPF_JMP, BPF_JSET, BPF_K)
+		DL(BPF_STX, BPF_MEM, BPF_B)
+		DL(BPF_STX, BPF_MEM, BPF_H)
+		DL(BPF_STX, BPF_MEM, BPF_W)
+		DL(BPF_STX, BPF_MEM, BPF_DW)
+		DL(BPF_ST, BPF_MEM, BPF_B)
+		DL(BPF_ST, BPF_MEM, BPF_H)
+		DL(BPF_ST, BPF_MEM, BPF_W)
+		DL(BPF_ST, BPF_MEM, BPF_DW)
+		DL(BPF_LDX, BPF_MEM, BPF_B)
+		DL(BPF_LDX, BPF_MEM, BPF_H)
+		DL(BPF_LDX, BPF_MEM, BPF_W)
+		DL(BPF_LDX, BPF_MEM, BPF_DW)
+		DL(BPF_STX, BPF_XADD, BPF_W)
+		DL(BPF_STX, BPF_XADD, BPF_DW)
+		DL(BPF_LD, BPF_ABS, BPF_W)
+		DL(BPF_LD, BPF_ABS, BPF_H)
+		DL(BPF_LD, BPF_ABS, BPF_B)
+		DL(BPF_LD, BPF_IND, BPF_W)
+		DL(BPF_LD, BPF_IND, BPF_H)
+		DL(BPF_LD, BPF_IND, BPF_B)
+		DL(BPF_RET, BPF_K, 0)
+#undef DL
+	};
+
+	regs[FP_REG] = (u64)(ulong)&stack[ARRAY_SIZE(stack)];
+	regs[CTX_REG] = (u64)(ulong)ctx;
+
+	/* execute 1st insn */
+select_insn:
+	goto *jumptable[insn->code];
+
+	/* ALU */
+#define ALU(OPCODE, OP) \
+	BPF_ALU64_##OPCODE##_BPF_X: \
+		A = A OP X; \
+		CONT; \
+	BPF_ALU_##OPCODE##_BPF_X: \
+		A = (u32)A OP (u32)X; \
+		CONT; \
+	BPF_ALU64_##OPCODE##_BPF_K: \
+		A = A OP K; \
+		CONT; \
+	BPF_ALU_##OPCODE##_BPF_K: \
+		A = (u32)A OP (u32)K; \
+		CONT;
+
+	ALU(BPF_ADD, +)
+	ALU(BPF_SUB, -)
+	ALU(BPF_AND, &)
+	ALU(BPF_OR, |)
+	ALU(BPF_LSH, <<)
+	ALU(BPF_RSH, >>)
+	ALU(BPF_XOR, ^)
+	ALU(BPF_MUL, *)
+#undef ALU
+
+BPF_ALU_BPF_NEG_0:
+	A = (u32)-A;
+	CONT;
+BPF_ALU_BPF_MOV_BPF_X:
+	A = (u32)X;
+	CONT;
+BPF_ALU_BPF_MOV_BPF_K:
+	A = (u32)K;
+	CONT;
+BPF_ALU64_BPF_MOV_BPF_X:
+	A = X;
+	CONT;
+BPF_ALU64_BPF_MOV_BPF_K:
+	A = K;
+	CONT;
+BPF_ALU64_BPF_ARSH_BPF_X:
+	(*(s64 *) &A) >>= X;
+	CONT;
+BPF_ALU64_BPF_ARSH_BPF_K:
+	(*(s64 *) &A) >>= K;
+	CONT;
+BPF_ALU64_BPF_MOD_BPF_X:
+	tmp = A;
+	if (X)
+		A = do_div(tmp, X);
+	CONT;
+BPF_ALU_BPF_MOD_BPF_X:
+	tmp = (u32)A;
+	if (X)
+		A = do_div(tmp, (u32)X);
+	CONT;
+BPF_ALU64_BPF_MOD_BPF_K:
+	tmp = A;
+	if (K)
+		A = do_div(tmp, K);
+	CONT;
+BPF_ALU_BPF_MOD_BPF_K:
+	tmp = (u32)A;
+	if (K)
+		A = do_div(tmp, (u32)K);
+	CONT;
+BPF_ALU64_BPF_DIV_BPF_X:
+	if (X)
+		do_div(A, X);
+	CONT;
+BPF_ALU_BPF_DIV_BPF_X:
+	tmp = (u32)A;
+	if (X)
+		do_div(tmp, (u32)X);
+	A = (u32)tmp;
+	CONT;
+BPF_ALU64_BPF_DIV_BPF_K:
+	if (K)
+		do_div(A, K);
+	CONT;
+BPF_ALU_BPF_DIV_BPF_K:
+	tmp = (u32)A;
+	if (K)
+		do_div(tmp, (u32)K);
+	A = (u32)tmp;
+	CONT;
+BPF_ALU64_BPF_BSWAP32_BPF_X:
+	A = swab32(A);
+	CONT;
+BPF_ALU64_BPF_BSWAP64_BPF_X:
+	A = swab64(A);
+	CONT;
+
+	/* CALL */
+BPF_JMP_BPF_CALL_0:
+	return 0; /* not implemented yet */
+
+	/* JMP */
+BPF_JMP_BPF_JA_0:
+	insn += insn->off;
+	CONT;
+BPF_JMP_BPF_JEQ_BPF_X:
+	if (A == X) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+BPF_JMP_BPF_JEQ_BPF_K:
+	if (A == K) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+BPF_JMP_BPF_JNE_BPF_X:
+	if (A != X) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+BPF_JMP_BPF_JNE_BPF_K:
+	if (A != K) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+BPF_JMP_BPF_JGT_BPF_X:
+	if (A > X) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+BPF_JMP_BPF_JGT_BPF_K:
+	if (A > K) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+BPF_JMP_BPF_JGE_BPF_X:
+	if (A >= X) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+BPF_JMP_BPF_JGE_BPF_K:
+	if (A >= K) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+BPF_JMP_BPF_JSGT_BPF_X:
+	if (((s64)A) > ((s64)X)) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+BPF_JMP_BPF_JSGT_BPF_K:
+	if (((s64)A) > ((s64)K)) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+BPF_JMP_BPF_JSGE_BPF_X:
+	if (((s64)A) >= ((s64)X)) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+BPF_JMP_BPF_JSGE_BPF_K:
+	if (((s64)A) >= ((s64)K)) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+BPF_JMP_BPF_JSET_BPF_X:
+	if (A & X) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+BPF_JMP_BPF_JSET_BPF_K:
+	if (A & K) {
+		insn += insn->off;
+		CONT_JMP;
+	}
+	CONT;
+
+	/* STX and ST and LDX*/
+#define LDST(SIZEOP, SIZE) \
+	BPF_STX_BPF_MEM_##SIZEOP: \
+		*(SIZE *)(ulong)(A + insn->off) = X; \
+		CONT; \
+	BPF_ST_BPF_MEM_##SIZEOP: \
+		*(SIZE *)(ulong)(A + insn->off) = K; \
+		CONT; \
+	BPF_LDX_BPF_MEM_##SIZEOP: \
+		A = *(SIZE *)(ulong)(X + insn->off); \
+		CONT;
+
+	LDST(BPF_B, u8)
+	LDST(BPF_H, u16)
+	LDST(BPF_W, u32)
+	LDST(BPF_DW, u64)
+#undef LDST
+
+BPF_STX_BPF_XADD_BPF_W: /* lock xadd *(u32 *)(A + insn->off) += X */
+	atomic_add((u32)X, (atomic_t *)(ulong)(A + insn->off));
+	CONT;
+BPF_STX_BPF_XADD_BPF_DW: /* lock xadd *(u64 *)(A + insn->off) += X */
+	atomic64_add((u64)X, (atomic64_t *)(ulong)(A + insn->off));
+	CONT;
+
+BPF_LD_BPF_ABS_BPF_W: /* A = *(u32 *)(SKB + K) */
+	off = K;
+load_word:
+	/* sk_convert_filter() and sk_chk_filter_ext() will make sure
+	 * that BPF_LD+BPF_ABS and BPF_LD+BPF_IND insns are only
+	 * appearing in the programs where ctx == skb
+	 */
+	ptr = load_pointer((struct sk_buff *)ctx, off, 4, &tmp);
+	if (likely(ptr != NULL)) {
+		A = get_unaligned_be32(ptr);
+		CONT;
+	}
+	return 0;
+
+BPF_LD_BPF_ABS_BPF_H: /* A = *(u16 *)(SKB + K) */
+	off = K;
+load_half:
+	ptr = load_pointer((struct sk_buff *)ctx, off, 2, &tmp);
+	if (likely(ptr != NULL)) {
+		A = get_unaligned_be16(ptr);
+		CONT;
+	}
+	return 0;
+
+BPF_LD_BPF_ABS_BPF_B: /* A = *(u8 *)(SKB + K) */
+	off = K;
+load_byte:
+	ptr = load_pointer((struct sk_buff *)ctx, off, 1, &tmp);
+	if (likely(ptr != NULL)) {
+		A = *(u8 *)ptr;
+		CONT;
+	}
+	return 0;
+
+BPF_LD_BPF_IND_BPF_W: /* A = *(u32 *)(SKB + X + K) */
+	off = K + X;
+	goto load_word;
+
+BPF_LD_BPF_IND_BPF_H: /* A = *(u16 *)(SKB + X + K) */
+	off = K + X;
+	goto load_half;
+
+BPF_LD_BPF_IND_BPF_B: /* A = *(u8 *)(SKB + X + K) */
+	off = K + X;
+	goto load_byte;
+
+	/* RET */
+BPF_RET_BPF_K_0:
+	return regs[0/* R0 */];
+
+default_label:
+	/* sk_chk_filter_ext() and sk_convert_filter() guarantee
+	 * that we never reach here
+	 */
+	WARN_RATELIMIT(1, "unknown opcode %02x\n", insn->code);
+	return 0;
+#undef CONT
+#undef A
+#undef X
+#undef K
+#undef LOAD_IMM
+}
+__attribute__((alias("sk_run_filter_ext")))
+u32 sk_run_filter_ext_seccomp(const struct seccomp_data *ctx,
+			      const struct sock_filter_ext *insn);
+__attribute__((alias("sk_run_filter_ext")))
+u32 sk_run_filter_ext_skb(const struct sk_buff *ctx,
+			  const struct sock_filter_ext *insn);
+EXPORT_SYMBOL(sk_run_filter_ext_skb);
diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
index cf9cd13509a7..e1b979312588 100644
--- a/net/core/sysctl_net_core.c
+++ b/net/core/sysctl_net_core.c
@@ -273,6 +273,13 @@ static struct ctl_table net_core_table[] = {
 	},
 #endif
 	{
+		.procname	= "bpf_ext_enable",
+		.data		= &bpf_ext_enable,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec
+	},
+	{
 		.procname	= "netdev_tstamp_prequeue",
 		.data		= &netdev_tstamp_prequeue,
 		.maxlen		= sizeof(int),
-- 
1.7.9.5


* [PATCH v10 net-next 2/3] seccomp: convert seccomp to use extended BPF
  2014-03-12 21:43 [PATCH v10 net-next 0/3] filter: add Extended BPF interpreter and converter, seccomp Alexei Starovoitov
  2014-03-12 21:43 ` [PATCH v10 net-next 1/3] filter: add Extended BPF interpreter and converter Alexei Starovoitov
@ 2014-03-12 21:43 ` Alexei Starovoitov
  2014-03-12 21:43 ` [PATCH v10 net-next 3/3] doc: filter: add Extended BPF documentation Alexei Starovoitov
  2 siblings, 0 replies; 10+ messages in thread
From: Alexei Starovoitov @ 2014-03-12 21:43 UTC (permalink / raw)
  To: David S. Miller
  Cc: Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt,
	Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross,
	Thomas Gleixner, Eric Dumazet, Linus Torvalds, Andrew Morton,
	Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg,
	Arjan van de Ven, Christoph Hellwig, Pavel Emelyanov,
	linux-kernel, netdev

Use sk_convert_filter() to convert seccomp BPF programs into extended BPF.

05-sim-long_jumps.c from libseccomp was used as a micro-benchmark:
  seccomp_rule_add_exact(ctx,...
  seccomp_rule_add_exact(ctx,...
  rc = seccomp_load(ctx);
  for (i = 0; i < 10000000; i++)
     syscall(199, 100);

'short filter' has 2 rules
'large filter' has 200 rules
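
For reference, the classic seccomp BPF programs being converted here are
installed from userspace roughly like this (a hand-written two-rule sketch
in the spirit of the 'short filter', not the libseccomp-generated code;
error handling trimmed):

  #include <errno.h>
  #include <stddef.h>
  #include <sys/prctl.h>
  #include <sys/syscall.h>
  #include <linux/filter.h>
  #include <linux/seccomp.h>

  static int install_filter(void)
  {
          struct sock_filter filter[] = {
                  /* A = seccomp_data->nr */
                  BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                           offsetof(struct seccomp_data, nr)),
                  /* allow getuid, everything else returns EPERM */
                  BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_getuid, 0, 1),
                  BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
                  BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | EPERM),
          };
          struct sock_fprog prog = {
                  .len    = sizeof(filter) / sizeof(filter[0]),
                  .filter = filter,
          };

          if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
                  return -1;
          return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
  }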

'short filter' performance is slightly better on x86_64, i386 and arm32;
'large filter' is much faster on x86_64 and i386, and shows no difference
on arm32.

--x86_64-- short filter
old BPF: 2.7 sec
 39.12%  bench  libc-2.15.so       [.] syscall
  8.10%  bench  [kernel.kallsyms]  [k] sk_run_filter
  6.31%  bench  [kernel.kallsyms]  [k] system_call
  5.59%  bench  [kernel.kallsyms]  [k] trace_hardirqs_on_caller
  4.37%  bench  [kernel.kallsyms]  [k] trace_hardirqs_off_caller
  3.70%  bench  [kernel.kallsyms]  [k] __secure_computing
  3.67%  bench  [kernel.kallsyms]  [k] lock_is_held
  3.03%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
new BPF: 2.58 sec
 42.05%  bench  libc-2.15.so       [.] syscall
  6.91%  bench  [kernel.kallsyms]  [k] system_call
  6.25%  bench  [kernel.kallsyms]  [k] trace_hardirqs_on_caller
  6.07%  bench  [kernel.kallsyms]  [k] __secure_computing
  5.08%  bench  [kernel.kallsyms]  [k] sk_run_filter_ext

--arm32-- short filter
old BPF: 4.0 sec
 39.92%  bench  [kernel.kallsyms]  [k] vector_swi
 16.60%  bench  [kernel.kallsyms]  [k] sk_run_filter
 14.66%  bench  libc-2.17.so       [.] syscall
  5.42%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
  5.10%  bench  [kernel.kallsyms]  [k] __secure_computing
new BPF: 3.7 sec
 35.93%  bench  [kernel.kallsyms]  [k] vector_swi
 21.89%  bench  libc-2.17.so       [.] syscall
 13.45%  bench  [kernel.kallsyms]  [k] sk_run_filter_ext
  6.25%  bench  [kernel.kallsyms]  [k] __secure_computing
  3.96%  bench  [kernel.kallsyms]  [k] syscall_trace_exit

--x86_64-- large filter
old BPF: 8.6 seconds
    73.38%    bench  [kernel.kallsyms]  [k] sk_run_filter
    10.70%    bench  libc-2.15.so       [.] syscall
     5.09%    bench  [kernel.kallsyms]  [k] seccomp_bpf_load
     1.97%    bench  [kernel.kallsyms]  [k] system_call
ext BPF: 5.7 seconds
    66.20%    bench  [kernel.kallsyms]  [k] sk_run_filter_ext
    16.75%    bench  libc-2.15.so       [.] syscall
     3.31%    bench  [kernel.kallsyms]  [k] system_call
     2.88%    bench  [kernel.kallsyms]  [k] __secure_computing

--i386-- large filter
old BPF: 5.4 sec
ext BPF: 3.8 sec

--arm32-- large filter
old BPF: 13.5 sec
 73.88%  bench  [kernel.kallsyms]  [k] sk_run_filter
 10.29%  bench  [kernel.kallsyms]  [k] vector_swi
  6.46%  bench  libc-2.17.so       [.] syscall
  2.94%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
  1.19%  bench  [kernel.kallsyms]  [k] __secure_computing
  0.87%  bench  [kernel.kallsyms]  [k] sys_getuid
new BPF: 13.5 sec
 76.08%  bench  [kernel.kallsyms]  [k] sk_run_filter_ext
 10.98%  bench  [kernel.kallsyms]  [k] vector_swi
  5.87%  bench  libc-2.17.so       [.] syscall
  1.77%  bench  [kernel.kallsyms]  [k] __secure_computing
  0.93%  bench  [kernel.kallsyms]  [k] sys_getuid

BPF filters generated by seccomp are very branchy, so ext BPF
performance is better than old BPF.

Performance gains will be even higher when extended BPF JIT
is committed.

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
 include/linux/seccomp.h |    1 -
 kernel/seccomp.c        |  118 ++++++++++++++++++++++-------------------------
 net/core/filter.c       |    5 --
 3 files changed, 56 insertions(+), 68 deletions(-)

diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index 6f19cfd1840e..4054b0994071 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -76,7 +76,6 @@ static inline int seccomp_mode(struct seccomp *s)
 #ifdef CONFIG_SECCOMP_FILTER
 extern void put_seccomp_filter(struct task_struct *tsk);
 extern void get_seccomp_filter(struct task_struct *tsk);
-extern u32 seccomp_bpf_load(int off);
 #else  /* CONFIG_SECCOMP_FILTER */
 static inline void put_seccomp_filter(struct task_struct *tsk)
 {
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index b7a10048a32c..9bd265eaad05 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -55,60 +55,31 @@ struct seccomp_filter {
 	atomic_t usage;
 	struct seccomp_filter *prev;
 	unsigned short len;  /* Instruction count */
-	struct sock_filter insns[];
+	struct sock_filter_ext insns[];
 };
 
 /* Limit any path through the tree to 256KB worth of instructions. */
 #define MAX_INSNS_PER_PATH ((1 << 18) / sizeof(struct sock_filter))
 
-/**
- * get_u32 - returns a u32 offset into data
- * @data: a unsigned 64 bit value
- * @index: 0 or 1 to return the first or second 32-bits
- *
- * This inline exists to hide the length of unsigned long.  If a 32-bit
- * unsigned long is passed in, it will be extended and the top 32-bits will be
- * 0. If it is a 64-bit unsigned long, then whatever data is resident will be
- * properly returned.
- *
+/*
  * Endianness is explicitly ignored and left for BPF program authors to manage
  * as per the specific architecture.
  */
-static inline u32 get_u32(u64 data, int index)
+static void populate_seccomp_data(struct seccomp_data *sd)
 {
-	return ((u32 *)&data)[index];
-}
-
-/* Helper for bpf_load below. */
-#define BPF_DATA(_name) offsetof(struct seccomp_data, _name)
-/**
- * bpf_load: checks and returns a pointer to the requested offset
- * @off: offset into struct seccomp_data to load from
- *
- * Returns the requested 32-bits of data.
- * seccomp_check_filter() should assure that @off is 32-bit aligned
- * and not out of bounds.  Failure to do so is a BUG.
- */
-u32 seccomp_bpf_load(int off)
-{
-	struct pt_regs *regs = task_pt_regs(current);
-	if (off == BPF_DATA(nr))
-		return syscall_get_nr(current, regs);
-	if (off == BPF_DATA(arch))
-		return syscall_get_arch(current, regs);
-	if (off >= BPF_DATA(args[0]) && off < BPF_DATA(args[6])) {
-		unsigned long value;
-		int arg = (off - BPF_DATA(args[0])) / sizeof(u64);
-		int index = !!(off % sizeof(u64));
-		syscall_get_arguments(current, regs, arg, 1, &value);
-		return get_u32(value, index);
-	}
-	if (off == BPF_DATA(instruction_pointer))
-		return get_u32(KSTK_EIP(current), 0);
-	if (off == BPF_DATA(instruction_pointer) + sizeof(u32))
-		return get_u32(KSTK_EIP(current), 1);
-	/* seccomp_check_filter should make this impossible. */
-	BUG();
+	struct task_struct *task = current;
+	struct pt_regs *regs = task_pt_regs(task);
+
+	sd->nr = syscall_get_nr(task, regs);
+	sd->arch = syscall_get_arch(task, regs);
+	/* unroll syscall_get_args to help gcc on arm */
+	syscall_get_arguments(task, regs, 0, 1, (unsigned long *)&sd->args[0]);
+	syscall_get_arguments(task, regs, 1, 1, (unsigned long *)&sd->args[1]);
+	syscall_get_arguments(task, regs, 2, 1, (unsigned long *)&sd->args[2]);
+	syscall_get_arguments(task, regs, 3, 1, (unsigned long *)&sd->args[3]);
+	syscall_get_arguments(task, regs, 4, 1, (unsigned long *)&sd->args[4]);
+	syscall_get_arguments(task, regs, 5, 1, (unsigned long *)&sd->args[5]);
+	sd->instruction_pointer = KSTK_EIP(task);
 }
 
 /**
@@ -133,17 +104,17 @@ static int seccomp_check_filter(struct sock_filter *filter, unsigned int flen)
 
 		switch (code) {
 		case BPF_S_LD_W_ABS:
-			ftest->code = BPF_S_ANC_SECCOMP_LD_W;
+			ftest->code = BPF_LDX | BPF_W | BPF_ABS;
 			/* 32-bit aligned and not out of bounds. */
 			if (k >= sizeof(struct seccomp_data) || k & 3)
 				return -EINVAL;
 			continue;
 		case BPF_S_LD_W_LEN:
-			ftest->code = BPF_S_LD_IMM;
+			ftest->code = BPF_LD | BPF_IMM;
 			ftest->k = sizeof(struct seccomp_data);
 			continue;
 		case BPF_S_LDX_W_LEN:
-			ftest->code = BPF_S_LDX_IMM;
+			ftest->code = BPF_LDX | BPF_IMM;
 			ftest->k = sizeof(struct seccomp_data);
 			continue;
 		/* Explicitly include allowed calls. */
@@ -185,6 +156,7 @@ static int seccomp_check_filter(struct sock_filter *filter, unsigned int flen)
 		case BPF_S_JMP_JGT_X:
 		case BPF_S_JMP_JSET_K:
 		case BPF_S_JMP_JSET_X:
+			sk_decode_filter(ftest, ftest);
 			continue;
 		default:
 			return -EINVAL;
@@ -202,18 +174,21 @@ static int seccomp_check_filter(struct sock_filter *filter, unsigned int flen)
 static u32 seccomp_run_filters(int syscall)
 {
 	struct seccomp_filter *f;
+	struct seccomp_data sd;
 	u32 ret = SECCOMP_RET_ALLOW;
 
 	/* Ensure unexpected behavior doesn't result in failing open. */
 	if (WARN_ON(current->seccomp.filter == NULL))
 		return SECCOMP_RET_KILL;
 
+	populate_seccomp_data(&sd);
+
 	/*
 	 * All filters in the list are evaluated and the lowest BPF return
 	 * value always takes priority (ignoring the DATA).
 	 */
 	for (f = current->seccomp.filter; f; f = f->prev) {
-		u32 cur_ret = sk_run_filter(NULL, f->insns);
+		u32 cur_ret = sk_run_filter_ext_seccomp(&sd, f->insns);
 		if ((cur_ret & SECCOMP_RET_ACTION) < (ret & SECCOMP_RET_ACTION))
 			ret = cur_ret;
 	}
@@ -231,6 +206,8 @@ static long seccomp_attach_filter(struct sock_fprog *fprog)
 	struct seccomp_filter *filter;
 	unsigned long fp_size = fprog->len * sizeof(struct sock_filter);
 	unsigned long total_insns = fprog->len;
+	struct sock_filter *fp;
+	int new_len;
 	long ret;
 
 	if (fprog->len == 0 || fprog->len > BPF_MAXINSNS)
@@ -252,28 +229,42 @@ static long seccomp_attach_filter(struct sock_fprog *fprog)
 				     CAP_SYS_ADMIN) != 0)
 		return -EACCES;
 
-	/* Allocate a new seccomp_filter */
-	filter = kzalloc(sizeof(struct seccomp_filter) + fp_size,
-			 GFP_KERNEL|__GFP_NOWARN);
-	if (!filter)
+	fp = kzalloc(fp_size, GFP_KERNEL|__GFP_NOWARN);
+	if (!fp)
 		return -ENOMEM;
-	atomic_set(&filter->usage, 1);
-	filter->len = fprog->len;
 
 	/* Copy the instructions from fprog. */
 	ret = -EFAULT;
-	if (copy_from_user(filter->insns, fprog->filter, fp_size))
-		goto fail;
+	if (copy_from_user(fp, fprog->filter, fp_size))
+		goto free_prog;
 
 	/* Check and rewrite the fprog via the skb checker */
-	ret = sk_chk_filter(filter->insns, filter->len);
+	ret = sk_chk_filter(fp, fprog->len);
 	if (ret)
-		goto fail;
+		goto free_prog;
 
 	/* Check and rewrite the fprog for seccomp use */
-	ret = seccomp_check_filter(filter->insns, filter->len);
+	ret = seccomp_check_filter(fp, fprog->len);
 	if (ret)
-		goto fail;
+		goto free_prog;
+
+	/* convert 'sock_filter' insns to 'sock_filter_ext' insns */
+	ret = sk_convert_filter(fp, fprog->len, NULL, &new_len);
+	if (ret)
+		goto free_prog;
+
+	/* Allocate a new seccomp_filter */
+	filter = kzalloc(sizeof(struct seccomp_filter) +
+			 sizeof(struct sock_filter_ext) * new_len,
+			 GFP_KERNEL|__GFP_NOWARN);
+	if (!filter)
+		goto free_prog;
+
+	ret = sk_convert_filter(fp, fprog->len, filter->insns, &new_len);
+	if (ret)
+		goto free_filter;
+	atomic_set(&filter->usage, 1);
+	filter->len = new_len;
 
 	/*
 	 * If there is an existing filter, make it the prev and don't drop its
@@ -282,8 +273,11 @@ static long seccomp_attach_filter(struct sock_fprog *fprog)
 	filter->prev = current->seccomp.filter;
 	current->seccomp.filter = filter;
 	return 0;
-fail:
+
+free_filter:
 	kfree(filter);
+free_prog:
+	kfree(fp);
 	return ret;
 }
 
diff --git a/net/core/filter.c b/net/core/filter.c
index 41775acbd69c..0aac018f4329 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -384,11 +384,6 @@ load_b:
 				A = 0;
 			continue;
 		}
-#ifdef CONFIG_SECCOMP_FILTER
-		case BPF_S_ANC_SECCOMP_LD_W:
-			A = seccomp_bpf_load(fentry->k);
-			continue;
-#endif
 		default:
 			WARN_RATELIMIT(1, "Unknown code:%u jt:%u tf:%u k:%u\n",
 				       fentry->code, fentry->jt,
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v10 net-next 3/3] doc: filter: add Extended BPF documentation
  2014-03-12 21:43 [PATCH v10 net-next 0/3] filter: add Extended BPF interpreter and converter, seccomp Alexei Starovoitov
  2014-03-12 21:43 ` [PATCH v10 net-next 1/3] filter: add Extended BPF interpreter and converter Alexei Starovoitov
  2014-03-12 21:43 ` [PATCH v10 net-next 2/3] seccomp: convert seccomp to use extended BPF Alexei Starovoitov
@ 2014-03-12 21:43 ` Alexei Starovoitov
  2 siblings, 0 replies; 10+ messages in thread
From: Alexei Starovoitov @ 2014-03-12 21:43 UTC (permalink / raw)
  To: David S. Miller
  Cc: Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt,
	Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross,
	Thomas Gleixner, Eric Dumazet, Linus Torvalds, Andrew Morton,
	Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg,
	Arjan van de Ven, Christoph Hellwig, Pavel Emelyanov,
	linux-kernel, netdev

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Reviewed-by: Daniel Borkmann <dborkman@redhat.com>
---
 Documentation/networking/filter.txt |  181 +++++++++++++++++++++++++++++++++++
 1 file changed, 181 insertions(+)

diff --git a/Documentation/networking/filter.txt b/Documentation/networking/filter.txt
index a06b48d2f5cc..6a0e29583a30 100644
--- a/Documentation/networking/filter.txt
+++ b/Documentation/networking/filter.txt
@@ -546,6 +546,186 @@ ffffffffa0069c8f + <x>:
 For BPF JIT developers, bpf_jit_disasm, bpf_asm and bpf_dbg provides a useful
 toolchain for developing and testing the kernel's JIT compiler.
 
+Extended BPF
+------------
+Extended BPF extends BPF in the following ways:
+- from 2 to 10 registers
+  Original BPF has two registers (A and X) and hidden frame pointer.
+  Extended BPF has ten registers and read-only frame pointer.
+- from 32-bit registers to 64-bit registers
+  semantics of old 32-bit ALU operations are preserved via 32-bit
+  subregisters
+- if (cond) jump_true; else jump_false;
+  old BPF insns are replaced with:
+  if (cond) jump_true; /* else fallthrough */
+- adds signed > and >= insns
+- 16 4-byte stack slots for register spill-fill replaced with
+  up to 512 bytes of multi-use stack space
+- introduces bpf_call insn and register passing convention for zero
+  overhead calls from/to other kernel functions (not part of this patch)
+- adds arithmetic right shift insn
+- adds swab32/swab64 insns
+- adds atomic_add insn
+- old tax/txa insns are replaced with 'mov dst,src' insn
+
+Extended BPF is designed to be JITed with a one-to-one mapping, which
+allows GCC/LLVM compilers to generate optimized BPF code that performs
+almost as fast as natively compiled code.
+
+The sysctl net.core.bpf_ext_enable=1
+controls whether filters attached to sockets are automatically
+converted to extended BPF.
+
+A BPF program is a safe, dynamically loadable program that can call a fixed
+set of kernel functions and takes a pointer to data as input, where the data
+is an skb, seccomp_data, kprobe function arguments, and so on.
+
+The extended instruction set was designed with these goals:
+- write programs in restricted C and compile into BPF with GCC/LLVM
+- map just-in-time to modern 64-bit CPUs with minimal performance overhead
+  over two steps: C -> BPF -> native code
+- guarantee termination and safety of a BPF program in the kernel
+  with a simple algorithm
+
+The GCC/LLVM-bpf backend is optional.
+Extended BPF can be coded with macros from filter.h just like original BPF,
+though the same filter written in C is easier to understand.
+sk_convert_filter() remaps original BPF insns into extended ones.
+
+Minimal performance overhead is achieved by having a one-to-one mapping
+between BPF insns and native insns, and a one-to-one mapping between BPF
+registers and native registers on 64-bit CPUs.
+
+Extended BPF allows forward and backward jumps for two reasons:
+to reduce the branch mispredict penalty the compiler moves cold basic blocks
+out of the fall-through path, and to reduce code duplication that would be
+hard to avoid if only forward jumps were available.
+To guarantee termination, a simple non-recursive depth-first search verifies
+that there are no back-edges (no loops in the program), that the program is
+a DAG with its root at the first insn, that all branches end at the last RET
+insn, and that all instructions are reachable.
+Original BPF actually allows unreachable insns. Though that is safe, it will
+be fixed when extended BPF replaces BPF completely.
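+
+As an illustration of the idea only (a generic sketch, not the kernel's
+implementation; MAX_INSNS and succ() are hypothetical helpers, and the
+reachability/RET checks are omitted), a non-recursive DFS can detect
+back-edges by tracking three states per insn:
+
+  enum { UNVISITED, DISCOVERED, EXPLORED };
+
+  /* succ(insn, i) would return the i-th successor of insn or -1 if none;
+   * a conditional jump has two successors, other insns have at most one.
+   */
+  static bool has_back_edge(void)
+  {
+          int state[MAX_INSNS] = { UNVISITED };
+          int stack[MAX_INSNS], top = 0;
+
+          stack[top++] = 0;
+          state[0] = DISCOVERED;
+          while (top) {
+                  int insn = stack[top - 1], i, next, pushed = 0;
+
+                  for (i = 0; i < 2; i++) {
+                          next = succ(insn, i);
+                          if (next < 0 || state[next] == EXPLORED)
+                                  continue;
+                          if (state[next] == DISCOVERED)
+                                  return true;   /* back-edge: loop found */
+                          state[next] = DISCOVERED;
+                          stack[top++] = next;
+                          pushed = 1;
+                          break;
+                  }
+                  if (!pushed) {
+                          state[insn] = EXPLORED;
+                          top--;
+                  }
+          }
+          return false;
+  }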
+
+Original BPF has two registers (A and X) and hidden frame pointer.
+Extended BPF has ten registers and read-only frame pointer.
+Since 64-bit CPUs pass arguments to functions via registers, the number of
+args passed from a BPF program to an in-kernel function is restricted to 5,
+and one register is used to accept the return value from an in-kernel
+function.
+x86_64 passes the first 6 arguments in registers.
+aarch64/sparcv9/mips64 have 7-8 registers for arguments.
+x86_64 has 6 callee-saved registers.
+aarch64/sparcv9/mips64 have 11 or more callee-saved registers.
+
+Therefore the extended BPF calling convention is defined as:
+R0 - return value from in-kernel function
+R1-R5 - arguments from BPF program to in-kernel function
+R6-R9 - callee saved registers that in-kernel function will preserve
+R10 - read-only frame pointer to access stack
+
+so that all BPF registers map one-to-one to HW registers on x86_64, aarch64,
+etc., and the BPF calling convention maps directly to the ABIs used by the
+kernel on 64-bit architectures.
+On 32-bit architectures a JIT may translate programs that use only 32-bit
+arithmetic and let more complex programs be interpreted.
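+
+To illustrate the 64-bit case, one possible one-to-one mapping onto x86_64
+registers (an assumption about how a JIT might do it, not part of this
+patch) would be:
+
+  R0 -> rax,  R1 -> rdi,  R2 -> rsi,  R3 -> rdx,  R4 -> rcx,  R5 -> r8,
+  R6 -> rbx,  R7 -> r13,  R8 -> r14,  R9 -> r15,  R10 -> rbp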
+
+R0-R5 are scratch registers and a BPF program needs to spill/fill them if
+necessary across calls.
+Note that there is only one BPF program (== one BPF function) and it cannot
+call other BPF functions. It can only call predefined in-kernel functions.
+
+All BPF registers are 64-bit with a 32-bit lower subregister that
+zero-extends into 64 bits when written to. That behavior maps directly to
+the x86_64 and arm64 subregister definition, but makes other JITs more
+difficult.
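+
+For example (illustrative pseudo-notation for the semantics, not an insn
+encoding):
+
+  R1 = 0xffffffff12345678   /* 64-bit move */
+  R1 = (u32)R1 + 1          /* 32-bit ALU op writes the subregister;   */
+                            /* R1 is now 0x0000000012345679, i.e. the  */
+                            /* upper 32 bits were zeroed by the write  */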
+
+Original BPF and extended BPF are two-operand instruction sets, which helps
+to do a one-to-one mapping between a BPF insn and an x86 insn during JIT.
+
+Extended BPF doesn't have a pre-defined endianness, so as not to favor one
+architecture over another; therefore a bswap insn is available.
+Original BPF doesn't have such an insn and does the bswap as part of the
+sk_load_word call, which is often unnecessary if we only want to compare the
+value with a constant.
+Restricted C code might be written differently depending on endianness,
+and GCC/LLVM-bpf will take an endianness flag.
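+
+For example, restricted C that compares a big-endian on-the-wire field with
+a constant can let the compiler convert the constant at build time instead
+of swapping the loaded value at run time (the field used here is purely
+illustrative):
+
+  if (ctx->protocol == htons(ETH_P_IP))
+          return 1;   /* no bswap insn needed on the loaded value */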
+
+32-bit architectures run 64-bit extended BPF programs via the interpreter.
+Their JITs may convert BPF programs that only use 32-bit subregs into the
+native instruction set and let the rest be interpreted.
+
+Extended BPF is 64-bit because on 64-bit architectures pointers are 64-bit
+and we want to pass 64-bit values into and out of kernel functions. With
+32-bit BPF registers we would have to define a register-pair ABI, there
+would be no direct BPF-register-to-HW-register mapping, and the JIT would
+need to do combine/split/move operations for every register going in and
+out of a function, which is complex, bug-prone and slow.
+Another reason is atomic 64-bit counters.
+
+Just like original BPF, extended BPF is safe and deterministic, and the
+kernel can easily prove that. The safety of the program is determined in
+two steps.
+The first step does a depth-first search to disallow loops and performs
+other CFG validation.
+The second step starts from the first insn and descends all possible paths.
+It simulates execution of every insn and observes the state change of
+registers and stack.
+At the start of the program the register R1 contains a pointer to the
+context and has type PTR_TO_CTX. If the checker sees an insn that does
+R2=R1, then R2 now has type PTR_TO_CTX as well and can be used on the right
+hand side of an expression.
+If R1=PTR_TO_CTX and the insn is R2=R1+1, then R2=INVALID_PTR and it is
+readable.
+If a register was never written to, it's not readable.
+After a kernel function call, R1-R5 are reset to unreadable and R0 has the
+return type of the function. Since R6-R9 are callee-saved, their state is
+preserved across the call.
+Load/store instructions are allowed only with registers of valid types,
+which are PTR_TO_CTX, PTR_TO_TABLE, PTR_TO_STACK. They are bounds and
+alignment checked.
+
+The input context pointer is generic; its contents are defined by the
+specific use case.
+For seccomp, R1 points to seccomp_data.
+For converted BPF filters, R1 points to the skb.
+Through the get_context_access callback the BPF checker is customized, so
+that a BPF program can only access certain fields of the input context with
+a specified size and alignment.
+For example, the following insn:
+  BPF_INSN_LD(BPF_W, R0, R6, 8)
+intends to load a word from address R6 + 8 and store it into R0.
+If R6=PTR_TO_CTX, then the get_context_access callback should let the
+checker know that offset 8 of size 4 bytes can be accessed for reading;
+otherwise the checker will reject the program.
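+
+For instance, a get_context_access callback for the seccomp case could allow
+any aligned 4-byte read inside struct seccomp_data (the signature below is
+hypothetical and only meant to illustrate the idea):
+
+  static bool seccomp_ctx_access(int off, int size)
+  {
+          return size == 4 && off >= 0 && (off & 3) == 0 &&
+                 off + size <= sizeof(struct seccomp_data);
+  }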
+
+If R6=PTR_TO_STACK, then the access should be aligned and within the stack
+bounds, which are hard-coded to [-512, 0]. In this example the offset is 8,
+so it will fail verification.
+The checker will allow a BPF program to read data from the stack only after
+it has written into it.
+Pointer register spill/fill is tracked as well, since the four callee-saved
+registers (R6-R9) may not be enough for some programs.
+
+Allowed function calls are customized via the get_func_proto callback.
+
+Among the useful functions that can be made available to a BPF program are
+bpf_table_lookup/bpf_table_update.
+They can help tracing filters collect different types of statistics, for
+example pc addresses for a drop_monitor filter.
+
+In the seccomp and socket filter use cases an extended BPF program consists
+of instructions only, but in the tracing filter case a BPF program may
+contain BPF tables as well.
+There are no special instructions to access BPF tables; the access is done
+via function calls.
+
+A BPF program identifies a table by table_id and accesses it in C like:
+elem = bpf_table_lookup(ctx, table_id, key);
+
+The BPF checker matches 'table_id' against known tables and verifies that
+'key' points to the stack and that table->key_size bytes are initialized.
+bpf_table_lookup() is a normal kernel function. It needs to do a lookup and
+return either a valid pointer to the element or NULL.
+The BPF checker will verify that the program accesses the pointer only
+after comparing it to NULL.
+It's up to the implementation to decide how the lookup is done and what the
+key means.
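+
+Putting it together, a minimal restricted-C sketch of the access pattern the
+checker expects; the element layout (a plain u64 counter) and the mapping of
+the __sync builtin to the atomic_add insn are assumptions made for this
+example:
+
+  u64 *elem = bpf_table_lookup(ctx, table_id, key);
+
+  if (elem)                             /* must compare to NULL before use */
+          __sync_fetch_and_add(elem, 1);   /* candidate for atomic_add insn */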
+
+Just like the original, extended BPF is limited to 4096 insns, which means
+that any program will terminate quickly and will call a fixed number of
+kernel functions.
+
 Misc
 ----
 
@@ -561,3 +741,4 @@ the underlying architecture.
 
 Jay Schulist <jschlst@samba.org>
 Daniel Borkmann <dborkman@redhat.com>
+Alexei Starovoitov <ast@plumgrid.com>
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH v10 net-next 1/3] filter: add Extended BPF interpreter and converter
  2014-03-12 21:43 ` [PATCH v10 net-next 1/3] filter: add Extended BPF interpreter and converter Alexei Starovoitov
@ 2014-03-14 12:58   ` Pablo Neira Ayuso
  2014-03-14 15:37     ` Alexei Starovoitov
  0 siblings, 1 reply; 10+ messages in thread
From: Pablo Neira Ayuso @ 2014-03-14 12:58 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry,
	Steven Rostedt, Peter Zijlstra, H. Peter Anvin,
	Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Eric Dumazet,
	Linus Torvalds, Andrew Morton, Frederic Weisbecker,
	Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven,
	Christoph Hellwig, Pavel Emelyanov, linux-kernel, netdev

On Wed, Mar 12, 2014 at 02:43:32PM -0700, Alexei Starovoitov wrote:
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index e568c8ef896b..6e6aab5e062b 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -25,20 +25,45 @@ struct sock;
>  struct sk_filter
>  {
>  	atomic_t		refcnt;
> -	unsigned int         	len;	/* Number of filter blocks */
> +	/* len - number of insns in sock_filter program
> +	 * len_ext - number of insns in socket_filter_ext program
> +	 * jited - true if either original or extended program was JITed
> +	 * orig_prog - original sock_filter program if not NULL
> +	 */
> +	unsigned int		len;
> +	unsigned int		len_ext;
> +	unsigned int		jited:1;

This is consuming 4 bytes just to store the jited bit. I think you can
scratch that bit from len, given the maximum filter length for bpf. I
think the jited bit change that David suggested has to come first, as a
separate patch in the series.

> +	struct sock_filter	*orig_prog;

If your new extended filtering is not used, this consumes 8 extra
bytes + len_ext (bytes) on x86_64. I think a more generic way to do
this is to move the original bpf filter and its length to the bottom of
this structure, after insns, storing something like:

struct sk_bpf_compat {
        struct sock_filter      *prog;
        unsigned int            len;
};

This would only be allocated when your filtering approach is used. For
that you'll need some enum in sk_filter to indicate the filtering
approach, but we'd save 8 bytes per filter compared to this current
patch.

>  	struct rcu_head		rcu;
> -	unsigned int		(*bpf_func)(const struct sk_buff *skb,
> -					    const struct sock_filter *filter);
> +	union {
> +		unsigned int (*bpf_func)(const struct sk_buff *skb,
> +					 const struct sock_filter *fp);
> +		unsigned int (*bpf_func_ext)(const struct sk_buff *skb,
> +					     const struct sock_filter_ext *fp);
> +	};
>  	union {
>  		struct sock_filter     	insns[0];
> +		struct sock_filter_ext	insns_ext[0];
>  		struct work_struct	work;
>  	};
>  };
>  

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v10 net-next 1/3] filter: add Extended BPF interpreter and converter
  2014-03-14 12:58   ` Pablo Neira Ayuso
@ 2014-03-14 15:37     ` Alexei Starovoitov
  2014-03-14 19:51       ` Alexei Starovoitov
  0 siblings, 1 reply; 10+ messages in thread
From: Alexei Starovoitov @ 2014-03-14 15:37 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry,
	Steven Rostedt, Peter Zijlstra, H. Peter Anvin,
	Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Eric Dumazet,
	Linus Torvalds, Andrew Morton, Frederic Weisbecker,
	Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven,
	Christoph Hellwig, Pavel Emelyanov, LKML, Network Development

On Fri, Mar 14, 2014 at 5:58 AM, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> On Wed, Mar 12, 2014 at 02:43:32PM -0700, Alexei Starovoitov wrote:
>> diff --git a/include/linux/filter.h b/include/linux/filter.h
>> index e568c8ef896b..6e6aab5e062b 100644
>> --- a/include/linux/filter.h
>> +++ b/include/linux/filter.h
>> @@ -25,20 +25,45 @@ struct sock;
>>  struct sk_filter
>>  {
>>       atomic_t                refcnt;
>> -     unsigned int            len;    /* Number of filter blocks */
>> +     /* len - number of insns in sock_filter program
>> +      * len_ext - number of insns in socket_filter_ext program
>> +      * jited - true if either original or extended program was JITed
>> +      * orig_prog - original sock_filter program if not NULL
>> +      */
>> +     unsigned int            len;
>> +     unsigned int            len_ext;
>> +     unsigned int            jited:1;
>
> This is consuming 4 bytes just to store the jited bit. I think you can
> scratch that bit from len, given the maximum filter length for bpf. I
> think the the jited bit change that David suggested have to come in
> first place as a separated patch in the series.

It was reviewed so many times that I would prefer not to break it
apart just to split out the single 'jited' bitfield, though I agree with
taking one bit from len.
I actually proposed that in the 'bool vs bitfield' thread a few days ago.
I think it can be done as a separate commit after this one goes in.

>> +     struct sock_filter      *orig_prog;
>
> If your new extended filtering is not used, this consumes 8 extra
> bytes + len_ext (bytes) in x86_64. I think a more generic way to make
> this is that you can move the original bpf filter and its length at
> the bottom of this structure after insns to store something like:
>
> struct sk_bpf_compat {
>         struct sock_filter      *prog;
>         unsigned int            len;
> };
>
> This would be only allocated when you filtering approach is used. For
> that you'll need some enum in sk_filter to indicate the filtering
> approach, but we'll save 8 bytes per filter in the end with regards to
> this current patch.

This can also be done as a separate commit after this one.
Though I don't like the idea, because access to 'prog' and 'len'
becomes very complicated: in every place we would need a helper
function to calculate the offset to this 'sk_bpf_compat',
then typecast that memory location, etc.
Imo a single pointer is much cleaner.

Thanks
Alexei

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v10 net-next 1/3] filter: add Extended BPF interpreter and converter
  2014-03-14 15:37     ` Alexei Starovoitov
@ 2014-03-14 19:51       ` Alexei Starovoitov
  2014-03-14 20:08         ` David Miller
  0 siblings, 1 reply; 10+ messages in thread
From: Alexei Starovoitov @ 2014-03-14 19:51 UTC (permalink / raw)
  To: Pablo Neira Ayuso
  Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry,
	Steven Rostedt, Peter Zijlstra, H. Peter Anvin,
	Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Eric Dumazet,
	Linus Torvalds, Andrew Morton, Frederic Weisbecker,
	Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven,
	Christoph Hellwig, Pavel Emelyanov, LKML, Network Development

On Fri, Mar 14, 2014 at 8:37 AM, Alexei Starovoitov <ast@plumgrid.com> wrote:
> On Fri, Mar 14, 2014 at 5:58 AM, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
>> On Wed, Mar 12, 2014 at 02:43:32PM -0700, Alexei Starovoitov wrote:
>>> diff --git a/include/linux/filter.h b/include/linux/filter.h
>>> index e568c8ef896b..6e6aab5e062b 100644
>>> --- a/include/linux/filter.h
>>> +++ b/include/linux/filter.h
>>> @@ -25,20 +25,45 @@ struct sock;
>>>  struct sk_filter
>>>  {
>>>       atomic_t                refcnt;
>>> -     unsigned int            len;    /* Number of filter blocks */
>>> +     /* len - number of insns in sock_filter program
>>> +      * len_ext - number of insns in socket_filter_ext program
>>> +      * jited - true if either original or extended program was JITed
>>> +      * orig_prog - original sock_filter program if not NULL
>>> +      */
>>> +     unsigned int            len;
>>> +     unsigned int            len_ext;
>>> +     unsigned int            jited:1;
>>
>> This is consuming 4 bytes just to store the jited bit. I think you can
>> scratch that bit from len, given the maximum filter length for bpf. I
>> think the the jited bit change that David suggested have to come in
>> first place as a separated patch in the series.
>
> It was reviewed so many times that I would prefer not to break it
> apart just to split it for single 'jited' bitfield, though I agree with taking
> one bit from len.
> I actually proposed it in 'bool vs bitfield' thread few days ago.
> I think it can be done as a separate commit after this one goes in.
>
>>> +     struct sock_filter      *orig_prog;
>>
>> If your new extended filtering is not used, this consumes 8 extra
>> bytes + len_ext (bytes) in x86_64. I think a more generic way to make
>> this is that you can move the original bpf filter and its length at
>> the bottom of this structure after insns to store something like:
>>
>> struct sk_bpf_compat {
>>         struct sock_filter      *prog;
>>         unsigned int            len;
>> };
>>
>> This would be only allocated when you filtering approach is used. For
>> that you'll need some enum in sk_filter to indicate the filtering
>> approach, but we'll save 8 bytes per filter in the end with regards to
>> this current patch.
>
> this is also can be done as separate commit after this one.
> Though I don't like the idea, because access to 'prog' and 'len'
> becomes very complicated. In every place we need a helper
> function to calculate an offset to this 'sk_bpf_compat',
> then typecast that memory location, etc.
> Imo single pointer is much cleaner.
>
> Thanks
> Alexei

Hi David,

can you please explain why the status of these
patches is 'deferred' in patchwork?
Is it because of the bpf vs nft thread?
I think that's orthogonal.
First of all, ebpf and nft are not really comparable:
ebpf is a low-level assembler,
whereas nft is a high-level state machine.
As I was saying, nft can be accelerated by ebpf.
Even without accelerated nft, ebpf makes for faster seccomp,
tracing filters, ovs, etc.
The nft state machine is not applicable to such tasks.
It feels odd even trying to compare them.
They serve different purposes:
a general-purpose assembler vs a packet classifier? Just different.
I'm not sure what the concern is here.

Thanks
Alexei

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v10 net-next 1/3] filter: add Extended BPF interpreter and converter
  2014-03-14 19:51       ` Alexei Starovoitov
@ 2014-03-14 20:08         ` David Miller
  2014-03-15 19:53           ` Daniel Borkmann
  0 siblings, 1 reply; 10+ messages in thread
From: David Miller @ 2014-03-14 20:08 UTC (permalink / raw)
  To: ast
  Cc: pablo, dborkman, mingo, wad, rostedt, a.p.zijlstra, hpa, hagen,
	jesse, tglx, edumazet, torvalds, akpm, fweisbec, acme, penberg,
	arjan, hch, xemul, linux-kernel, netdev

From: Alexei Starovoitov <ast@plumgrid.com>
Date: Fri, 14 Mar 2014 12:51:17 -0700

> can you please explain why the status of these
> patches is 'deferred' in patchwork ?
> Is it because of bpf vs nft thread?
> I think that's orthogonal.

I do not find it orthogonal, Pablo brings up some very valid points
which I agree with.

EBPF has a lot of the same user side interface limitations that the
existing BPF has, and you refuse to accept this core point Pablo is
making.

That is, that it lacks extensibility, and is too strongly tied to the
implementation.

This is exactly how we run into problems in the future, and you'll be
proposing EBPF_2.0 to address such problems.

I refuse to set up such a situation, and there is absolutely no rush
whatsoever to apply your patches.  We can take our time with this
stuff.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v10 net-next 1/3] filter: add Extended BPF interpreter and converter
  2014-03-14 20:08         ` David Miller
@ 2014-03-15 19:53           ` Daniel Borkmann
  2014-03-17  9:16             ` Pablo Neira Ayuso
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Borkmann @ 2014-03-15 19:53 UTC (permalink / raw)
  To: David Miller
  Cc: ast, pablo, mingo, wad, rostedt, a.p.zijlstra, hpa, hagen, jesse,
	tglx, edumazet, torvalds, akpm, fweisbec, acme, penberg, arjan,
	hch, xemul, linux-kernel, netdev

On 03/14/2014 09:08 PM, David Miller wrote:
> From: Alexei Starovoitov <ast@plumgrid.com>
> Date: Fri, 14 Mar 2014 12:51:17 -0700
>
>> can you please explain why the status of these
>> patches is 'deferred' in patchwork ?
>> Is it because of bpf vs nft thread?
>> I think that's orthogonal.
>
> I do not find it orthogonal, Pablo brings up some very valid points
> which I agree with.
>
> EBPF has a lot of the same user side interface limitations that the
> existing BPF has, and you refuse to accept this core point Pablo is
> making.
>
> That is, that it lacks extensibility, and is too strongly tied to the
> implementation.
>
> This is exactly how we run into problems in the future, and you'll be
> proposing EBPF_2.0 to address such problems.

Hm, so currently there's no interface where this is exposed to uapi,
and we surely can and should put the definitions back to the non-uapi
include to keep it inside the kernel, you're right.

I think, at least for me, the take-away of Alexei's work is that
even (if we assume) without any further functionality, the new design
would greatly improve interpreter (and presumably, later on, JIT)
performance based on Alexei's benchmarks, which would already
be a win for seccomp and socket filters and wherever they are being
used across the networking subsystem, and therefore out of the box,
without any changes for user-space applications such as libpcap.

I was thinking that it could be an option to make this transparently
available to everyone by just dropping the bpf_ext_enable knob, and
perhaps just replacing the old BPF interpreter entirely in this set?
So the process would be: 1) test if the normal BPF filter can be JIT'ed
and if so go for it; 2) if it's not supported by the JIT (or if it is
disabled), run it transparently in the new (non-exposed) BPF
representation to get better overall performance.

Would that perhaps address the above concern? So on the big picture,
it provides a BPF performance improvement. I think if there's a wish
to extend the socket filtering api to run alternative interpreters,
such as nft, then that could still happen, of course.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v10 net-next 1/3] filter: add Extended BPF interpreter and converter
  2014-03-15 19:53           ` Daniel Borkmann
@ 2014-03-17  9:16             ` Pablo Neira Ayuso
  0 siblings, 0 replies; 10+ messages in thread
From: Pablo Neira Ayuso @ 2014-03-17  9:16 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: David Miller, ast, mingo, wad, rostedt, a.p.zijlstra, hpa, hagen,
	jesse, tglx, edumazet, torvalds, akpm, fweisbec, acme, penberg,
	arjan, hch, xemul, linux-kernel, netdev

On Sat, Mar 15, 2014 at 08:53:55PM +0100, Daniel Borkmann wrote:
> On 03/14/2014 09:08 PM, David Miller wrote:
> >From: Alexei Starovoitov <ast@plumgrid.com>
> >Date: Fri, 14 Mar 2014 12:51:17 -0700
> >
> >>can you please explain why the status of these
> >>patches is 'deferred' in patchwork ?
> >>Is it because of bpf vs nft thread?
> >>I think that's orthogonal.
> >
> >I do not find it orthogonal, Pablo brings up some very valid points
> >which I agree with.
> >
> >EBPF has a lot of the same user side interface limitations that the
> >existing BPF has, and you refuse to accept this core point Pablo is
> >making.
> >
> >That is, that it lacks extensibility, and is too strongly tied to the
> >implementation.
> >
> >This is exactly how we run into problems in the future, and you'll be
> >proposing EBPF_2.0 to address such problems.
> 
> Hm, so currently there's no interface where this is exposed to uapi,
> and we surely can and should put the definitions back to the non-uapi
> include to keep it inside the kernel, you're right.

Yes please, move that somewhere else to avoid exposing internal
implementation details to userspace.

I also don't find a good reason to add the new /proc user interface
switch to enable/disable the conversion to the new internal
representation that this patch adds. I think benchmarking the old and
new approaches is *not* a good reason to expose that to userspace. My
impression is that, without that /proc switch, the patch will be
simpler.

> I think, at least for me, the take-away of Alexei's work is, that
> even (if we assume) without any further functionality, the new design
> would greatly improve the interpreter (and presumably later on as
> well JIT) performance based on Alexei's benchmarks, which would already
> be a win for seccomp and socket filters and where ever they are being
> used across the networking subsystem, and therefore out-of-the-box
> without any changes for user space applications such as libpcap.
> 
> I was thinking that it could be an option to make this transparently
> available to everyone, by just dropping the bpf_ext_enable knob, and
> perhaps just replace the old BPF interpreter entirely in this set?
> So the process would be: 1) test if normal BPF filter can be JIT'ed,
> go for it, if it's not supported by JIT (or if it is disabled), run
> it transparently in the new (non-exposed) BPF representation to have
> a better overall performance.

That makes sense to me. If the purpose is to keep this as an internal
representation, then the decisions on how to represent the filter to
boost performance should remain in kernel space; I don't find a good
reason for keeping that switch.  Please, remove it.

Moreover, as we discussed already, the new jited flag should be taken
from a bit of the len attribute of the sk_filter object. That should be
sent as an initial patch in the series; David requested that change and
he can take it already since it's independent from this.

> Would that perhaps address the above concern? So on the big picture,
> it provides a BPF performance improvement. I think if there's a wish
> to extend the socket filtering api to run alternative interpreters,
> such as nft, then that could still happen, of course.

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2014-03-17  9:16 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-03-12 21:43 [PATCH v10 net-next 0/3] filter: add Extended BPF interpreter and converter, seccomp Alexei Starovoitov
2014-03-12 21:43 ` [PATCH v10 net-next 1/3] filter: add Extended BPF interpreter and converter Alexei Starovoitov
2014-03-14 12:58   ` Pablo Neira Ayuso
2014-03-14 15:37     ` Alexei Starovoitov
2014-03-14 19:51       ` Alexei Starovoitov
2014-03-14 20:08         ` David Miller
2014-03-15 19:53           ` Daniel Borkmann
2014-03-17  9:16             ` Pablo Neira Ayuso
2014-03-12 21:43 ` [PATCH v10 net-next 2/3] seccomp: convert seccomp to use extended BPF Alexei Starovoitov
2014-03-12 21:43 ` [PATCH v10 net-next 3/3] doc: filter: add Extended BPF documentation Alexei Starovoitov
